# VITS

## Overview

The VITS model was proposed in [Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech](https://arxiv.org/abs/2106.06103) by Jaehyeon Kim, Jungil Kong, Juhee Son.

VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.

A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers, much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to synthesise speech with different rhythms from the same input text.

The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training. To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor, the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.

The abstract from the paper is the following:

_Several recent end-to-end text-to-speech (TTS) models enabling single-stage training and parallel sampling have been proposed, but their sample quality does not match that of two-stage TTS systems. In this work, we present a parallel end-to-end TTS method that generates more natural sounding audio than current two-stage models. Our method adopts variational inference augmented with normalizing flows and an adversarial training process, which improves the expressive power of generative modeling. We also propose a stochastic duration predictor to synthesize speech with diverse rhythms from input text. With the uncertainty modeling over latent variables and the stochastic duration predictor, our method expresses the natural one-to-many relationship in which a text input can be spoken in multiple ways with different pitches and rhythms. A subjective human evaluation (mean opinion score, or MOS) on the LJ Speech, a single speaker dataset, shows that our method outperforms the best publicly available TTS systems and achieves a MOS comparable to ground truth._

This model can also be used with TTS checkpoints from [Massively Multilingual Speech (MMS)](https://arxiv.org/abs/2305.13516), as these checkpoints use the same architecture and a slightly modified tokenizer.

This model was contributed by [Matthijs](https://huggingface.co/Matthijs) and [sanchit-gandhi](https://huggingface.co/sanchit-gandhi). The original code can be found [here](https://github.com/jaywalnut310/vits).

## Model Usage

Both the VITS and MMS-TTS checkpoints can be used with the same API. Since the flow-based model is non-deterministic, it is good practice to set a seed to ensure reproducibility of the outputs.
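The effect of the seed can be checked directly: two forward passes with the same seed should produce an identical waveform, while a different seed gives a different rendition of the same text. The following is a minimal sketch of that check (the seed values are arbitrary):

```
import torch
from transformers import VitsTokenizer, VitsModel, set_seed

tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-eng")
model = VitsModel.from_pretrained("facebook/mms-tts-eng")
inputs = tokenizer(text="Hello - my dog is cute", return_tensors="pt")

# Two runs with the same seed should give the same waveform ...
set_seed(555)
with torch.no_grad():
    first = model(**inputs).waveform

set_seed(555)
with torch.no_grad():
    second = model(**inputs).waveform

print(torch.allclose(first, second))  # expected: True

# ... while a different seed samples a different duration and waveform.
set_seed(42)
with torch.no_grad():
    third = model(**inputs).waveform

print(first.shape == third.shape)  # may differ, since durations are sampled
```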
For languages with a Roman alphabet, such as English or French, the tokenizer can be used directly to pre-process the text inputs. The following code example runs a forward pass using the MMS-TTS English checkpoint:

```
import torch
from transformers import VitsTokenizer, VitsModel, set_seed

tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-eng")
model = VitsModel.from_pretrained("facebook/mms-tts-eng")

inputs = tokenizer(text="Hello - my dog is cute", return_tensors="pt")

set_seed(555)  # make deterministic

with torch.no_grad():
    outputs = model(**inputs)

waveform = outputs.waveform[0]
```

The resulting waveform can be saved as a `.wav` file:

```
import scipy.io.wavfile

scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=waveform.numpy())
```

Or displayed in a Jupyter Notebook / Google Colab:

```
from IPython.display import Audio

Audio(waveform, rate=model.config.sampling_rate)
```

For certain languages with a non-Roman alphabet, such as Arabic, Mandarin or Hindi, the [`uroman`](https://github.com/isi-nlp/uroman) perl package is required to pre-process the text inputs to the Roman alphabet.

You can check whether you require the `uroman` package for your language by inspecting the `is_uroman` attribute of the pre-trained `tokenizer`:

```
from transformers import VitsTokenizer

tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-eng")
print(tokenizer.is_uroman)
```

If required, you should apply the uroman package to your text inputs **prior** to passing them to the `VitsTokenizer`, since currently the tokenizer does not support performing the pre-processing itself.

To do this, first clone the uroman repository to your local machine and set the bash variable `UROMAN` to the local path:

```
git clone https://github.com/isi-nlp/uroman.git
cd uroman
export UROMAN=$(pwd)
```

You can then pre-process the text input using the following code snippet.
You can either rely on using the bash variable `UROMAN` to point to the uroman repository, or you can pass the uroman directory as an argument to the `uromanize` function:

```
import os
import subprocess

import torch
from transformers import VitsTokenizer, VitsModel, set_seed

tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-kor")
model = VitsModel.from_pretrained("facebook/mms-tts-kor")


def uromanize(input_string, uroman_path):
    """Convert non-Roman strings to Roman using the `uroman` perl package."""
    script_path = os.path.join(uroman_path, "bin", "uroman.pl")

    command = ["perl", script_path]

    process = subprocess.Popen(command, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    # Execute the perl command and capture its output
    stdout, stderr = process.communicate(input=input_string.encode())

    if process.returncode != 0:
        raise ValueError(f"Error {process.returncode}: {stderr.decode()}")

    # Return the output as a string, dropping the trailing newline
    return stdout.decode()[:-1]


text = "이봐 무슨 일이야"
uromanized_text = uromanize(text, uroman_path=os.environ["UROMAN"])

inputs = tokenizer(text=uromanized_text, return_tensors="pt")

set_seed(555)  # make deterministic
with torch.no_grad():
    outputs = model(inputs["input_ids"])

waveform = outputs.waveform[0]
```

## VitsConfig

### class transformers.VitsConfig

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vits/configuration_vits.py#L29)

( vocab\_size = 38 hidden\_size = 192 num\_hidden\_layers = 6 num\_attention\_heads = 2 window\_size = 4 use\_bias = True ffn\_dim = 768 layerdrop = 0.1 ffn\_kernel\_size = 3 flow\_size = 192 spectrogram\_bins = 513 hidden\_act = 'relu' hidden\_dropout = 0.1 attention\_dropout = 0.1 activation\_dropout = 0.1 initializer\_range = 0.02 layer\_norm\_eps = 1e-05 use\_stochastic\_duration\_prediction = True num\_speakers = 1 speaker\_embedding\_size = 0 upsample\_initial\_channel = 512 upsample\_rates = \[8, 8, 2, 2\] upsample\_kernel\_sizes = \[16, 16, 4, 4\] resblock\_kernel\_sizes = \[3, 7, 11\] resblock\_dilation\_sizes = \[\[1, 3, 5\], \[1, 3, 5\], \[1, 3, 5\]\] leaky\_relu\_slope = 0.1 depth\_separable\_channels = 2 depth\_separable\_num\_layers = 3 duration\_predictor\_flow\_bins = 10 duration\_predictor\_tail\_bound = 5.0 duration\_predictor\_kernel\_size = 3 duration\_predictor\_dropout = 0.5 duration\_predictor\_num\_flows = 4 duration\_predictor\_filter\_channels = 256 prior\_encoder\_num\_flows = 4 prior\_encoder\_num\_wavenet\_layers = 4 posterior\_encoder\_num\_wavenet\_layers = 16 wavenet\_kernel\_size = 5 wavenet\_dilation\_rate = 1 wavenet\_dropout = 0.0 speaking\_rate = 1.0 noise\_scale = 0.667 noise\_scale\_duration = 0.8 sampling\_rate = 16000 \*\*kwargs )

Parameters

- **vocab\_size** (`int`, _optional_, defaults to 38) — Vocabulary size of the VITS model. Defines the number of different tokens that can be represented by the `input_ids` passed to the forward method of [VitsModel](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsModel).
- **hidden\_size** (`int`, _optional_, defaults to 192) — Dimensionality of the text encoder layers.
- **num\_hidden\_layers** (`int`, _optional_, defaults to 6) — Number of hidden layers in the Transformer encoder.
- **num\_attention\_heads** (`int`, _optional_, defaults to 2) — Number of attention heads for each attention layer in the Transformer encoder.
- **window\_size** (`int`, _optional_, defaults to 4) — Window size for the relative positional embeddings in the attention layers of the Transformer encoder.
- **use\_bias** (`bool`, _optional_, defaults to `True`) — Whether to use bias in the key, query, value projection layers in the Transformer encoder.
- **ffn\_dim** (`int`, _optional_, defaults to 768) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
- **layerdrop** (`float`, _optional_, defaults to 0.1) — The LayerDrop probability for the encoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more details.
- **ffn\_kernel\_size** (`int`, _optional_, defaults to 3) — Kernel size of the 1D convolution layers used by the feed-forward network in the Transformer encoder.
- **flow\_size** (`int`, _optional_, defaults to 192) — Dimensionality of the flow layers.
- **spectrogram\_bins** (`int`, _optional_, defaults to 513) — Number of frequency bins in the target spectrogram.
- **hidden\_act** (`str` or `function`, _optional_, defaults to `"relu"`) — The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported.
- **hidden\_dropout** (`float`, _optional_, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings and encoder.
- **attention\_dropout** (`float`, _optional_, defaults to 0.1) — The dropout ratio for the attention probabilities.
- **activation\_dropout** (`float`, _optional_, defaults to 0.1) — The dropout ratio for activations inside the fully connected layer.
- **initializer\_range** (`float`, _optional_, defaults to 0.02) — The standard deviation of the truncated\_normal\_initializer for initializing all weight matrices.
- **layer\_norm\_eps** (`float`, _optional_, defaults to 1e-5) — The epsilon used by the layer normalization layers.
- **use\_stochastic\_duration\_prediction** (`bool`, _optional_, defaults to `True`) — Whether to use the stochastic duration prediction module or the regular duration predictor.
- **num\_speakers** (`int`, _optional_, defaults to 1) — Number of speakers if this is a multi-speaker model.
- **speaker\_embedding\_size** (`int`, _optional_, defaults to 0) — Number of channels used by the speaker embeddings. Is zero for single-speaker models.
- **upsample\_initial\_channel** (`int`, _optional_, defaults to 512) — The number of input channels into the HiFi-GAN upsampling network.
- **upsample\_rates** (`Tuple[int]` or `List[int]`, _optional_, defaults to `[8, 8, 2, 2]`) — A tuple of integers defining the stride of each 1D convolutional layer in the HiFi-GAN upsampling network. The length of `upsample_rates` defines the number of convolutional layers and has to match the length of `upsample_kernel_sizes`.
- **upsample\_kernel\_sizes** (`Tuple[int]` or `List[int]`, _optional_, defaults to `[16, 16, 4, 4]`) — A tuple of integers defining the kernel size of each 1D convolutional layer in the HiFi-GAN upsampling network. The length of `upsample_kernel_sizes` defines the number of convolutional layers and has to match the length of `upsample_rates`.
- **resblock\_kernel\_sizes** (`Tuple[int]` or `List[int]`, _optional_, defaults to `[3, 7, 11]`) — A tuple of integers defining the kernel sizes of the 1D convolutional layers in the HiFi-GAN multi-receptive field fusion (MRF) module.
- **resblock\_dilation\_sizes** (`Tuple[Tuple[int]]` or `List[List[int]]`, _optional_, defaults to `[[1, 3, 5], [1, 3, 5], [1, 3, 5]]`) — A nested tuple of integers defining the dilation rates of the dilated 1D convolutional layers in the HiFi-GAN multi-receptive field fusion (MRF) module.
- **leaky\_relu\_slope** (`float`, _optional_, defaults to 0.1) — The angle of the negative slope used by the leaky ReLU activation.
- **depth\_separable\_channels** (`int`, _optional_, defaults to 2) — Number of channels to use in each depth-separable block.
- **depth\_separable\_num\_layers** (`int`, _optional_, defaults to 3) — Number of convolutional layers to use in each depth-separable block.
- **duration\_predictor\_flow\_bins** (`int`, _optional_, defaults to 10) — Number of channels to map using the unconstrained rational spline in the duration predictor model.
- **duration\_predictor\_tail\_bound** (`float`, _optional_, defaults to 5.0) — Value of the tail bin boundary when computing the unconstrained rational spline in the duration predictor model.
- **duration\_predictor\_kernel\_size** (`int`, _optional_, defaults to 3) — Kernel size of the 1D convolution layers used in the duration predictor model.
- **duration\_predictor\_dropout** (`float`, _optional_, defaults to 0.5) — The dropout ratio for the duration predictor model.
- **duration\_predictor\_num\_flows** (`int`, _optional_, defaults to 4) — Number of flow stages used by the duration predictor model.
- **duration\_predictor\_filter\_channels** (`int`, _optional_, defaults to 256) — Number of channels for the convolution layers used in the duration predictor model.
- **prior\_encoder\_num\_flows** (`int`, _optional_, defaults to 4) — Number of flow stages used by the prior encoder flow model.
- **prior\_encoder\_num\_wavenet\_layers** (`int`, _optional_, defaults to 4) — Number of WaveNet layers used by the prior encoder flow model.
- **posterior\_encoder\_num\_wavenet\_layers** (`int`, _optional_, defaults to 16) — Number of WaveNet layers used by the posterior encoder model.
- **wavenet\_kernel\_size** (`int`, _optional_, defaults to 5) — Kernel size of the 1D convolution layers used in the WaveNet model.
- **wavenet\_dilation\_rate** (`int`, _optional_, defaults to 1) — Dilation rates of the dilated 1D convolutional layers used in the WaveNet model.
- **wavenet\_dropout** (`float`, _optional_, defaults to 0.0) — The dropout ratio for the WaveNet layers.
- **speaking\_rate** (`float`, _optional_, defaults to 1.0) — Speaking rate. Larger values give faster synthesised speech.
- **noise\_scale** (`float`, _optional_, defaults to 0.667) — How random the speech prediction is. Larger values create more variation in the predicted speech.
- **noise\_scale\_duration** (`float`, _optional_, defaults to 0.8) — How random the duration prediction is. Larger values create more variation in the predicted durations.
- **sampling\_rate** (`int`, _optional_, defaults to 16000) — The sampling rate at which the output audio waveform is digitalized, expressed in hertz (Hz).

This is the configuration class to store the configuration of a [VitsModel](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsModel). It is used to instantiate a VITS model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the VITS [facebook/mms-tts-eng](https://huggingface.co/facebook/mms-tts-eng) architecture.
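Note that `speaking_rate`, `noise_scale` and `noise_scale_duration` control synthesis behaviour at inference time rather than the architecture. A minimal sketch of tuning them, assuming that overriding the attributes on the configuration before loading the pretrained checkpoint is sufficient (the chosen values are illustrative only):

```
from transformers import VitsConfig, VitsModel

# Load the pretrained configuration and adjust the inference-time controls.
config = VitsConfig.from_pretrained("facebook/mms-tts-eng")
config.speaking_rate = 1.3         # > 1.0 gives faster synthesised speech
config.noise_scale = 0.8           # more variation in the predicted speech
config.noise_scale_duration = 0.9  # more variation in the predicted durations

# Instantiate the model with the modified configuration.
model = VitsModel.from_pretrained("facebook/mms-tts-eng", config=config)
```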
Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information.

Example:

```
>>> from transformers import VitsModel, VitsConfig

>>> # Initializing a VITS (facebook/mms-tts-eng style) configuration
>>> configuration = VitsConfig()

>>> # Initializing a model (with random weights) from the configuration
>>> model = VitsModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```

## VitsTokenizer

### class transformers.VitsTokenizer

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vits/tokenization_vits.py#L57)

( vocab\_file pad\_token = '<pad>' unk\_token = '<unk>' language = None add\_blank = True normalize = True phonemize = True is\_uroman = False \*\*kwargs )

Parameters

- **vocab\_file** (`str`) — Path to the vocabulary file.
- **language** (`str`, _optional_) — Language identifier.
- **add\_blank** (`bool`, _optional_, defaults to `True`) — Whether to insert token id 0 in between the other tokens.
- **normalize** (`bool`, _optional_, defaults to `True`) — Whether to normalize the input text by removing all casing and punctuation.
- **phonemize** (`bool`, _optional_, defaults to `True`) — Whether to convert the input text into phonemes.
- **is\_uroman** (`bool`, _optional_, defaults to `False`) — Whether the `uroman` Romanizer needs to be applied to the input text prior to tokenizing.

Construct a VITS tokenizer. Also supports MMS-TTS.

This tokenizer inherits from [PreTrainedTokenizer](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer) which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.

#### \_\_call\_\_

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/tokenization_utils_base.py#L2732)

( text: typing.Union\[str, typing.List\[str\], typing.List\[typing.List\[str\]\]\] = None text\_pair: typing.Union\[str, typing.List\[str\], typing.List\[typing.List\[str\]\], NoneType\] = None text\_target: typing.Union\[str, typing.List\[str\], typing.List\[typing.List\[str\]\]\] = None text\_pair\_target: typing.Union\[str, typing.List\[str\], typing.List\[typing.List\[str\]\], NoneType\] = None add\_special\_tokens: bool = True padding: typing.Union\[bool, str, transformers.utils.generic.PaddingStrategy\] = False truncation: typing.Union\[bool, str, transformers.tokenization\_utils\_base.TruncationStrategy\] = None max\_length: typing.Optional\[int\] = None stride: int = 0 is\_split\_into\_words: bool = False pad\_to\_multiple\_of: typing.Optional\[int\] = None return\_tensors: typing.Union\[str, transformers.utils.generic.TensorType, NoneType\] = None return\_token\_type\_ids: typing.Optional\[bool\] = None return\_attention\_mask: typing.Optional\[bool\] = None return\_overflowing\_tokens: bool = False return\_special\_tokens\_mask: bool = False return\_offsets\_mapping: bool = False return\_length: bool = False verbose: bool = True \*\*kwargs ) → [BatchEncoding](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.BatchEncoding)

Parameters

- **text** (`str`, `List[str]`, `List[List[str]]`, _optional_) — The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings (pretokenized string).
  If the sequences are provided as list of strings (pretokenized), you must set `is_split_into_words=True` (to lift the ambiguity with a batch of sequences).
- **text\_pair** (`str`, `List[str]`, `List[List[str]]`, _optional_) — The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set `is_split_into_words=True` (to lift the ambiguity with a batch of sequences).
- **text\_target** (`str`, `List[str]`, `List[List[str]]`, _optional_) — The sequence or batch of sequences to be encoded as target texts. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set `is_split_into_words=True` (to lift the ambiguity with a batch of sequences).
- **text\_pair\_target** (`str`, `List[str]`, `List[List[str]]`, _optional_) — The sequence or batch of sequences to be encoded as target texts. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set `is_split_into_words=True` (to lift the ambiguity with a batch of sequences).
- **add\_special\_tokens** (`bool`, _optional_, defaults to `True`) — Whether or not to add special tokens when encoding the sequences. This will use the underlying `PretrainedTokenizerBase.build_inputs_with_special_tokens` function, which defines which tokens are automatically added to the input ids. This is useful if you want to add `bos` or `eos` tokens automatically.
- **padding** (`bool`, `str` or [PaddingStrategy](/docs/transformers/v4.34.0/en/internal/file_utils#transformers.utils.PaddingStrategy), _optional_, defaults to `False`) — Activates and controls padding. Accepts the following values:
  - `True` or `'longest'`: Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
  - `'max_length'`: Pad to a maximum length specified with the argument `max_length` or to the maximum acceptable input length for the model if that argument is not provided.
  - `False` or `'do_not_pad'` (default): No padding (i.e., can output a batch with sequences of different lengths).
- **truncation** (`bool`, `str` or [TruncationStrategy](/docs/transformers/v4.34.0/en/internal/tokenization_utils#transformers.tokenization_utils_base.TruncationStrategy), _optional_, defaults to `False`) — Activates and controls truncation. Accepts the following values:
  - `True` or `'longest_first'`: Truncate to a maximum length specified with the argument `max_length` or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.
  - `'only_first'`: Truncate to a maximum length specified with the argument `max_length` or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
  - `'only_second'`: Truncate to a maximum length specified with the argument `max_length` or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
  - `False` or `'do_not_truncate'` (default): No truncation (i.e., can output batch with sequence lengths greater than the model maximum admissible input size).
- **max\_length** (`int`, _optional_) — Controls the maximum length to use by one of the truncation/padding parameters. If left unset or set to `None`, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet) truncation/padding to a maximum length will be deactivated.
- **stride** (`int`, _optional_, defaults to 0) — If set to a number along with `max_length`, the overflowing tokens returned when `return_overflowing_tokens=True` will contain some tokens from the end of the truncated sequence returned to provide some overlap between truncated and overflowing sequences. The value of this argument defines the number of overlapping tokens.
- **is\_split\_into\_words** (`bool`, _optional_, defaults to `False`) — Whether or not the input is already pre-tokenized (e.g., split into words). If set to `True`, the tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace) which it will tokenize. This is useful for NER or token classification.
- **pad\_to\_multiple\_of** (`int`, _optional_) — If set will pad the sequence to a multiple of the provided value. Requires `padding` to be activated. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability `>= 7.5` (Volta).
- **return\_tensors** (`str` or [TensorType](/docs/transformers/v4.34.0/en/internal/file_utils#transformers.TensorType), _optional_) — If set, will return tensors instead of list of python integers. Acceptable values are:
  - `'tf'`: Return TensorFlow `tf.constant` objects.
  - `'pt'`: Return PyTorch `torch.Tensor` objects.
  - `'np'`: Return Numpy `np.ndarray` objects.
- **return\_token\_type\_ids** (`bool`, _optional_) — Whether to return token type IDs. If left to the default, will return the token type IDs according to the specific tokenizer’s default, defined by the `return_outputs` attribute. [What are token type IDs?](../glossary#token-type-ids)
- **return\_attention\_mask** (`bool`, _optional_) — Whether to return the attention mask. If left to the default, will return the attention mask according to the specific tokenizer’s default, defined by the `return_outputs` attribute. [What are attention masks?](../glossary#attention-mask)
- **return\_overflowing\_tokens** (`bool`, _optional_, defaults to `False`) — Whether or not to return overflowing token sequences. If a pair of sequences of input ids (or a batch of pairs) is provided with `truncation_strategy = longest_first` or `True`, an error is raised instead of returning overflowing tokens.
- **return\_special\_tokens\_mask** (`bool`, _optional_, defaults to `False`) — Whether or not to return special tokens mask information.
- **return\_offsets\_mapping** (`bool`, _optional_, defaults to `False`) — Whether or not to return `(char_start, char_end)` for each token. This is only available on fast tokenizers inheriting from [PreTrainedTokenizerFast](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast), if using Python’s tokenizer, this method will raise `NotImplementedError`.
- **return\_length** (`bool`, _optional_, defaults to `False`) — Whether or not to return the lengths of the encoded inputs.
- **verbose** (`bool`, _optional_, defaults to `True`) — Whether or not to print more information and warnings.
- **\*\*kwargs** — passed to the `self.tokenize()` method

A [BatchEncoding](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.BatchEncoding) with the following fields:

- **input\_ids** — List of token ids to be fed to a model. [What are input IDs?](../glossary#input-ids)
- **token\_type\_ids** — List of token type ids to be fed to a model (when `return_token_type_ids=True` or if _“token\_type\_ids”_ is in `self.model_input_names`). [What are token type IDs?](../glossary#token-type-ids)
- **attention\_mask** — List of indices specifying which tokens should be attended to by the model (when `return_attention_mask=True` or if _“attention\_mask”_ is in `self.model_input_names`). [What are attention masks?](../glossary#attention-mask)
- **overflowing\_tokens** — List of overflowing tokens sequences (when a `max_length` is specified and `return_overflowing_tokens=True`).
- **num\_truncated\_tokens** — Number of tokens truncated (when a `max_length` is specified and `return_overflowing_tokens=True`).
- **special\_tokens\_mask** — List of 0s and 1s, with 1 specifying added special tokens and 0 specifying regular sequence tokens (when `add_special_tokens=True` and `return_special_tokens_mask=True`).
- **length** — The length of the inputs (when `return_length=True`)

Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of sequences.

#### save\_vocabulary

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vits/tokenization_vits.py#L238)

( save\_directory: str filename\_prefix: typing.Optional\[str\] = None )

## VitsModel

### class transformers.VitsModel

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vits/modeling_vits.py#L1356)

( config: VitsConfig )

Parameters

- **config** ([VitsConfig](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

The complete VITS model, for text-to-speech synthesis.

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.).

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
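Because the model is a standard `torch.nn.Module`, the usual PyTorch device placement and inference-mode handling apply. A minimal sketch, where the device selection and the use of `torch.inference_mode()` are illustrative choices rather than requirements of the API:

```
import torch
from transformers import VitsTokenizer, VitsModel

device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-eng")
model = VitsModel.from_pretrained("facebook/mms-tts-eng").to(device)
model.eval()  # disable dropout and LayerDrop for inference

inputs = tokenizer(text="Hello - my dog is cute", return_tensors="pt").to(device)

with torch.inference_mode():
    waveform = model(**inputs).waveform[0].cpu()
```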
#### forward

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vits/modeling_vits.py#L1386)

( input\_ids: typing.Optional\[torch.Tensor\] = None attention\_mask: typing.Optional\[torch.Tensor\] = None speaker\_id: typing.Optional\[int\] = None output\_attentions: typing.Optional\[bool\] = None output\_hidden\_states: typing.Optional\[bool\] = None return\_dict: typing.Optional\[bool\] = None labels: typing.Optional\[torch.FloatTensor\] = None ) → `transformers.models.vits.modeling_vits.VitsModelOutput` or `tuple(torch.FloatTensor)`

Parameters

- **input\_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.\_\_call\_\_()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details. [What are input IDs?](../glossary#input-ids)
- **attention\_mask** (`torch.Tensor` of shape `(batch_size, sequence_length)`, _optional_) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.
  [What are attention masks?](../glossary#attention-mask)
- **speaker\_id** (`int`, _optional_) — Which speaker embedding to use. Only used for multispeaker models.
- **output\_attentions** (`bool`, _optional_) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output\_hidden\_states** (`bool`, _optional_) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return\_dict** (`bool`, _optional_) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.
- **labels** (`torch.FloatTensor` of shape `(batch_size, config.spectrogram_bins, sequence_length)`, _optional_) — Float values of target spectrogram. Timesteps set to `-100.0` are ignored (masked) for the loss computation.

Returns: `transformers.models.vits.modeling_vits.VitsModelOutput` or `tuple(torch.FloatTensor)`

A `transformers.models.vits.modeling_vits.VitsModelOutput` or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([VitsConfig](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsConfig)) and inputs.

- **waveform** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`) — The final audio waveform predicted by the model.
- **sequence\_lengths** (`torch.FloatTensor` of shape `(batch_size,)`) — The length in samples of each element in the `waveform` batch.
- **spectrogram** (`torch.FloatTensor` of shape `(batch_size, sequence_length, num_bins)`) — The log-mel spectrogram predicted at the output of the flow model. This spectrogram is passed to the HiFi-GAN decoder model to obtain the final audio waveform.
- **hidden\_states** (`tuple(torch.FloatTensor)`, _optional_, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, _optional_, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The [VitsModel](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsModel) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example:

```
>>> from transformers import VitsTokenizer, VitsModel, set_seed
>>> import torch

>>> tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-eng")
>>> model = VitsModel.from_pretrained("facebook/mms-tts-eng")

>>> inputs = tokenizer(text="Hello - my dog is cute", return_tensors="pt")

>>> set_seed(555)  # make deterministic

>>> with torch.no_grad():
...     outputs = model(inputs["input_ids"])
>>> outputs.waveform.shape
torch.Size([1, 45824])
```
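When synthesising several sentences at once, the inputs can be padded to a common length and the returned `sequence_lengths` used to trim the padding from each generated waveform. The following is a minimal sketch of that pattern; batching this way is an assumption about typical usage rather than a requirement of the API:

```
import torch
from transformers import VitsTokenizer, VitsModel, set_seed

tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-eng")
model = VitsModel.from_pretrained("facebook/mms-tts-eng")

sentences = ["Hello - my dog is cute", "The weather is lovely today"]
inputs = tokenizer(text=sentences, padding=True, return_tensors="pt")

set_seed(555)
with torch.no_grad():
    outputs = model(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"])

# `waveform` is padded to the longest sample in the batch; trim each element
# back to its true length in samples using `sequence_lengths`.
waveforms = [
    wav[: int(length)] for wav, length in zip(outputs.waveform, outputs.sequence_lengths)
]
```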
Classes&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Agents and Tools&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/agent&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/agent&quot;},{&quot;title&quot;:&quot;Auto Classes&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/auto&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/auto&quot;},{&quot;title&quot;:&quot;Callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/callback&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/callback&quot;},{&quot;title&quot;:&quot;Configuration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/configuration&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/configuration&quot;},{&quot;title&quot;:&quot;Data Collator&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/data_collator&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/data_collator&quot;},{&quot;title&quot;:&quot;Keras callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/keras_callbacks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/keras_callbacks&quot;},{&quot;title&quot;:&quot;Logging&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/logging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/logging&quot;},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/model&quot;},{&quot;title&quot;:&quot;Text Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/text_generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/text_generation&quot;},{&quot;title&quot;:&quot;ONNX&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/onnx&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/onnx&quot;},{&quot;title&quot;:&quot;Optimization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/optimizer_schedules&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/optimizer_schedules&quot;},{&quot;title&quot;:&quot;Model outputs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/output&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/output&quot;},{&quot;title&quot;:&quot;Pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/pipelines&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/pipelines&quot;},{&quot;title&quot;:&quot;Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/processors&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/processors&quot;},{&quot;title&quot;:&quot;Quantization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/quantization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/quantization&quot;},{&quot;title&quot;:&quot;Tokenizer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/tokenizer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/tokenizer&quot;},{&quot;title&quot;:&quot;Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/trainer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/trainer&quot;},{&quot;title&quot;:&quot;DeepSpeed 
Integration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/deepspeed&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/deepspeed&quot;},{&quot;title&quot;:&quot;Feature Extractor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/feature_extractor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/feature_extractor&quot;},{&quot;title&quot;:&quot;Image Processor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/image_processor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/image_processor&quot;}]},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Text models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALBERT&quot;,&quot;id&quot;:&quot;model_doc/albert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/albert&quot;},{&quot;title&quot;:&quot;BART&quot;,&quot;id&quot;:&quot;model_doc/bart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bart&quot;},{&quot;title&quot;:&quot;BARThez&quot;,&quot;id&quot;:&quot;model_doc/barthez&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/barthez&quot;},{&quot;title&quot;:&quot;BARTpho&quot;,&quot;id&quot;:&quot;model_doc/bartpho&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bartpho&quot;},{&quot;title&quot;:&quot;BERT&quot;,&quot;id&quot;:&quot;model_doc/bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert&quot;},{&quot;title&quot;:&quot;BertGeneration&quot;,&quot;id&quot;:&quot;model_doc/bert-generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-generation&quot;},{&quot;title&quot;:&quot;BertJapanese&quot;,&quot;id&quot;:&quot;model_doc/bert-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-japanese&quot;},{&quot;title&quot;:&quot;Bertweet&quot;,&quot;id&quot;:&quot;model_doc/bertweet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bertweet&quot;},{&quot;title&quot;:&quot;BigBird&quot;,&quot;id&quot;:&quot;model_doc/big_bird&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/big_bird&quot;},{&quot;title&quot;:&quot;BigBirdPegasus&quot;,&quot;id&quot;:&quot;model_doc/bigbird_pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bigbird_pegasus&quot;},{&quot;title&quot;:&quot;BioGpt&quot;,&quot;id&quot;:&quot;model_doc/biogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/biogpt&quot;},{&quot;title&quot;:&quot;Blenderbot&quot;,&quot;id&quot;:&quot;model_doc/blenderbot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot&quot;},{&quot;title&quot;:&quot;Blenderbot 
Small&quot;,&quot;id&quot;:&quot;model_doc/blenderbot-small&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot-small&quot;},{&quot;title&quot;:&quot;BLOOM&quot;,&quot;id&quot;:&quot;model_doc/bloom&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bloom&quot;},{&quot;title&quot;:&quot;BORT&quot;,&quot;id&quot;:&quot;model_doc/bort&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bort&quot;},{&quot;title&quot;:&quot;ByT5&quot;,&quot;id&quot;:&quot;model_doc/byt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/byt5&quot;},{&quot;title&quot;:&quot;CamemBERT&quot;,&quot;id&quot;:&quot;model_doc/camembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/camembert&quot;},{&quot;title&quot;:&quot;CANINE&quot;,&quot;id&quot;:&quot;model_doc/canine&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/canine&quot;},{&quot;title&quot;:&quot;CodeGen&quot;,&quot;id&quot;:&quot;model_doc/codegen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/codegen&quot;},{&quot;title&quot;:&quot;CodeLlama&quot;,&quot;id&quot;:&quot;model_doc/code_llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/code_llama&quot;},{&quot;title&quot;:&quot;ConvBERT&quot;,&quot;id&quot;:&quot;model_doc/convbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convbert&quot;},{&quot;title&quot;:&quot;CPM&quot;,&quot;id&quot;:&quot;model_doc/cpm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpm&quot;},{&quot;title&quot;:&quot;CPMANT&quot;,&quot;id&quot;:&quot;model_doc/cpmant&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpmant&quot;},{&quot;title&quot;:&quot;CTRL&quot;,&quot;id&quot;:&quot;model_doc/ctrl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ctrl&quot;},{&quot;title&quot;:&quot;DeBERTa&quot;,&quot;id&quot;:&quot;model_doc/deberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta&quot;},{&quot;title&quot;:&quot;DeBERTa-v2&quot;,&quot;id&quot;:&quot;model_doc/deberta-v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta-v2&quot;},{&quot;title&quot;:&quot;DialoGPT&quot;,&quot;id&quot;:&quot;model_doc/dialogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dialogpt&quot;},{&quot;title&quot;:&quot;DistilBERT&quot;,&quot;id&quot;:&quot;model_doc/distilbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/distilbert&quot;},{&quot;title&quot;:&quot;DPR&quot;,&quot;id&quot;:&quot;model_doc/dpr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpr&quot;},{&quot;title&quot;:&quot;ELECTRA&quot;,&quot;id&quot;:&quot;model_doc/electra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/electra&quot;},{&quot;title&quot;:&quot;Encoder Decoder 
Models&quot;,&quot;id&quot;:&quot;model_doc/encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encoder-decoder&quot;},{&quot;title&quot;:&quot;ERNIE&quot;,&quot;id&quot;:&quot;model_doc/ernie&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie&quot;},{&quot;title&quot;:&quot;ErnieM&quot;,&quot;id&quot;:&quot;model_doc/ernie_m&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie_m&quot;},{&quot;title&quot;:&quot;ESM&quot;,&quot;id&quot;:&quot;model_doc/esm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/esm&quot;},{&quot;title&quot;:&quot;Falcon&quot;,&quot;id&quot;:&quot;model_doc/falcon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/falcon&quot;},{&quot;title&quot;:&quot;FLAN-T5&quot;,&quot;id&quot;:&quot;model_doc/flan-t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-t5&quot;},{&quot;title&quot;:&quot;FLAN-UL2&quot;,&quot;id&quot;:&quot;model_doc/flan-ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-ul2&quot;},{&quot;title&quot;:&quot;FlauBERT&quot;,&quot;id&quot;:&quot;model_doc/flaubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flaubert&quot;},{&quot;title&quot;:&quot;FNet&quot;,&quot;id&quot;:&quot;model_doc/fnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fnet&quot;},{&quot;title&quot;:&quot;FSMT&quot;,&quot;id&quot;:&quot;model_doc/fsmt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fsmt&quot;},{&quot;title&quot;:&quot;Funnel Transformer&quot;,&quot;id&quot;:&quot;model_doc/funnel&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/funnel&quot;},{&quot;title&quot;:&quot;GPT&quot;,&quot;id&quot;:&quot;model_doc/openai-gpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/openai-gpt&quot;},{&quot;title&quot;:&quot;GPT Neo&quot;,&quot;id&quot;:&quot;model_doc/gpt_neo&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neo&quot;},{&quot;title&quot;:&quot;GPT NeoX&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox&quot;},{&quot;title&quot;:&quot;GPT NeoX Japanese&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox_japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese&quot;},{&quot;title&quot;:&quot;GPT-J&quot;,&quot;id&quot;:&quot;model_doc/gptj&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptj&quot;},{&quot;title&quot;:&quot;GPT2&quot;,&quot;id&quot;:&quot;model_doc/gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt2&quot;},{&quot;title&quot;:&quot;GPTBigCode&quot;,&quot;id&quot;:&quot;model_doc/gpt_bigcode&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode&quot;},{&quot;title&quot;:&quot;GPTSAN 
Japanese&quot;,&quot;id&quot;:&quot;model_doc/gptsan-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese&quot;},{&quot;title&quot;:&quot;GPTSw3&quot;,&quot;id&quot;:&quot;model_doc/gpt-sw3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt-sw3&quot;},{&quot;title&quot;:&quot;HerBERT&quot;,&quot;id&quot;:&quot;model_doc/herbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/herbert&quot;},{&quot;title&quot;:&quot;I-BERT&quot;,&quot;id&quot;:&quot;model_doc/ibert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ibert&quot;},{&quot;title&quot;:&quot;Jukebox&quot;,&quot;id&quot;:&quot;model_doc/jukebox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/jukebox&quot;},{&quot;title&quot;:&quot;LED&quot;,&quot;id&quot;:&quot;model_doc/led&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/led&quot;},{&quot;title&quot;:&quot;LLaMA&quot;,&quot;id&quot;:&quot;model_doc/llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama&quot;},{&quot;title&quot;:&quot;Llama2&quot;,&quot;id&quot;:&quot;model_doc/llama2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama2&quot;},{&quot;title&quot;:&quot;Longformer&quot;,&quot;id&quot;:&quot;model_doc/longformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longformer&quot;},{&quot;title&quot;:&quot;LongT5&quot;,&quot;id&quot;:&quot;model_doc/longt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longt5&quot;},{&quot;title&quot;:&quot;LUKE&quot;,&quot;id&quot;:&quot;model_doc/luke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/luke&quot;},{&quot;title&quot;:&quot;M2M100&quot;,&quot;id&quot;:&quot;model_doc/m2m_100&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/m2m_100&quot;},{&quot;title&quot;:&quot;MarianMT&quot;,&quot;id&quot;:&quot;model_doc/marian&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/marian&quot;},{&quot;title&quot;:&quot;MarkupLM&quot;,&quot;id&quot;:&quot;model_doc/markuplm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/markuplm&quot;},{&quot;title&quot;:&quot;MBart and 
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot;PLBart&quot;,&quot;id&quot;
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/do
cs/transformers/v4.34.0/en/model_doc/wavlm&quot;},{&quot;title&quot;:&quot;Whisper&quot;,&quot;id&quot;:&quot;model_doc/whisper&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/whisper&quot;},{&quot;title&quot;:&quot;XLS-R&quot;,&quot;id&quot;:&quot;model_doc/xls_r&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xls_r&quot;},{&quot;title&quot;:&quot;XLSR-Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/xlsr_wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2&quot;}]},{&quot;title&quot;:&quot;Multimodal models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALIGN&quot;,&quot;id&quot;:&quot;model_doc/align&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/align&quot;},{&quot;title&quot;:&quot;AltCLIP&quot;,&quot;id&quot;:&quot;model_doc/altclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/altclip&quot;},{&quot;title&quot;:&quot;BLIP&quot;,&quot;id&quot;:&quot;model_doc/blip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip&quot;},{&quot;title&quot;:&quot;BLIP-2&quot;,&quot;id&quot;:&quot;model_doc/blip-2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip-2&quot;},{&quot;title&quot;:&quot;BridgeTower&quot;,&quot;id&quot;:&quot;model_doc/bridgetower&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bridgetower&quot;},{&quot;title&quot;:&quot;BROS&quot;,&quot;id&quot;:&quot;model_doc/bros&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bros&quot;},{&quot;title&quot;:&quot;Chinese-CLIP&quot;,&quot;id&quot;:&quot;model_doc/chinese_clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/chinese_clip&quot;},{&quot;title&quot;:&quot;CLIP&quot;,&quot;id&quot;:&quot;model_doc/clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clip&quot;},{&quot;title&quot;:&quot;CLIPSeg&quot;,&quot;id&quot;:&quot;model_doc/clipseg&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clipseg&quot;},{&quot;title&quot;:&quot;Data2Vec&quot;,&quot;id&quot;:&quot;model_doc/data2vec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/data2vec&quot;},{&quot;title&quot;:&quot;DePlot&quot;,&quot;id&quot;:&quot;model_doc/deplot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deplot&quot;},{&quot;title&quot;:&quot;Donut&quot;,&quot;id&quot;:&quot;model_doc/donut&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/donut&quot;},{&quot;title&quot;:&quot;FLAVA&quot;,&quot;id&quot;:&quot;model_doc/flava&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flava&quot;},{&quot;title&quot;:&quot;GIT&quot;,&quot;id&quot;:&quot;model_doc/git&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/git&quot;},{&quot;title&quot;:&quot;GroupViT&quot;,&quot;id&quot;:&quot;model_doc/groupvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/groupvit&quot;},{&quot;title&quot;:&quot;IDEFICS&quot;,&quot;id&quot;:&quot;model_doc/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/idefics&quot;},{&quot;title&quot;:&quot;InstructBLIP&quot;,&quot;id&quot;:&quot;model_doc/instructblip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/instructblip&quot;},{&quot;title&quot;:&quot;LayoutLM&quot;,&quot;id&quot;:&quot;model_doc/layoutlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlm&quot;},{&quot;title&quot;:&qu
ot;LayoutLMV2&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv2&quot;},{&quot;title&quot;:&quot;LayoutLMV3&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv3&quot;},{&quot;title&quot;:&quot;LayoutXLM&quot;,&quot;id&quot;:&quot;model_doc/layoutxlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutxlm&quot;},{&quot;title&quot;:&quot;LiLT&quot;,&quot;id&quot;:&quot;model_doc/lilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lilt&quot;},{&quot;title&quot;:&quot;LXMERT&quot;,&quot;id&quot;:&quot;model_doc/lxmert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lxmert&quot;},{&quot;title&quot;:&quot;MatCha&quot;,&quot;id&quot;:&quot;model_doc/matcha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/matcha&quot;},{&quot;title&quot;:&quot;MGP-STR&quot;,&quot;id&quot;:&quot;model_doc/mgp-str&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mgp-str&quot;},{&quot;title&quot;:&quot;Nougat&quot;,&quot;id&quot;:&quot;model_doc/nougat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nougat&quot;},{&quot;title&quot;:&quot;OneFormer&quot;,&quot;id&quot;:&quot;model_doc/oneformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/oneformer&quot;},{&quot;title&quot;:&quot;OWL-ViT&quot;,&quot;id&quot;:&quot;model_doc/owlvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/owlvit&quot;},{&quot;title&quot;:&quot;Perceiver&quot;,&quot;id&quot;:&quot;model_doc/perceiver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/perceiver&quot;},{&quot;title&quot;:&quot;Pix2Struct&quot;,&quot;id&quot;:&quot;model_doc/pix2struct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pix2struct&quot;},{&quot;title&quot;:&quot;Segment Anything&quot;,&quot;id&quot;:&quot;model_doc/sam&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sam&quot;},{&quot;title&quot;:&quot;Speech Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/speech-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder&quot;},{&quot;title&quot;:&quot;TAPAS&quot;,&quot;id&quot;:&quot;model_doc/tapas&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapas&quot;},{&quot;title&quot;:&quot;TrOCR&quot;,&quot;id&quot;:&quot;model_doc/trocr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trocr&quot;},{&quot;title&quot;:&quot;TVLT&quot;,&quot;id&quot;:&quot;model_doc/tvlt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tvlt&quot;},{&quot;title&quot;:&quot;ViLT&quot;,&quot;id&quot;:&quot;model_doc/vilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vilt&quot;},{&quot;title&quot;:&quot;Vision Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/vision-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder&quot;},{&quot;title&quot;:&quot;Vision Text Dual 
Encoder&quot;,&quot;id&quot;:&quot;model_doc/vision-text-dual-encoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder&quot;},{&quot;title&quot;:&quot;VisualBERT&quot;,&quot;id&quot;:&quot;model_doc/visual_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/visual_bert&quot;},{&quot;title&quot;:&quot;X-CLIP&quot;,&quot;id&quot;:&quot;model_doc/xclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xclip&quot;}]},{&quot;title&quot;:&quot;Reinforcement learning models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Decision Transformer&quot;,&quot;id&quot;:&quot;model_doc/decision_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/decision_transformer&quot;},{&quot;title&quot;:&quot;Trajectory Transformer&quot;,&quot;id&quot;:&quot;model_doc/trajectory_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer&quot;}]},{&quot;title&quot;:&quot;Time series models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Autoformer&quot;,&quot;id&quot;:&quot;model_doc/autoformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/autoformer&quot;},{&quot;title&quot;:&quot;Informer&quot;,&quot;id&quot;:&quot;model_doc/informer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/informer&quot;},{&quot;title&quot;:&quot;Time Series Transformer&quot;,&quot;id&quot;:&quot;model_doc/time_series_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/time_series_transformer&quot;}]},{&quot;title&quot;:&quot;Graph models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Graphormer&quot;,&quot;id&quot;:&quot;model_doc/graphormer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/graphormer&quot;}]}]},{&quot;title&quot;:&quot;Internal Helpers&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Custom Layers and Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/modeling_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/modeling_utils&quot;},{&quot;title&quot;:&quot;Utilities for pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/pipelines_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/pipelines_utils&quot;},{&quot;title&quot;:&quot;Utilities for Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/tokenization_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/tokenization_utils&quot;},{&quot;title&quot;:&quot;Utilities for Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/trainer_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/trainer_utils&quot;},{&quot;title&quot;:&quot;Utilities for Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/generation_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/generation_utils&quot;},{&quot;title&quot;:&quot;Utilities for Image Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/image_processing_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/image_processing_utils&quot;},{&quot;title&quot;:&quot;Utilities for Audio 
processing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/audio_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/audio_utils&quot;},{&quot;title&quot;:&quot;General Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/file_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/file_utils&quot;},{&quot;title&quot;:&quot;Utilities for Time Series&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/time_series_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/time_series_utils&quot;}]}]}],&quot;chapterId&quot;:&quot;model_doc/vits&quot;,&quot;docType&quot;:&quot;docs&quot;,&quot;isLoggedIn&quot;:false,&quot;lang&quot;:&quot;en&quot;,&quot;langs&quot;:[&quot;de&quot;,&quot;en&quot;,&quot;es&quot;,&quot;fr&quot;,&quot;it&quot;,&quot;ko&quot;,&quot;pt&quot;,&quot;zh&quot;],&quot;library&quot;:&quot;transformers&quot;,&quot;theme&quot;:&quot;light&quot;,&quot;version&quot;:&quot;v4.34.0&quot;,&quot;versions&quot;:[{&quot;version&quot;:&quot;main&quot;},{&quot;version&quot;:&quot;v4.34.0&quot;},{&quot;version&quot;:&quot;v4.33.3&quot;},{&quot;version&quot;:&quot;v4.33.2&quot;},{&quot;version&quot;:&quot;v4.33.0&quot;},{&quot;version&quot;:&quot;v4.32.1&quot;},{&quot;version&quot;:&quot;v4.32.0&quot;},{&quot;version&quot;:&quot;v4.31.0&quot;},{&quot;version&quot;:&quot;v4.30.0&quot;},{&quot;version&quot;:&quot;v4.29.1&quot;},{&quot;version&quot;:&quot;v4.29.0&quot;},{&quot;version&quot;:&quot;v4.28.1&quot;},{&quot;version&quot;:&quot;v4.28.0&quot;},{&quot;version&quot;:&quot;v4.27.2&quot;},{&quot;version&quot;:&quot;v4.27.1&quot;},{&quot;version&quot;:&quot;v4.27.0&quot;},{&quot;version&quot;:&quot;v4.26.1&quot;},{&quot;version&quot;:&quot;v4.26.0&quot;},{&quot;version&quot;:&quot;v4.25.1&quot;},{&quot;version&quot;:&quot;v4.24.0&quot;},{&quot;version&quot;:&quot;v4.23.1&quot;},{&quot;version&quot;:&quot;v4.23.0&quot;},{&quot;version&quot;:&quot;v4.22.2&quot;},{&quot;version&quot;:&quot;v4.22.1&quot;},{&quot;version&quot;:&quot;v4.22.0&quot;},{&quot;version&quot;:&quot;v4.21.3&quot;},{&quot;version&quot;:&quot;v4.21.2&quot;},{&quot;version&quot;:&quot;v4.21.1&quot;},{&quot;version&quot;:&quot;v4.21.0&quot;},{&quot;version&quot;:&quot;v4.20.1&quot;},{&quot;version&quot;:&quot;v4.20.0&quot;},{&quot;version&quot;:&quot;v4.19.4&quot;},{&quot;version&quot;:&quot;v4.19.3&quot;},{&quot;version&quot;:&quot;v4.19.2&quot;},{&quot;version&quot;:&quot;v4.19.0&quot;},{&quot;version&quot;:&quot;v4.18.0&quot;},{&quot;version&quot;:&quot;v4.17.0&quot;},{&quot;version&quot;:&quot;v4.16.2&quot;},{&quot;version&quot;:&quot;v4.16.1&quot;},{&quot;version&quot;:&quot;v4.16.0&quot;},{&quot;version&quot;:&quot;v4.15.0&quot;},{&quot;version&quot;:&quot;v4.14.1&quot;},{&quot;version&quot;:&quot;v4.13.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.5&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.4&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.10.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.10
For languages with a Roman alphabet, such as English or French, the tokenizer can be used directly to pre-process the text inputs. The following code example runs a forward pass using the MMS-TTS English checkpoint:

```python
import torch
from transformers import VitsTokenizer, VitsModel, set_seed

tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-eng")
model = VitsModel.from_pretrained("facebook/mms-tts-eng")

inputs = tokenizer(text="Hello - my dog is cute", return_tensors="pt")

set_seed(555)  # make deterministic

with torch.no_grad():
    outputs = model(**inputs)

waveform = outputs.waveform[0]
```
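The forward pass returns the generated audio in `outputs.waveform` with shape `(batch_size, num_samples)`, so indexing the first element gives a 1-D tensor of audio samples. As a small sanity check (an addition to the original example, assuming the snippet above has just been run), you can inspect the waveform and the sampling rate stored in the model config:

```python
# `model` and `waveform` come from the previous snippet
print(waveform.shape)              # number of audio samples generated
print(model.config.sampling_rate)  # 16000 for the MMS-TTS checkpoints
```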
class="hljs-number">0</span>]<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-1wosc4r">The resulting waveform can be saved as a <code>.wav</code> file:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-keyword">import</span> scipy scipy.io.wavfile.write(<span class="hljs-string">"techno.wav"</span>, rate=model.config.sampling_rate, data=waveform)<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-1es17w7">Or displayed in a Jupyter Notebook / Google Colab:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-keyword">from</span> IPython.display <span class="hljs-keyword">import</span> Audio Audio(waveform, rate=model.config.sampling_rate)<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-yzebxy">For certain languages with a non-Roman alphabet, such as Arabic, Mandarin or Hindi, the <a href="https://github.com/isi-nlp/uroman" rel="nofollow"><code>uroman</code></a> perl package is required to pre-process the text inputs to the Roman alphabet.</p> <p data-svelte-h="svelte-19haabz">You can check whether you require the <code>uroman</code> package for your language by inspecting the <code>is_uroman</code> attribute of the pre-trained <code>tokenizer</code>:</p> 
<div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> VitsTokenizer tokenizer = VitsTokenizer.from_pretrained(<span class="hljs-string">"facebook/mms-tts-eng"</span>) <span class="hljs-built_in">print</span>(tokenizer.is_uroman)<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-y9ddjb">If required, you should apply the uroman package to your text inputs <strong>prior</strong> to passing them to the <code>VitsTokenizer</code>, since currently the tokenizer does not support performing the pre-processing itself.</p> <p data-svelte-h="svelte-14ycm1z">To do this, first clone the uroman repository to your local machine and set the bash variable <code>UROMAN</code> to the local path:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START -->git <span class="hljs-built_in">clone</span> https://github.com/isi-nlp/uroman.git <span class="hljs-built_in">cd</span> uroman <span class="hljs-built_in">export</span> UROMAN=$(<span class="hljs-built_in">pwd</span>)<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-d0ppi5">You can then pre-process the text input using the following code snippet. 
You can either rely on using the bash variable `UROMAN` to point to the uroman repository, or you can pass the uroman directory as an argument to the `uromanize` function:

```python
import os
import subprocess

import torch
from transformers import VitsTokenizer, VitsModel, set_seed

tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-kor")
model = VitsModel.from_pretrained("facebook/mms-tts-kor")


def uromanize(input_string, uroman_path):
    """Convert non-Roman strings to Roman using the `uroman` perl package."""
    script_path = os.path.join(uroman_path, "bin", "uroman.pl")

    command = ["perl", script_path]

    process = subprocess.Popen(command, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    # Execute the perl command
    stdout, stderr = process.communicate(input=input_string.encode())

    if process.returncode != 0:
        raise ValueError(f"Error {process.returncode}: {stderr.decode()}")

    # Return the output as a string and skip the new-line character at the end
    return stdout.decode()[:-1]


text = "이봐 무슨 일이야"
uromanized_text = uromanize(text, uroman_path=os.environ["UROMAN"])

inputs = tokenizer(text=uromanized_text, return_tensors="pt")

set_seed(555)  # make deterministic
with torch.no_grad():
    outputs = model(inputs["input_ids"])

waveform = outputs.waveform[0]
```
waveform = outputs.waveform[<span class="hljs-number">0</span>]<!-- HTML_TAG_END --></pre></div> <h2 class="relative group"><a id="transformers.VitsConfig" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VitsConfig"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-en263j">VitsConfig</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.VitsConfig"><!-- HTML_TAG_START --><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">VitsConfig</span></span></h3><!-- HTML_TAG_END --> <a id="transformers.VitsConfig" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.VitsConfig"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 
!no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vits/configuration_vits.py#L29" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">vocab_size<span class="opacity-60"> = 38</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">hidden_size<span class="opacity-60"> = 192</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">num_hidden_layers<span class="opacity-60"> = 6</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">num_attention_heads<span class="opacity-60"> = 2</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">window_size<span class="opacity-60"> = 4</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">use_bias<span class="opacity-60"> = True</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">ffn_dim<span class="opacity-60"> = 768</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">layerdrop<span class="opacity-60"> = 0.1</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">ffn_kernel_size<span class="opacity-60"> = 3</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">flow_size<span class="opacity-60"> = 192</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">spectrogram_bins<span class="opacity-60"> = 513</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">hidden_act<span class="opacity-60"> = 'relu'</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">hidden_dropout<span class="opacity-60"> = 0.1</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_dropout<span class="opacity-60"> = 0.1</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">activation_dropout<span class="opacity-60"> = 0.1</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white 
dark:hover:text-black">initializer_range<span class="opacity-60"> = 0.02</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">layer_norm_eps<span class="opacity-60"> = 1e-05</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">use_stochastic_duration_prediction<span class="opacity-60"> = True</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">num_speakers<span class="opacity-60"> = 1</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">speaker_embedding_size<span class="opacity-60"> = 0</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">upsample_initial_channel<span class="opacity-60"> = 512</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">upsample_rates<span class="opacity-60"> = [8, 8, 2, 2]</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">upsample_kernel_sizes<span class="opacity-60"> = [16, 16, 4, 4]</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">resblock_kernel_sizes<span class="opacity-60"> = [3, 7, 11]</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">resblock_dilation_sizes<span class="opacity-60"> = [[1, 3, 5], [1, 3, 5], [1, 3, 5]]</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">leaky_relu_slope<span class="opacity-60"> = 0.1</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">depth_separable_channels<span class="opacity-60"> = 2</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">depth_separable_num_layers<span class="opacity-60"> = 3</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">duration_predictor_flow_bins<span class="opacity-60"> = 10</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">duration_predictor_tail_bound<span class="opacity-60"> = 5.0</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">duration_predictor_kernel_size<span class="opacity-60"> = 3</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">duration_predictor_dropout<span class="opacity-60"> = 0.5</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white 
dark:hover:bg-white dark:hover:text-black">duration_predictor_num_flows<span class="opacity-60"> = 4</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">duration_predictor_filter_channels<span class="opacity-60"> = 256</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">prior_encoder_num_flows<span class="opacity-60"> = 4</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">prior_encoder_num_wavenet_layers<span class="opacity-60"> = 4</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">posterior_encoder_num_wavenet_layers<span class="opacity-60"> = 16</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">wavenet_kernel_size<span class="opacity-60"> = 5</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">wavenet_dilation_rate<span class="opacity-60"> = 1</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">wavenet_dropout<span class="opacity-60"> = 0.0</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">speaking_rate<span class="opacity-60"> = 1.0</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">noise_scale<span class="opacity-60"> = 0.667</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">noise_scale_duration<span class="opacity-60"> = 0.8</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">sampling_rate<span class="opacity-60"> = 16000</span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VitsConfig.vocab_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VitsConfig.vocab_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 
**Parameters**

- **vocab_size** (`int`, *optional*, defaults to 38): Vocabulary size of the VITS model. Defines the number of different tokens that can be represented by the `input_ids` passed to the forward method of `VitsModel`.
- **hidden_size** (`int`, *optional*, defaults to 192): Dimensionality of the text encoder layers.
- **num_hidden_layers** (`int`, *optional*, defaults to 6): Number of hidden layers in the Transformer encoder.
- **num_attention_heads** (`int`, *optional*, defaults to 2): Number of attention heads for each attention layer in the Transformer encoder.
- **window_size** (`int`, *optional*, defaults to 4): Window size for the relative positional embeddings in the attention layers of the Transformer encoder.
- **use_bias** (`bool`, *optional*, defaults to `True`): Whether to use bias in the key, query, value projection layers in the Transformer encoder.
- **ffn_dim** (`int`, *optional*, defaults to 768): Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
- **layerdrop** (`float`, *optional*, defaults to 0.1): The LayerDrop probability for the encoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more details.
- **ffn_kernel_size** (`int`, *optional*, defaults to 3): Kernel size of the 1D convolution layers used by the feed-forward network in the Transformer encoder.
- **flow_size** (`int`, *optional*, defaults to 192): Dimensionality of the flow layers.
- **spectrogram_bins** (`int`, *optional*, defaults to 513): Number of frequency bins in the target spectrogram.
- **hidden_act** (`str` or `function`, *optional*, defaults to `"relu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported.
- **hidden_dropout** (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings and encoder.
- **attention_dropout** (`float`, *optional*, defaults to 0.1): The dropout ratio for the attention probabilities.
- **activation_dropout** (`float`, *optional*, defaults to 0.1): The dropout ratio for activations inside the fully connected layer.
- **initializer_range** (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- **layer_norm_eps** (`float`, *optional*, defaults to 1e-5): The epsilon used by the layer normalization layers.
- **use_stochastic_duration_prediction** (`bool`, *optional*, defaults to `True`): Whether to use the stochastic duration prediction module or the regular duration predictor.
- **num_speakers** (`int`, *optional*, defaults to 1): Number of speakers if this is a multi-speaker model.
56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>speaker_embedding_size</strong> (<code>int</code>, <em>optional</em>, defaults to 0) — Number of channels used by the speaker embeddings. Is zero for single-speaker models.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VitsConfig.upsample_initial_channel" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VitsConfig.upsample_initial_channel"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>upsample_initial_channel</strong> (<code>int</code>, <em>optional</em>, defaults to 512) — The number of input channels into the HiFi-GAN upsampling network.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VitsConfig.upsample_rates" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VitsConfig.upsample_rates"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>upsample_rates</strong> (<code>Tuple[int]</code> or <code>List[int]</code>, <em>optional</em>, defaults to <code>[8, 8, 2, 2]</code>) — A tuple of integers defining the stride of each 1D convolutional layer in the HiFi-GAN upsampling network. 
The length of <code>upsample_rates</code> defines the number of convolutional layers and has to match the length of <code>upsample_kernel_sizes</code>.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VitsConfig.upsample_kernel_sizes" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VitsConfig.upsample_kernel_sizes"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>upsample_kernel_sizes</strong> (<code>Tuple[int]</code> or <code>List[int]</code>, <em>optional</em>, defaults to <code>[16, 16, 4, 4]</code>) — A tuple of integers defining the kernel size of each 1D convolutional layer in the HiFi-GAN upsampling network. The length of <code>upsample_kernel_sizes</code> defines the number of convolutional layers and has to match the length of <code>upsample_rates</code>.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VitsConfig.resblock_kernel_sizes" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VitsConfig.resblock_kernel_sizes"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>resblock_kernel_sizes</strong> (<code>Tuple[int]</code> or <code>List[int]</code>, <em>optional</em>, defaults to <code>[3, 7, 11]</code>) — A tuple of integers defining the kernel sizes of the 1D convolutional layers in the HiFi-GAN multi-receptive field fusion (MRF) module.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VitsConfig.resblock_dilation_sizes" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute 
with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VitsConfig.resblock_dilation_sizes"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>resblock_dilation_sizes</strong> (<code>Tuple[Tuple[int]]</code> or <code>List[List[int]]</code>, <em>optional</em>, defaults to <code>[[1, 3, 5], [1, 3, 5], [1, 3, 5]]</code>) — A nested tuple of integers defining the dilation rates of the dilated 1D convolutional layers in the HiFi-GAN multi-receptive field fusion (MRF) module.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VitsConfig.leaky_relu_slope" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VitsConfig.leaky_relu_slope"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>leaky_relu_slope</strong> (<code>float</code>, <em>optional</em>, defaults to 0.1) — The angle of the negative slope used by the leaky ReLU activation.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VitsConfig.depth_separable_channels" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VitsConfig.depth_separable_channels"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 
79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>depth_separable_channels</strong> (<code>int</code>, <em>optional</em>, defaults to 2) — Number of channels to use in each depth-separable block.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VitsConfig.depth_separable_num_layers" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VitsConfig.depth_separable_num_layers"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>depth_separable_num_layers</strong> (<code>int</code>, <em>optional</em>, defaults to 3) — Number of convolutional layers to use in each depth-separable block.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VitsConfig.duration_predictor_flow_bins" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VitsConfig.duration_predictor_flow_bins"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>duration_predictor_flow_bins</strong> (<code>int</code>, <em>optional</em>, defaults to 10) — Number of channels to map using the unonstrained rational spline in the duration predictor model.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VitsConfig.duration_predictor_tail_bound" class="header-link block pr-0.5 text-lg 
no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VitsConfig.duration_predictor_tail_bound"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>duration_predictor_tail_bound</strong> (<code>float</code>, <em>optional</em>, defaults to 5.0) — Value of the tail bin boundary when computing the unconstrained rational spline in the duration predictor model.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VitsConfig.duration_predictor_kernel_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VitsConfig.duration_predictor_kernel_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>duration_predictor_kernel_size</strong> (<code>int</code>, <em>optional</em>, defaults to 3) — Kernel size of the 1D convolution layers used in the duration predictor model.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VitsConfig.duration_predictor_dropout" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VitsConfig.duration_predictor_dropout"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 
79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>duration_predictor_dropout</strong> (<code>float</code>, <em>optional</em>, defaults to 0.5) — The dropout ratio for the duration predictor model.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VitsConfig.duration_predictor_num_flows" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VitsConfig.duration_predictor_num_flows"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>duration_predictor_num_flows</strong> (<code>int</code>, <em>optional</em>, defaults to 4) — Number of flow stages used by the duration predictor model.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VitsConfig.duration_predictor_filter_channels" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VitsConfig.duration_predictor_filter_channels"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>duration_predictor_filter_channels</strong> (<code>int</code>, <em>optional</em>, defaults to 256) — Number of channels for the convolution layers used in the duration predictor model.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VitsConfig.prior_encoder_num_flows" class="header-link block pr-0.5 text-lg no-hover:hidden 
with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VitsConfig.prior_encoder_num_flows"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>prior_encoder_num_flows</strong> (<code>int</code>, <em>optional</em>, defaults to 4) — Number of flow stages used by the prior encoder flow model.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VitsConfig.prior_encoder_num_wavenet_layers" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VitsConfig.prior_encoder_num_wavenet_layers"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>prior_encoder_num_wavenet_layers</strong> (<code>int</code>, <em>optional</em>, defaults to 4) — Number of WaveNet layers used by the prior encoder flow model.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VitsConfig.posterior_encoder_num_wavenet_layers" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VitsConfig.posterior_encoder_num_wavenet_layers"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 
0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>posterior_encoder_num_wavenet_layers</strong> (<code>int</code>, <em>optional</em>, defaults to 16) — Number of WaveNet layers used by the posterior encoder model.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VitsConfig.wavenet_kernel_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VitsConfig.wavenet_kernel_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>wavenet_kernel_size</strong> (<code>int</code>, <em>optional</em>, defaults to 5) — Kernel size of the 1D convolution layers used in the WaveNet model.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VitsConfig.wavenet_dilation_rate" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VitsConfig.wavenet_dilation_rate"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>wavenet_dilation_rate</strong> (<code>int</code>, <em>optional</em>, defaults to 1) — Dilation rates of the dilated 1D convolutional layers used in the WaveNet model.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VitsConfig.wavenet_dropout" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" 
href="#transformers.VitsConfig.wavenet_dropout"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>wavenet_dropout</strong> (<code>float</code>, <em>optional</em>, defaults to 0.0) — The dropout ratio for the WaveNet layers.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VitsConfig.speaking_rate" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VitsConfig.speaking_rate"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>speaking_rate</strong> (<code>float</code>, <em>optional</em>, defaults to 1.0) — Speaking rate. 
Larger values give faster synthesised speech.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VitsConfig.noise_scale" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VitsConfig.noise_scale"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>noise_scale</strong> (<code>float</code>, <em>optional</em>, defaults to 0.667) — How random the speech prediction is. Larger values create more variation in the predicted speech.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VitsConfig.noise_scale_duration" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VitsConfig.noise_scale_duration"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>noise_scale_duration</strong> (<code>float</code>, <em>optional</em>, defaults to 0.8) — How random the duration prediction is. 
Larger values create more variation in the predicted durations.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VitsConfig.sampling_rate" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VitsConfig.sampling_rate"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>sampling_rate</strong> (<code>int</code>, <em>optional</em>, defaults to 16000) — The sampling rate at which the output audio waveform is digitalized expressed in hertz (Hz).<!-- HTML_TAG_END --> </span></span> </li></ul> </div></div> <p data-svelte-h="svelte-33m3f9">This is the configuration class to store the configuration of a <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsModel">VitsModel</a>. It is used to instantiate a VITS model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the VITS <a href="https://huggingface.co/facebook/mms-tts-eng" rel="nofollow">facebook/mms-tts-eng</a> architecture.</p> <p data-svelte-h="svelte-10kqkkl">Configuration objects inherit from <a href="/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig">PretrainedConfig</a> and can be used to control the model outputs. 
Read the documentation from <a href="/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig">PretrainedConfig</a> for more information.</p> <div class="relative group rounded-md"><a id="transformers.VitsConfig.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VitsConfig.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> VitsModel, VitsConfig <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># Initializing a "facebook/mms-tts-eng" style configuration</span> <span class="hljs-meta">&gt;&gt;&gt; </span>configuration = VitsConfig() <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># Initializing a model (with random weights) from the "facebook/mms-tts-eng" style configuration</span> <span class="hljs-meta">&gt;&gt;&gt; </span>model = VitsModel(configuration) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># Accessing the model configuration</span> <span class="hljs-meta">&gt;&gt;&gt; </span>configuration = model.config<!-- HTML_TAG_END --></pre></div></div></div> <h2 class="relative group"><a id="transformers.VitsTokenizer" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 
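Individual defaults can be overridden in the same way by passing any of the attributes documented above as keyword arguments to `VitsConfig`. The snippet below is a minimal sketch with purely illustrative values, not recommended settings. As a side note on the decoder geometry, the default `upsample_rates` of `[8, 8, 2, 2]` imply that the HiFi-GAN stack generates 8 × 8 × 2 × 2 = 256 waveform samples per latent frame.

```python
>>> from transformers import VitsConfig, VitsModel

>>> # Illustrative overrides only: any attribute documented above can be set here
>>> configuration = VitsConfig(
...     speaking_rate=1.2,  # synthesise speech ~20% faster than the default
...     noise_scale=0.5,  # reduce the variation in the predicted speech
...     sampling_rate=16000,  # output waveform sampling rate in hertz
... )

>>> # Randomly initialised model following the modified configuration
>>> model = VitsModel(configuration)
```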
## VitsTokenizer

### class transformers.VitsTokenizer

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vits/tokenization_vits.py#L57)

`( vocab_file, pad_token = '<pad>', unk_token = '<unk>', language = None, add_blank = True, normalize = True, phonemize = True, is_uroman = False, **kwargs )`

Parameters:

- **vocab_file** (`str`) — Path to the vocabulary file.
- **language** (`str`, *optional*) — Language identifier.
- **add_blank** (`bool`, *optional*, defaults to `True`) — Whether to insert token id 0 in between the other tokens.
- **normalize** (`bool`, *optional*, defaults to `True`) — Whether to normalize the input text by removing all casing and punctuation.
- **phonemize** (`bool`, *optional*, defaults to `True`) — Whether to convert the input text into phonemes.
- **is_uroman** (`bool`, *optional*, defaults to `False`) — Whether the `uroman` Romanizer needs to be applied to the input text prior to tokenizing.

Construct a VITS tokenizer. Also supports MMS-TTS.

This tokenizer inherits from [PreTrainedTokenizer](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer) which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
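For orientation, here is a minimal usage sketch assuming the `facebook/mms-tts-eng` checkpoint referenced above (any other VITS or MMS-TTS checkpoint can be substituted). The tokenizer converts raw text into `input_ids` that are passed directly to `VitsModel`, and fixing a seed keeps the stochastic duration predictor reproducible:

```python
>>> import torch
>>> from transformers import VitsTokenizer, VitsModel, set_seed

>>> tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-eng")
>>> model = VitsModel.from_pretrained("facebook/mms-tts-eng")

>>> # Tokenize the input text; normalization and phonemization are applied
>>> # according to the checkpoint's tokenizer settings
>>> inputs = tokenizer(text="Hello, my dog is cute", return_tensors="pt")

>>> set_seed(555)  # arbitrary seed, fixed so the generated waveform is reproducible

>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> waveform = outputs.waveform[0]  # audio samples at model.config.sampling_rate Hz
```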
dark:hover:text-black">text<span class="opacity-60">: typing.Union[str, typing.List[str], typing.List[typing.List[str]]] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">text_pair<span class="opacity-60">: typing.Union[str, typing.List[str], typing.List[typing.List[str]], NoneType] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">text_target<span class="opacity-60">: typing.Union[str, typing.List[str], typing.List[typing.List[str]]] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">text_pair_target<span class="opacity-60">: typing.Union[str, typing.List[str], typing.List[typing.List[str]], NoneType] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">add_special_tokens<span class="opacity-60">: bool = True</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">padding<span class="opacity-60">: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">truncation<span class="opacity-60">: typing.Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">max_length<span class="opacity-60">: typing.Optional[int] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">stride<span class="opacity-60">: int = 0</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">is_split_into_words<span class="opacity-60">: bool = False</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">pad_to_multiple_of<span class="opacity-60">: typing.Optional[int] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_tensors<span class="opacity-60">: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_token_type_ids<span class="opacity-60">: typing.Optional[bool] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_attention_mask<span class="opacity-60">: typing.Optional[bool] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_overflowing_tokens<span class="opacity-60">: bool = False</span></span> </span><span class="comma cursor-pointer"><span 
class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_special_tokens_mask<span class="opacity-60">: bool = False</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_offsets_mapping<span class="opacity-60">: bool = False</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_length<span class="opacity-60">: bool = False</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">verbose<span class="opacity-60">: bool = True</span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><!-- HTML_TAG_START --><span><a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.BatchEncoding">BatchEncoding</a></span><!-- HTML_TAG_END --></span></p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VitsTokenizer.__call__.text" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VitsTokenizer.__call__.text"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>text</strong> (<code>str</code>, <code>List[str]</code>, <code>List[List[str]]</code>, <em>optional</em>) — The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings (pretokenized string). 
If the sequences are provided as list of strings (pretokenized), you must set <code>is_split_into_words=True</code> (to lift the ambiguity with a batch of sequences).<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VitsTokenizer.__call__.text_pair" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VitsTokenizer.__call__.text_pair"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>text_pair</strong> (<code>str</code>, <code>List[str]</code>, <code>List[List[str]]</code>, <em>optional</em>) — The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set <code>is_split_into_words=True</code> (to lift the ambiguity with a batch of sequences).<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VitsTokenizer.__call__.text_target" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VitsTokenizer.__call__.text_target"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>text_target</strong> (<code>str</code>, <code>List[str]</code>, <code>List[List[str]]</code>, <em>optional</em>) — The sequence or batch of sequences to be encoded as target texts. Each sequence can be a string or a list of strings (pretokenized string). 
If the sequences are provided as list of strings (pretokenized), you must set <code>is_split_into_words=True</code> (to lift the ambiguity with a batch of sequences).<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VitsTokenizer.__call__.text_pair_target" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VitsTokenizer.__call__.text_pair_target"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>text_pair_target</strong> (<code>str</code>, <code>List[str]</code>, <code>List[List[str]]</code>, <em>optional</em>) — The sequence or batch of sequences to be encoded as target texts. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set <code>is_split_into_words=True</code> (to lift the ambiguity with a batch of sequences).<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VitsTokenizer.__call__.add_special_tokens" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VitsTokenizer.__call__.add_special_tokens"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>add_special_tokens</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether or not to add special tokens when encoding the sequences. This will use the underlying <code>PretrainedTokenizerBase.build_inputs_with_special_tokens</code> function, which defines which tokens are automatically added to the input ids. 
This is usefull if you want to add <code>bos</code> or <code>eos</code> tokens automatically.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VitsTokenizer.__call__.padding" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VitsTokenizer.__call__.padding"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>padding</strong> (<code>bool</code>, <code>str</code> or <a href="/docs/transformers/v4.34.0/en/internal/file_utils#transformers.utils.PaddingStrategy">PaddingStrategy</a>, <em>optional</em>, defaults to <code>False</code>) — Activates and controls padding. Accepts the following values:<p></p> <ul> <li><code>True</code> or <code>'longest'</code>: Pad to the longest sequence in the batch (or no padding if only a single sequence if provided).</li> <li><code>'max_length'</code>: Pad to a maximum length specified with the argument <code>max_length</code> or to the maximum acceptable input length for the model if that argument is not provided.</li> <li><code>False</code> or <code>'do_not_pad'</code> (default): No padding (i.e., can output a batch with sequences of different lengths).</li> </ul><!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VitsTokenizer.__call__.truncation" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VitsTokenizer.__call__.truncation"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>truncation</strong> (<code>bool</code>, <code>str</code> or <a href="/docs/transformers/v4.34.0/en/internal/tokenization_utils#transformers.tokenization_utils_base.TruncationStrategy">TruncationStrategy</a>, 
<em>optional</em>, defaults to <code>False</code>) — Activates and controls truncation. Accepts the following values:<p></p> <ul> <li><code>True</code> or <code>'longest_first'</code>: Truncate to a maximum length specified with the argument <code>max_length</code> or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.</li> <li><code>'only_first'</code>: Truncate to a maximum length specified with the argument <code>max_length</code> or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.</li> <li><code>'only_second'</code>: Truncate to a maximum length specified with the argument <code>max_length</code> or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.</li> <li><code>False</code> or <code>'do_not_truncate'</code> (default): No truncation (i.e., can output batch with sequence lengths greater than the model maximum admissible input size).</li> </ul><!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VitsTokenizer.__call__.max_length" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VitsTokenizer.__call__.max_length"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>max_length</strong> (<code>int</code>, <em>optional</em>) — Controls the maximum length to use by one of the truncation/padding parameters.<p></p> <p>If left unset or set to <code>None</code>, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. 
- **stride** (`int`, *optional*, defaults to 0) — If set to a number along with `max_length`, the overflowing tokens returned when `return_overflowing_tokens=True` will contain some tokens from the end of the truncated sequence returned to provide some overlap between truncated and overflowing sequences. The value of this argument defines the number of overlapping tokens.
- **is_split_into_words** (`bool`, *optional*, defaults to `False`) — Whether or not the input is already pre-tokenized (e.g., split into words). If set to `True`, the tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace) which it will tokenize. This is useful for NER or token classification.
- **pad_to_multiple_of** (`int`, *optional*) — If set will pad the sequence to a multiple of the provided value. Requires `padding` to be activated. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability `>= 7.5` (Volta).
- **return_tensors** (`str` or [TensorType](https://huggingface.co/docs/transformers/v4.34.0/en/internal/file_utils#transformers.TensorType), *optional*) — If set, will return tensors instead of list of python integers. Acceptable values are:
  - `'tf'`: Return TensorFlow `tf.constant` objects.
  - `'pt'`: Return PyTorch `torch.Tensor` objects.
  - `'np'`: Return Numpy `np.ndarray` objects.
- **return_token_type_ids** (`bool`, *optional*) — Whether to return token type IDs. If left to the default, will return the token type IDs according to the specific tokenizer’s default, defined by the `return_outputs` attribute. [What are token type IDs?](https://huggingface.co/docs/transformers/v4.34.0/en/glossary#token-type-ids)
- **return_attention_mask** (`bool`, *optional*) — Whether to return the attention mask. If left to the default, will return the attention mask according to the specific tokenizer’s default, defined by the `return_outputs` attribute. [What are attention masks?](https://huggingface.co/docs/transformers/v4.34.0/en/glossary#attention-mask)
- **return_overflowing_tokens** (`bool`, *optional*, defaults to `False`) — Whether or not to return overflowing token sequences. If a pair of sequences of input ids (or a batch of pairs) is provided with `truncation_strategy = longest_first` or `True`, an error is raised instead of returning overflowing tokens.
- **return_special_tokens_mask** (`bool`, *optional*, defaults to `False`) — Whether or not to return special tokens mask information.
- **return_offsets_mapping** (`bool`, *optional*, defaults to `False`) — Whether or not to return `(char_start, char_end)` for each token. This is only available on fast tokenizers inheriting from [PreTrainedTokenizerFast](https://huggingface.co/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast); if using Python’s tokenizer, this method will raise `NotImplementedError`.
- **return_length** (`bool`, *optional*, defaults to `False`) — Whether or not to return the lengths of the encoded inputs.
- **verbose** (`bool`, *optional*, defaults to `True`) — Whether or not to print more information and warnings.
- `**kwargs` — Passed to the `self.tokenize()` method.

Returns

A [BatchEncoding](https://huggingface.co/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.BatchEncoding) with the following fields:

- **input_ids** — List of token ids to be fed to a model. [What are input IDs?](https://huggingface.co/docs/transformers/v4.34.0/en/glossary#input-ids)
- **token_type_ids** — List of token type ids to be fed to a model (when `return_token_type_ids=True` or if *"token_type_ids"* is in `self.model_input_names`). [What are token type IDs?](https://huggingface.co/docs/transformers/v4.34.0/en/glossary#token-type-ids)
- **attention_mask** — List of indices specifying which tokens should be attended to by the model (when `return_attention_mask=True` or if *"attention_mask"* is in `self.model_input_names`). [What are attention masks?](https://huggingface.co/docs/transformers/v4.34.0/en/glossary#attention-mask)
- **overflowing_tokens** — List of overflowing tokens sequences (when a `max_length` is specified and `return_overflowing_tokens=True`).
- **num_truncated_tokens** — Number of tokens truncated (when a `max_length` is specified and `return_overflowing_tokens=True`).
- **special_tokens_mask** — List of 0s and 1s, with 1 specifying added special tokens and 0 specifying regular sequence tokens (when `add_special_tokens=True` and `return_special_tokens_mask=True`).
- **length** — The length of the inputs (when `return_length=True`).

Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of sequences.
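To illustrate the padding and `return_tensors` options above, here is a small sketch that tokenizes a batch of sentences; the checkpoint id is again only an assumed example.

```python
from transformers import VitsTokenizer

tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-eng")  # assumed checkpoint

# pad the shorter sequence up to the longest one in the batch and return PyTorch tensors
batch = tokenizer(
    ["the quick brown fox", "jumps over the lazy dog by the river"],
    padding=True,
    return_tensors="pt",
)

print(batch["input_ids"].shape)       # (batch_size, sequence_length)
print(batch["attention_mask"].shape)  # same shape; 1 = token to attend to, 0 = padding
```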
#### save_vocabulary

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vits/tokenization_vits.py#L238)

`( save_directory: str, filename_prefix: Optional[str] = None )`
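A minimal sketch of calling it, continuing with a tokenizer loaded as above: `save_vocabulary` writes only the vocabulary file (unlike `save_pretrained`, which also stores the tokenizer configuration). The directory name and prefix below are arbitrary, and the target directory is assumed to already exist.

```python
import os
from transformers import VitsTokenizer

tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-eng")  # assumed checkpoint

# write the vocabulary file into an existing local directory
os.makedirs("vits_tokenizer", exist_ok=True)
tokenizer.save_vocabulary("vits_tokenizer", filename_prefix="mms")
```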
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1gvma4q">VitsModel</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.VitsModel"><!-- HTML_TAG_START --><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">VitsModel</span></span></h3><!-- HTML_TAG_END --> <a id="transformers.VitsModel" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.VitsModel"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vits/modeling_vits.py#L1356" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm 
!leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60">: VitsConfig</span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VitsModel.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VitsModel.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsConfig">VitsConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.<!-- HTML_TAG_END --> </span></span> </li></ul> </div></div> <p data-svelte-h="svelte-y0dvlg">The complete VITS model, for text-to-speech synthesis. This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel">PreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)</p> <p data-svelte-h="svelte-hswkmf">This model is also a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> subclass. 
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.VitsModel.forward"><!-- HTML_TAG_START --><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>forward</span></h4><!-- HTML_TAG_END --> <a id="transformers.VitsModel.forward" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.VitsModel.forward"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vits/modeling_vits.py#L1386" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white 
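For orientation, a minimal end-to-end sketch of running the model is given below. The checkpoint id is assumed for illustration, and a seed is fixed because the stochastic duration predictor makes generation non-deterministic.

```python
import torch
from transformers import VitsModel, VitsTokenizer, set_seed

# load a pretrained checkpoint (checkpoint id assumed for illustration)
model = VitsModel.from_pretrained("facebook/mms-tts-eng")
tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-eng")

inputs = tokenizer("Hello, it is a pleasant day today.", return_tensors="pt")

set_seed(555)  # the duration predictor is stochastic, so fix a seed for reproducible output
with torch.no_grad():
    outputs = model(**inputs)

waveform = outputs.waveform[0]  # 1-D audio tensor sampled at model.config.sampling_rate
```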
dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">speaker_id<span class="opacity-60">: typing.Optional[int] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">labels<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><!-- HTML_TAG_START --><span><code>transformers.models.vits.modeling_vits.VitsModelOutput</code> or <code>tuple(torch.FloatTensor)</code></span><!-- HTML_TAG_END --></span></p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VitsModel.forward.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VitsModel.forward.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>input_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.<p></p> <p>Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. 
See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p> <p><a href="../glossary#input-ids">What are input IDs?</a><!-- HTML_TAG_END --> </p></span></span></li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VitsModel.forward.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VitsModel.forward.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>attention_mask</strong> (<code>torch.Tensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a><!-- HTML_TAG_END --> </p></span></span></li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VitsModel.forward.speaker_id" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VitsModel.forward.speaker_id"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>speaker_id</strong> (<code>int</code>, <em>optional</em>) — Which speaker embedding to use. 
Only used for multispeaker models.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VitsModel.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VitsModel.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. See <code>attentions</code> under returned tensors for more detail.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VitsModel.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VitsModel.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. 
See <code>hidden_states</code> under returned tensors for more detail.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VitsModel.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VitsModel.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VitsModel.forward.labels" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VitsModel.forward.labels"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>labels</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, config.spectrogram_bins, sequence_length)</code>, <em>optional</em>) — Float values of target spectrogram. 
Timesteps set to <code>-100.0</code> are ignored (masked) for the loss computation.<!-- HTML_TAG_END --> </span></span> </li></ul> <div id="transformers.VitsModel.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <!-- HTML_TAG_START --> <p><code>transformers.models.vits.modeling_vits.VitsModelOutput</code> or <code>tuple(torch.FloatTensor)</code></p> <!-- HTML_TAG_END --> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"><!-- HTML_TAG_START --> </p><p>A <code>transformers.models.vits.modeling_vits.VitsModelOutput</code> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsConfig">VitsConfig</a>) and inputs.</p> <ul> <li> <p><strong>waveform</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>) — The final audio waveform predicted by the model.</p> </li> <li> <p><strong>sequence_lengths</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size,)</code>) — The length in samples of each element in the <code>waveform</code> batch.</p> </li> <li> <p><strong>spectrogram</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, num_bins)</code>) — The log-mel spectrogram predicted at the output of the flow model. This spectrogram is passed to the Hi-Fi GAN decoder model to obtain the final audio waveform.</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> <!-- HTML_TAG_END --><p></p> </div></div> <p data-svelte-h="svelte-1j5yxmh">The <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsModel">VitsModel</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a 
id="transformers.VitsModel.forward.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VitsModel.forward.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> VitsTokenizer, VitsModel, set_seed <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = VitsTokenizer.from_pretrained(<span class="hljs-string">"facebook/mms-tts-eng"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = VitsModel.from_pretrained(<span class="hljs-string">"facebook/mms-tts-eng"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(text=<span class="hljs-string">"Hello - my dog is cute"</span>, return_tensors=<span class="hljs-string">"pt"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>set_seed(<span class="hljs-number">555</span>) <span class="hljs-comment"># make deterministic</span> <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">with</span> torch.no_grad(): <span class="hljs-meta">... 
</span> outputs = model(inputs[<span class="hljs-string">"input_ids"</span>]) <span class="hljs-meta">&gt;&gt;&gt; </span>outputs.waveform.shape torch.Size([<span class="hljs-number">1</span>, <span class="hljs-number">45824</span>])<!-- HTML_TAG_END --></pre></div></div></div></div> <p></p> <script> { __sveltekit_1yybmhh = { assets: "/docs/transformers/v4.34.0/en", base: "/docs/transformers/v4.34.0/en", env: {} }; const element = document.currentScript.parentElement; const data = [null,null]; Promise.all([ import("/docs/transformers/v4.34.0/en/_app/immutable/entry/start.c2db227a.js"), import("/docs/transformers/v4.34.0/en/_app/immutable/entry/app.879d9b87.js") ]).then(([kit, app]) => { kit.start(app, element, { node_ids: [0, 268], data, form: null, error: null }); }); } </script> <!-- HTML_TAG_END --></div> <div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/unispeech-sat" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>UniSpeech-SAT</a> <a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">Wav2Vec2<span class="ml-2 translate-y-px">→</span></a></div></div></div> <div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" data-props="{&quot;chapter&quot;:{&quot;title&quot;:&quot;VITS&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;vits&quot;,&quot;url&quot;:&quot;#vits&quot;,&quot;sections&quot;:[{&quot;title&quot;:&quot;Overview&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;overview&quot;,&quot;url&quot;:&quot;#overview&quot;},{&quot;title&quot;:&quot;Model Usage&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model-usage&quot;,&quot;url&quot;:&quot;#model-usage&quot;},{&quot;title&quot;:&quot;VitsConfig&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.VitsConfig&quot;,&quot;url&quot;:&quot;#transformers.VitsConfig&quot;},{&quot;title&quot;:&quot;VitsTokenizer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.VitsTokenizer&quot;,&quot;url&quot;:&quot;#transformers.VitsTokenizer&quot;},{&quot;title&quot;:&quot;VitsModel&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.VitsModel&quot;,&quot;url&quot;:&quot;#transformers.VitsModel&quot;}]}}" data-target="SubSideMenu"> <nav class="hidden h-screen w-[270px] flex-none flex-col space-y-3 overflow-y-auto break-words border-l pt-24 pl-6 pr-10 pb-16 text-sm lg:flex 2xl:w-[305px]"><a href="#vits" class=" text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-vits"><!-- HTML_TAG_START -->VITS<!-- HTML_TAG_END --></a> <a href="#overview" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-overview"><!-- HTML_TAG_START --><wbr>Overview<!-- HTML_TAG_END --></a> <a href="#model-usage" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-model-usage"><!-- HTML_TAG_START --><wbr>Model <wbr>Usage<!-- HTML_TAG_END --></a> <a href="#transformers.VitsConfig" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" 
id="nav-transformers.VitsConfig"><!-- HTML_TAG_START --><wbr>Vits<wbr>Config<!-- HTML_TAG_END --></a> <a href="#transformers.VitsTokenizer" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.VitsTokenizer"><!-- HTML_TAG_START --><wbr>Vits<wbr>Tokenizer<!-- HTML_TAG_END --></a> <a href="#transformers.VitsModel" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.VitsModel"><!-- HTML_TAG_START --><wbr>Vits<wbr>Model<!-- HTML_TAG_END --></a> </nav></div></div></div> <div id="doc-footer"></div></main> </div> <script> import("/front/build/kube-b0520c1/index.js"); window.moonSha = "kube-b0520c1/"; window.hubConfig = JSON.parse(`{"features":{"signupDisabled":false},"sshGitUrl":"git@hf.co","moonHttpUrl":"https://huggingface.co","captchaApiKey":"bd5f2066-93dc-4bdd-a64b-a24646ca3859","stripePublicKey":"pk_live_x2tdjFXBCvXo2FFmMybezpeM00J6gPCAAc","environment":"production","userAgent":"HuggingFace (production)"}`); </script> <!-- Stripe --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://js.stripe.com/v3/"; script.async = true; document.head.appendChild(script); } </script> <!-- Google analytics v4 --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL"; script.async = true; document.head.appendChild(script); window.dataLayer = window.dataLayer || []; function gtag() { if (window.dataLayer !== undefined) { window.dataLayer.push(arguments); } } gtag("js", new Date()); gtag("config", "G-8Q63TH4CSL", { page_path: "/docs/transformers/v4.34.0/en/model_doc/vits" }); /// ^ See https://developers.google.com/analytics/devguides/collection/gtagjs/pages gtag("consent", "default", { ad_storage: "denied", analytics_storage: "denied" }); /// ^ See https://developers.google.com/tag-platform/gtagjs/reference#consent /// TODO: ask the user for their consent and update this with gtag('consent', 'update') } </script> <!-- Google Analytics v3 (deprecated) --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { (function (i, s, o, g, r, a, m) { i["GoogleAnalyticsObject"] = r; (i[r] = i[r] || function () { (i[r].q = i[r].q || []).push(arguments); }), (i[r].l = 1 * new Date()); (a = s.createElement(o)), (m = s.getElementsByTagName(o)[0]); a.async = 1; a.src = g; m.parentNode.insertBefore(a, m); })(window, document, "script", "https://www.google-analytics.com/analytics.js", "ganalytics"); ganalytics("create", "UA-83738774-2", "auto"); ganalytics("send", "pageview", "/docs/transformers/v4.34.0/en/model_doc/vits"); } </script> <iframe name="__privateStripeMetricsController0330" frameborder="0" allowtransparency="true" scrolling="no" role="presentation" allow="payment *" src="https://js.stripe.com/v3/m-outer-27c67c0d52761104439bb051c7856ab1.html#url=https%3A%2F%2Fhuggingface.co%2Fdocs%2Ftransformers%2Fv4.34.0%2Fen%2Fmodel_doc%2Fvits&amp;title=VITS&amp;referrer=&amp;muid=b15a8ef9-7618-4d98-9abd-1d7fdb18f47df4c702&amp;sid=0da2c795-975c-45a5-a090-0475ca1e345f07aeed&amp;version=6&amp;preview=false" aria-hidden="true" tabindex="-1" style="border: none !important; margin: 0px !important; padding: 0px !important; width: 1px !important; min-width: 100% !important; overflow: hidden !important; display: block !important; 
visibility: hidden !important; position: fixed !important; height: 1px !important; pointer-events: none !important; user-select: none !important;"></iframe></body></html>
2023-10-05T13:33:27.260Z
Video Vision Transformer (ViViT)
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/vivit
# Video Vision Transformer (ViViT)

## Overview

The ViViT model was proposed in [ViViT: A Video Vision Transformer](https://arxiv.org/abs/2103.15691) by Anurag Arnab, Mostafa Dehghani, Georg Heigold, Chen Sun, Mario Lučić, Cordelia Schmid. The paper proposes one of the first successful sets of pure-transformer based models for video understanding.

The abstract from the paper is the following:

_We present pure-transformer based models for video classification, drawing upon the recent success of such models in image classification. Our model extracts spatio-temporal tokens from the input video, which are then encoded by a series of transformer layers. In order to handle the long sequences of tokens encountered in video, we propose several, efficient variants of our model which factorise the spatial- and temporal-dimensions of the input. Although transformer-based models are known to only be effective when large training datasets are available, we show how we can effectively regularise the model during training and leverage pretrained image models to be able to train on comparatively small datasets. We conduct thorough ablation studies, and achieve state-of-the-art results on multiple video classification benchmarks including Kinetics 400 and 600, Epic Kitchens, Something-Something v2 and Moments in Time, outperforming prior methods based on deep 3D convolutional networks._

This model was contributed by [jegormeister](https://huggingface.co/jegormeister). The original code (written in JAX) can be found [here](https://github.com/google-research/scenic/tree/main/scenic/projects/vivit).

## VivitConfig

### class transformers.VivitConfig

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vivit/configuration_vivit.py#L31)

( image\_size = 224 num\_frames = 32 tubelet\_size = \[2, 16, 16\] num\_channels = 3 hidden\_size = 768 num\_hidden\_layers = 12 num\_attention\_heads = 12 intermediate\_size = 3072 hidden\_act = 'gelu\_fast' hidden\_dropout\_prob = 0.0 attention\_probs\_dropout\_prob = 0.0 initializer\_range = 0.02 layer\_norm\_eps = 1e-06 qkv\_bias = True \*\*kwargs )

Parameters

- **image\_size** (`int`, _optional_, defaults to 224) — The size (resolution) of each image.
- **num\_frames** (`int`, _optional_, defaults to 32) — The number of frames in each video.
- **tubelet\_size** (`List[int]`, _optional_, defaults to `[2, 16, 16]`) — The size (resolution) of each tubelet.
- **num\_channels** (`int`, _optional_, defaults to 3) — The number of input channels.
- **hidden\_size** (`int`, _optional_, defaults to 768) — Dimensionality of the encoder layers and the pooler layer.
- **num\_hidden\_layers** (`int`, _optional_, defaults to 12) — Number of hidden layers in the Transformer encoder.
- **num\_attention\_heads** (`int`, _optional_, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder.
- **intermediate\_size** (`int`, _optional_, defaults to 3072) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
- **hidden\_act** (`str` or `function`, _optional_, defaults to `"gelu_fast"`) — The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"`, `"gelu_fast"` and `"gelu_new"` are supported.
- **hidden\_dropout\_prob** (`float`, _optional_, defaults to 0.0) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
- **attention\_probs\_dropout\_prob** (`float`, _optional_, defaults to 0.0) — The dropout ratio for the attention probabilities.
- **initializer\_range** (`float`, _optional_, defaults to 0.02) — The standard deviation of the truncated\_normal\_initializer for initializing all weight matrices.
- **layer\_norm\_eps** (`float`, _optional_, defaults to 1e-06) — The epsilon used by the layer normalization layers.
- **qkv\_bias** (`bool`, _optional_, defaults to `True`) — Whether to add a bias to the queries, keys and values.

This is the configuration class to store the configuration of a [VivitModel](/docs/transformers/v4.34.0/en/model_doc/vivit#transformers.VivitModel). It is used to instantiate a ViViT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the ViViT [google/vivit-b-16x2-kinetics400](https://huggingface.co/google/vivit-b-16x2-kinetics400) architecture.

Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information.

Example:

```
>>> from transformers import VivitConfig, VivitModel

>>> # Initializing a ViViT google/vivit-b-16x2-kinetics400 style configuration
>>> configuration = VivitConfig()

>>> # Initializing a (randomly weighted) model from that configuration
>>> model = VivitModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```

## VivitImageProcessor

### class transformers.VivitImageProcessor

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vivit/image_processing_vivit.py#L64)

( do\_resize: bool = True size: typing.Dict\[str, int\] = None resample: Resampling = <Resampling.BILINEAR: 2> do\_center\_crop: bool = True crop\_size: typing.Dict\[str, int\] = None do\_rescale: bool = True rescale\_factor: typing.Union\[int, float\] = 0.00784313725490196 offset: bool = True do\_normalize: bool = True image\_mean: typing.Union\[float, typing.List\[float\], NoneType\] = None image\_std: typing.Union\[float, typing.List\[float\], NoneType\] = None \*\*kwargs )

Parameters

- **do\_resize** (`bool`, _optional_, defaults to `True`) — Whether to resize the image’s (height, width) dimensions to the specified `size`. Can be overridden by the `do_resize` parameter in the `preprocess` method.
- **size** (`Dict[str, int]`, _optional_, defaults to `{"shortest_edge": 256}`) — Size of the output image after resizing. The shortest edge of the image will be resized to `size["shortest_edge"]` while maintaining the aspect ratio of the original image. Can be overridden by `size` in the `preprocess` method.
- **resample** (`PILImageResampling`, _optional_, defaults to `PILImageResampling.BILINEAR`) — Resampling filter to use if resizing the image. Can be overridden by the `resample` parameter in the `preprocess` method.
- **do\_center\_crop** (`bool`, _optional_, defaults to `True`) — Whether to center crop the image to the specified `crop_size`. Can be overridden by the `do_center_crop` parameter in the `preprocess` method.
- **crop\_size** (`Dict[str, int]`, _optional_, defaults to `{"height": 224, "width": 224}`) — Size of the image after applying the center crop. Can be overridden by the `crop_size` parameter in the `preprocess` method.
- **do\_rescale** (`bool`, _optional_, defaults to `True`) — Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the `do_rescale` parameter in the `preprocess` method.
- **rescale\_factor** (`int` or `float`, _optional_, defaults to 1/127.5) — Defines the scale factor to use if rescaling the image. Can be overridden by the `rescale_factor` parameter in the `preprocess` method.
- **offset** (`bool`, _optional_, defaults to `True`) — Whether to scale the image in both negative and positive directions. Can be overridden by the `offset` parameter in the `preprocess` method.
- **do\_normalize** (`bool`, _optional_, defaults to `True`) — Whether to normalize the image. Can be overridden by the `do_normalize` parameter in the `preprocess` method.
- **image\_mean** (`float` or `List[float]`, _optional_, defaults to `IMAGENET_STANDARD_MEAN`) — Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method.
- **image\_std** (`float` or `List[float]`, _optional_, defaults to `IMAGENET_STANDARD_STD`) — Standard deviation to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method.

Constructs a Vivit image processor.
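As a quick illustration of the defaults listed above, here is a minimal sketch (assuming only that `transformers` is installed) that constructs an image processor with its default settings and inspects the values that drive resizing, cropping and rescaling; the exact reprs are shown for illustration and may differ slightly across versions:

```
>>> from transformers import VivitImageProcessor

>>> # Default construction uses the values documented above
>>> image_processor = VivitImageProcessor()
>>> image_processor.size
{'shortest_edge': 256}
>>> image_processor.crop_size
{'height': 224, 'width': 224}
>>> round(1 / image_processor.rescale_factor, 1)  # rescale_factor defaults to 1/127.5
127.5
```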
#### preprocess

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vivit/image_processing_vivit.py#L285)

( videos: typing.Union\[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List\[ForwardRef('PIL.Image.Image')\], typing.List\[numpy.ndarray\], typing.List\[ForwardRef('torch.Tensor')\]\] do\_resize: bool = None size: typing.Dict\[str, int\] = None resample: Resampling = None do\_center\_crop: bool = None crop\_size: typing.Dict\[str, int\] = None do\_rescale: bool = None rescale\_factor: float = None offset: bool = None do\_normalize: bool = None image\_mean: typing.Union\[float, typing.List\[float\], NoneType\] = None image\_std: typing.Union\[float, typing.List\[float\], NoneType\] = None return\_tensors: typing.Union\[str, transformers.utils.generic.TensorType, NoneType\] = None data\_format: ChannelDimension = <ChannelDimension.FIRST: 'channels\_first'> input\_data\_format: typing.Union\[str, transformers.image\_utils.ChannelDimension, NoneType\] = None \*\*kwargs )

Parameters

- **videos** (`ImageInput`) — Video frames to preprocess. Expects a single or batch of video frames with pixel values ranging from 0 to 255. If passing in frames with pixel values between 0 and 1, set `do_rescale=False`.
- **do\_resize** (`bool`, _optional_, defaults to `self.do_resize`) — Whether to resize the image.
- **size** (`Dict[str, int]`, _optional_, defaults to `self.size`) — Size of the image after applying resize.
- **resample** (`PILImageResampling`, _optional_, defaults to `self.resample`) — Resampling filter to use if resizing the image. This can be one of the enum `PILImageResampling`. Only has an effect if `do_resize` is set to `True`.
- **do\_center\_crop** (`bool`, _optional_, defaults to `self.do_center_crop`) — Whether to center crop the image.
- **crop\_size** (`Dict[str, int]`, _optional_, defaults to `self.crop_size`) — Size of the image after applying the center crop.
- **do\_rescale** (`bool`, _optional_, defaults to `self.do_rescale`) — Whether to rescale the image values to `[-1, 1]` if `offset` is `True`, `[0, 1]` otherwise.
- **rescale\_factor** (`float`, _optional_, defaults to `self.rescale_factor`) — Rescale factor to rescale the image by if `do_rescale` is set to `True`.
- **offset** (`bool`, _optional_, defaults to `self.offset`) — Whether to scale the image in both negative and positive directions.
- **do\_normalize** (`bool`, _optional_, defaults to `self.do_normalize`) — Whether to normalize the image.
- **image\_mean** (`float` or `List[float]`, _optional_, defaults to `self.image_mean`) — Image mean.
- **image\_std** (`float` or `List[float]`, _optional_, defaults to `self.image_std`) — Image standard deviation.
- **return\_tensors** (`str` or `TensorType`, _optional_) — The type of tensors to return. Can be one of:
  - Unset: Return a list of `np.ndarray`.
  - `TensorType.TENSORFLOW` or `'tf'`: Return a batch of type `tf.Tensor`.
  - `TensorType.PYTORCH` or `'pt'`: Return a batch of type `torch.Tensor`.
  - `TensorType.NUMPY` or `'np'`: Return a batch of type `np.ndarray`.
  - `TensorType.JAX` or `'jax'`: Return a batch of type `jax.numpy.ndarray`.
- **data\_format** (`ChannelDimension` or `str`, _optional_, defaults to `ChannelDimension.FIRST`) — The channel dimension format for the output image. Can be one of:
  - `ChannelDimension.FIRST`: image in (num\_channels, height, width) format.
  - `ChannelDimension.LAST`: image in (height, width, num\_channels) format.
  - Unset: Use the inferred channel dimension format of the input image.
- **input\_data\_format** (`ChannelDimension` or `str`, _optional_) — The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. Can be one of:
  - `"channels_first"` or `ChannelDimension.FIRST`: image in (num\_channels, height, width) format.
  - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num\_channels) format.
  - `"none"` or `ChannelDimension.NONE`: image in (height, width) format.

Preprocess an image or batch of images.
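The per-call arguments mirror the constructor arguments, so any of the defaults can be overridden for a single call. The following minimal sketch (random frames standing in for a real decoded video) illustrates the `do_rescale=False` case mentioned for the `videos` argument above:

```
>>> import numpy as np
>>> from transformers import VivitImageProcessor

>>> image_processor = VivitImageProcessor.from_pretrained("google/vivit-b-16x2-kinetics400")

>>> # 32 dummy RGB frames that are already scaled to [0, 1]
>>> video = list(np.random.rand(32, 256, 256, 3).astype(np.float32))

>>> # Skip rescaling since the frame values are not in [0, 255]
>>> inputs = image_processor(video, do_rescale=False, return_tensors="pt")
>>> inputs["pixel_values"].shape
torch.Size([1, 32, 3, 224, 224])
```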
## VivitModel

### class transformers.VivitModel

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vivit/modeling_vivit.py#L460)

( config add\_pooling\_layer = True )

Parameters

- **config** ([VivitConfig](/docs/transformers/v4.34.0/en/model_doc/vivit#transformers.VivitConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

The bare ViViT Transformer model outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
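To make the expected tensor layout concrete, here is a minimal sketch (randomly initialised weights, random pixel values, no checkpoint download) that runs a dummy clip through `VivitModel`. With the default configuration, the 32 frames are split into 32/2 = 16 temporal slices of 14 × 14 tubelets (224/16 per side), which together with the CLS token gives the 3137 tokens also seen in the forward example further below:

```
>>> import torch
>>> from transformers import VivitConfig, VivitModel

>>> model = VivitModel(VivitConfig())  # randomly initialised, default architecture

>>> # pixel_values: (batch_size, num_frames, num_channels, height, width)
>>> pixel_values = torch.randn(1, 32, 3, 224, 224)
>>> with torch.no_grad():
...     outputs = model(pixel_values)
>>> outputs.last_hidden_state.shape  # 1 + 16 * 14 * 14 = 3137 tokens
torch.Size([1, 3137, 768])
```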
#### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vivit/modeling_vivit.py#L488) ( pixel\_values: typing.Optional\[torch.FloatTensor\] = None head\_mask: typing.Optional\[torch.FloatTensor\] = None output\_attentions: typing.Optional\[bool\] = None output\_hidden\_states: typing.Optional\[bool\] = None return\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.BaseModelOutputWithPooling](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutputWithPooling) or `tuple(torch.FloatTensor)` Parameters - **pixel\_values** (`torch.FloatTensor` of shape `(batch_size, num_frames, num_channels, height, width)`) — Pixel values. Pixel values can be obtained using [VivitImageProcessor](/docs/transformers/v4.34.0/en/model_doc/vivit#transformers.VivitImageProcessor). See [VivitImageProcessor.preprocess()](/docs/transformers/v4.34.0/en/model_doc/vivit#transformers.VivitImageProcessor.preprocess) for details. - **head\_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, _optional_) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. - **output\_attentions** (`bool`, _optional_) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. - **output\_hidden\_states** (`bool`, _optional_) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. - **return\_dict** (`bool`, _optional_) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple. A [transformers.modeling\_outputs.BaseModelOutputWithPooling](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutputWithPooling) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([VivitConfig](/docs/transformers/v4.34.0/en/model_doc/vivit#transformers.VivitConfig)) and inputs. - **last\_hidden\_state** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`) — Sequence of hidden-states at the output of the last layer of the model. - **pooler\_output** (`torch.FloatTensor` of shape `(batch_size, hidden_size)`) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. - **hidden\_states** (`tuple(torch.FloatTensor)`, _optional_, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. 
- **attentions** (`tuple(torch.FloatTensor)`, _optional_, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The [VivitModel](/docs/transformers/v4.34.0/en/model_doc/vivit#transformers.VivitModel) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: ``` >>> import av >>> import numpy as np >>> from transformers import VivitImageProcessor, VivitModel >>> from huggingface_hub import hf_hub_download >>> np.random.seed(0) >>> def read_video_pyav(container, indices): ... ''' ... Decode the video with PyAV decoder. ... Args: ... container (`av.container.input.InputContainer`): PyAV container. ... indices (`List[int]`): List of frame indices to decode. ... Returns: ... result (np.ndarray): np array of decoded frames of shape (num_frames, height, width, 3). ... ''' ... frames = [] ... container.seek(0) ... start_index = indices[0] ... end_index = indices[-1] ... for i, frame in enumerate(container.decode(video=0)): ... if i > end_index: ... break ... if i >= start_index and i in indices: ... frames.append(frame) ... return np.stack([x.to_ndarray(format="rgb24") for x in frames]) >>> def sample_frame_indices(clip_len, frame_sample_rate, seg_len): ... ''' ... Sample a given number of frame indices from the video. ... Args: ... clip_len (`int`): Total number of frames to sample. ... frame_sample_rate (`int`): Sample every n-th frame. ... seg_len (`int`): Maximum allowed index of sample's last frame. ... Returns: ... indices (`List[int]`): List of sampled frame indices ... ''' ... converted_len = int(clip_len * frame_sample_rate) ... end_idx = np.random.randint(converted_len, seg_len) ... start_idx = end_idx - converted_len ... indices = np.linspace(start_idx, end_idx, num=clip_len) ... indices = np.clip(indices, start_idx, end_idx - 1).astype(np.int64) ... return indices >>> >>> file_path = hf_hub_download( ... repo_id="nielsr/video-demo", filename="eating_spaghetti.mp4", repo_type="dataset" ... ) >>> container = av.open(file_path) >>> >>> indices = sample_frame_indices(clip_len=32, frame_sample_rate=1, seg_len=container.streams.video[0].frames) >>> video = read_video_pyav(container=container, indices=indices) >>> image_processor = VivitImageProcessor.from_pretrained("google/vivit-b-16x2-kinetics400") >>> model = VivitModel.from_pretrained("google/vivit-b-16x2-kinetics400") >>> >>> inputs = image_processor(list(video), return_tensors="pt") >>> >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state >>> list(last_hidden_states.shape) [1, 3137, 768] ``` ## VivitForVideoClassification ### class transformers.VivitForVideoClassification [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vivit/modeling_vivit.py#L614) ( config ) Parameters - **config** ([VivitConfig](/docs/transformers/v4.34.0/en/model_doc/vivit#transformers.VivitConfig)) — Model configuration class with all the parameters of the model. 
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. ViViT Transformer model with a video classification head on top (a linear layer on top of the final hidden state of the \[CLS\] token) e.g. for Kinetics-400. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vivit/modeling_vivit.py#L627) ( pixel\_values: typing.Optional\[torch.FloatTensor\] = None head\_mask: typing.Optional\[torch.FloatTensor\] = None labels: typing.Optional\[torch.LongTensor\] = None output\_attentions: typing.Optional\[bool\] = None output\_hidden\_states: typing.Optional\[bool\] = None return\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.ImageClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.ImageClassifierOutput) or `tuple(torch.FloatTensor)` Parameters - **pixel\_values** (`torch.FloatTensor` of shape `(batch_size, num_frames, num_channels, height, width)`) — Pixel values. Pixel values can be obtained using [VivitImageProcessor](/docs/transformers/v4.34.0/en/model_doc/vivit#transformers.VivitImageProcessor). See [VivitImageProcessor.preprocess()](/docs/transformers/v4.34.0/en/model_doc/vivit#transformers.VivitImageProcessor.preprocess) for details. - **head\_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, _optional_) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. - **output\_attentions** (`bool`, _optional_) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. - **output\_hidden\_states** (`bool`, _optional_) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. - **return\_dict** (`bool`, _optional_) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple. - **labels** (`torch.LongTensor` of shape `(batch_size,)`, _optional_) — Labels for computing the image classification/regression loss. Indices should be in `[0, ..., config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If `config.num_labels > 1` a classification loss is computed (Cross-Entropy). A [transformers.modeling\_outputs.ImageClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.ImageClassifierOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([VivitConfig](/docs/transformers/v4.34.0/en/model_doc/vivit#transformers.VivitConfig)) and inputs. - **loss** (`torch.FloatTensor` of shape `(1,)`, _optional_, returned when `labels` is provided) — Classification (or regression if config.num\_labels==1) loss. 
- **logits** (`torch.FloatTensor` of shape `(batch_size, config.num_labels)`) — Classification (or regression if config.num\_labels==1) scores (before SoftMax). - **hidden\_states** (`tuple(torch.FloatTensor)`, _optional_, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each stage) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states (also called feature maps) of the model at the output of each stage. - **attentions** (`tuple(torch.FloatTensor)`, _optional_, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, patch_size, sequence_length)`. Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The [VivitForVideoClassification](/docs/transformers/v4.34.0/en/model_doc/vivit#transformers.VivitForVideoClassification) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: ``` >>> import av >>> import numpy as np >>> import torch >>> from transformers import VivitImageProcessor, VivitForVideoClassification >>> from huggingface_hub import hf_hub_download >>> np.random.seed(0) >>> def read_video_pyav(container, indices): ... ''' ... Decode the video with PyAV decoder. ... Args: ... container (`av.container.input.InputContainer`): PyAV container. ... indices (`List[int]`): List of frame indices to decode. ... Returns: ... result (np.ndarray): np array of decoded frames of shape (num_frames, height, width, 3). ... ''' ... frames = [] ... container.seek(0) ... start_index = indices[0] ... end_index = indices[-1] ... for i, frame in enumerate(container.decode(video=0)): ... if i > end_index: ... break ... if i >= start_index and i in indices: ... frames.append(frame) ... return np.stack([x.to_ndarray(format="rgb24") for x in frames]) >>> def sample_frame_indices(clip_len, frame_sample_rate, seg_len): ... ''' ... Sample a given number of frame indices from the video. ... Args: ... clip_len (`int`): Total number of frames to sample. ... frame_sample_rate (`int`): Sample every n-th frame. ... seg_len (`int`): Maximum allowed index of sample's last frame. ... Returns: ... indices (`List[int]`): List of sampled frame indices ... ''' ... converted_len = int(clip_len * frame_sample_rate) ... end_idx = np.random.randint(converted_len, seg_len) ... start_idx = end_idx - converted_len ... indices = np.linspace(start_idx, end_idx, num=clip_len) ... indices = np.clip(indices, start_idx, end_idx - 1).astype(np.int64) ... return indices >>> >>> file_path = hf_hub_download( ... repo_id="nielsr/video-demo", filename="eating_spaghetti.mp4", repo_type="dataset" ... 
) >>> container = av.open(file_path) >>> >>> indices = sample_frame_indices(clip_len=32, frame_sample_rate=4, seg_len=container.streams.video[0].frames) >>> video = read_video_pyav(container=container, indices=indices) >>> image_processor = VivitImageProcessor.from_pretrained("google/vivit-b-16x2-kinetics400") >>> model = VivitForVideoClassification.from_pretrained("google/vivit-b-16x2-kinetics400") >>> inputs = image_processor(list(video), return_tensors="pt") >>> with torch.no_grad(): ... outputs = model(**inputs) ... logits = outputs.logits >>> >>> predicted_label = logits.argmax(-1).item() >>> print(model.config.id2label[predicted_label]) LABEL_116 ```
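If more than the single best class is of interest, the logits from the classification example above can be turned into probabilities and ranked. This short sketch continues directly from the previous block (it reuses its `logits` and `model`); the label names come from the checkpoint's `id2label` mapping:

```
>>> # Probabilities over the Kinetics-400 classes and the top 5 predictions
>>> probs = torch.softmax(logits, dim=-1)
>>> top5 = probs.topk(5, dim=-1)
>>> for score, idx in zip(top5.values[0], top5.indices[0]):
...     print(f"{model.config.id2label[idx.item()]}: {score.item():.3f}")
```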
classification&quot;,&quot;id&quot;:&quot;tasks/audio_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/audio_classification&quot;},{&quot;title&quot;:&quot;Automatic speech recognition&quot;,&quot;id&quot;:&quot;tasks/asr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/asr&quot;}]},{&quot;title&quot;:&quot;Computer Vision&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Image classification&quot;,&quot;id&quot;:&quot;tasks/image_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/image_classification&quot;},{&quot;title&quot;:&quot;Semantic segmentation&quot;,&quot;id&quot;:&quot;tasks/semantic_segmentation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/semantic_segmentation&quot;},{&quot;title&quot;:&quot;Video classification&quot;,&quot;id&quot;:&quot;tasks/video_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/video_classification&quot;},{&quot;title&quot;:&quot;Object detection&quot;,&quot;id&quot;:&quot;tasks/object_detection&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/object_detection&quot;},{&quot;title&quot;:&quot;Zero-shot object detection&quot;,&quot;id&quot;:&quot;tasks/zero_shot_object_detection&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/zero_shot_object_detection&quot;},{&quot;title&quot;:&quot;Zero-shot image classification&quot;,&quot;id&quot;:&quot;tasks/zero_shot_image_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/zero_shot_image_classification&quot;},{&quot;title&quot;:&quot;Depth estimation&quot;,&quot;id&quot;:&quot;tasks/monocular_depth_estimation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/monocular_depth_estimation&quot;}]},{&quot;title&quot;:&quot;Multimodal&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Image captioning&quot;,&quot;id&quot;:&quot;tasks/image_captioning&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/image_captioning&quot;},{&quot;title&quot;:&quot;Document Question Answering&quot;,&quot;id&quot;:&quot;tasks/document_question_answering&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/document_question_answering&quot;},{&quot;title&quot;:&quot;Visual Question Answering&quot;,&quot;id&quot;:&quot;tasks/visual_question_answering&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/visual_question_answering&quot;},{&quot;title&quot;:&quot;Text to speech&quot;,&quot;id&quot;:&quot;tasks/text-to-speech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/text-to-speech&quot;}]},{&quot;title&quot;:&quot;Generation&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Customize the generation strategy&quot;,&quot;id&quot;:&quot;generation_strategies&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/generation_strategies&quot;}]},{&quot;title&quot;:&quot;Prompting&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Image tasks with IDEFICS&quot;,&quot;id&quot;:&quot;tasks/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/idefics&quot;}]}]},{&quot;title&quot;:&quot;Developer guides&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Use fast tokenizers from 🤗 
Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;fast_tokenizers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/fast_tokenizers&quot;},{&quot;title&quot;:&quot;Run inference with multilingual models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;multilingual&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/multilingual&quot;},{&quot;title&quot;:&quot;Use model-specific APIs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;create_a_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/create_a_model&quot;},{&quot;title&quot;:&quot;Share a custom model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;custom_models&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/custom_models&quot;},{&quot;title&quot;:&quot;Templates for chat models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;chat_templating&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/chat_templating&quot;},{&quot;title&quot;:&quot;Run training on Amazon SageMaker&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;sagemaker&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/sagemaker&quot;},{&quot;title&quot;:&quot;Export to ONNX&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;serialization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/serialization&quot;},{&quot;title&quot;:&quot;Export to TFLite&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tflite&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tflite&quot;},{&quot;title&quot;:&quot;Export to TorchScript&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;torchscript&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/torchscript&quot;},{&quot;title&quot;:&quot;Benchmarks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;benchmarks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/benchmarks&quot;},{&quot;title&quot;:&quot;Notebooks with examples&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;notebooks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/notebooks&quot;},{&quot;title&quot;:&quot;Community resources&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;community&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/community&quot;},{&quot;title&quot;:&quot;Custom Tools and Prompts&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;custom_tools&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/custom_tools&quot;},{&quot;title&quot;:&quot;Troubleshoot&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;troubleshooting&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/troubleshooting&quot;}]},{&quot;title&quot;:&quot;Performance and scalability&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Overview&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;performance&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/performance&quot;},{&quot;title&quot;:&quot;Efficient training techniques&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Methods and tools for efficient training on a single GPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_gpu_one&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_gpu_one&quot;},{&quot;title&quot;:&quot;Multiple GPUs and parallelism&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_gpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_gpu_many&quot;},{&quot;title&quot;:&quot;Efficient 
training on CPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_cpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_cpu&quot;},{&quot;title&quot;:&quot;Distributed CPU training&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_cpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_cpu_many&quot;},{&quot;title&quot;:&quot;Training on TPUs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_tpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_tpu&quot;},{&quot;title&quot;:&quot;Training on TPU with TensorFlow&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_tpu_tf&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_tpu_tf&quot;},{&quot;title&quot;:&quot;Training on Specialized Hardware&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_special&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_special&quot;},{&quot;title&quot;:&quot;Custom hardware for training&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_hardware&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_hardware&quot;},{&quot;title&quot;:&quot;Hyperparameter Search using Trainer API&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;hpo_train&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/hpo_train&quot;}]},{&quot;title&quot;:&quot;Optimizing inference&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Inference on CPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_cpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_cpu&quot;},{&quot;title&quot;:&quot;Inference on one GPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_gpu_one&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_gpu_one&quot;},{&quot;title&quot;:&quot;Inference on many GPUs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_gpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_gpu_many&quot;},{&quot;title&quot;:&quot;Inference on Specialized Hardware&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_special&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_special&quot;}]},{&quot;title&quot;:&quot;Instantiating a big model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;big_models&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/big_models&quot;},{&quot;title&quot;:&quot;Troubleshooting&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;debugging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/debugging&quot;},{&quot;title&quot;:&quot;XLA Integration for TensorFlow Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tf_xla&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tf_xla&quot;},{&quot;title&quot;:&quot;Optimize inference using `torch.compile()`&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_torch_compile&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_torch_compile&quot;}]},{&quot;title&quot;:&quot;Contribute&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;How to contribute to transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;contributing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/contributing&quot;},{&quot;title&quot;:&quot;How to add a model to 🤗 
Transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_new_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_new_model&quot;},{&quot;title&quot;:&quot;How to convert a 🤗 Transformers model to TensorFlow?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_tensorflow_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_tensorflow_model&quot;},{&quot;title&quot;:&quot;How to add a pipeline to 🤗 Transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_new_pipeline&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_new_pipeline&quot;},{&quot;title&quot;:&quot;Testing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;testing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/testing&quot;},{&quot;title&quot;:&quot;Checks on a Pull Request&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pr_checks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pr_checks&quot;}]},{&quot;title&quot;:&quot;Conceptual guides&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Philosophy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;philosophy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/philosophy&quot;},{&quot;title&quot;:&quot;Glossary&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;glossary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/glossary&quot;},{&quot;title&quot;:&quot;What 🤗 Transformers can do&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;task_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/task_summary&quot;},{&quot;title&quot;:&quot;How 🤗 Transformers solve tasks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tasks_explained&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks_explained&quot;},{&quot;title&quot;:&quot;The Transformer model family&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_summary&quot;},{&quot;title&quot;:&quot;Summary of the tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tokenizer_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tokenizer_summary&quot;},{&quot;title&quot;:&quot;Attention mechanisms&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;attention&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/attention&quot;},{&quot;title&quot;:&quot;Padding and truncation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pad_truncation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pad_truncation&quot;},{&quot;title&quot;:&quot;BERTology&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;bertology&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/bertology&quot;},{&quot;title&quot;:&quot;Perplexity of fixed-length models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perplexity&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perplexity&quot;},{&quot;title&quot;:&quot;Pipelines for webserver inference&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pipeline_webserver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pipeline_webserver&quot;},{&quot;title&quot;:&quot;Model training anatomy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_memory_anatomy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_memory_anatomy&quot;}]},{&quot;title&quot;:&quot;API&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Main 
Classes&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Agents and Tools&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/agent&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/agent&quot;},{&quot;title&quot;:&quot;Auto Classes&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/auto&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/auto&quot;},{&quot;title&quot;:&quot;Callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/callback&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/callback&quot;},{&quot;title&quot;:&quot;Configuration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/configuration&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/configuration&quot;},{&quot;title&quot;:&quot;Data Collator&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/data_collator&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/data_collator&quot;},{&quot;title&quot;:&quot;Keras callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/keras_callbacks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/keras_callbacks&quot;},{&quot;title&quot;:&quot;Logging&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/logging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/logging&quot;},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/model&quot;},{&quot;title&quot;:&quot;Text Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/text_generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/text_generation&quot;},{&quot;title&quot;:&quot;ONNX&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/onnx&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/onnx&quot;},{&quot;title&quot;:&quot;Optimization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/optimizer_schedules&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/optimizer_schedules&quot;},{&quot;title&quot;:&quot;Model outputs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/output&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/output&quot;},{&quot;title&quot;:&quot;Pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/pipelines&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/pipelines&quot;},{&quot;title&quot;:&quot;Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/processors&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/processors&quot;},{&quot;title&quot;:&quot;Quantization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/quantization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/quantization&quot;},{&quot;title&quot;:&quot;Tokenizer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/tokenizer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/tokenizer&quot;},{&quot;title&quot;:&quot;Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/trainer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/trainer&quot;},{&quot;title&quot;:&quot;DeepSpeed 
Integration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/deepspeed&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/deepspeed&quot;},{&quot;title&quot;:&quot;Feature Extractor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/feature_extractor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/feature_extractor&quot;},{&quot;title&quot;:&quot;Image Processor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/image_processor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/image_processor&quot;}]},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Text models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALBERT&quot;,&quot;id&quot;:&quot;model_doc/albert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/albert&quot;},{&quot;title&quot;:&quot;BART&quot;,&quot;id&quot;:&quot;model_doc/bart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bart&quot;},{&quot;title&quot;:&quot;BARThez&quot;,&quot;id&quot;:&quot;model_doc/barthez&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/barthez&quot;},{&quot;title&quot;:&quot;BARTpho&quot;,&quot;id&quot;:&quot;model_doc/bartpho&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bartpho&quot;},{&quot;title&quot;:&quot;BERT&quot;,&quot;id&quot;:&quot;model_doc/bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert&quot;},{&quot;title&quot;:&quot;BertGeneration&quot;,&quot;id&quot;:&quot;model_doc/bert-generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-generation&quot;},{&quot;title&quot;:&quot;BertJapanese&quot;,&quot;id&quot;:&quot;model_doc/bert-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-japanese&quot;},{&quot;title&quot;:&quot;Bertweet&quot;,&quot;id&quot;:&quot;model_doc/bertweet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bertweet&quot;},{&quot;title&quot;:&quot;BigBird&quot;,&quot;id&quot;:&quot;model_doc/big_bird&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/big_bird&quot;},{&quot;title&quot;:&quot;BigBirdPegasus&quot;,&quot;id&quot;:&quot;model_doc/bigbird_pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bigbird_pegasus&quot;},{&quot;title&quot;:&quot;BioGpt&quot;,&quot;id&quot;:&quot;model_doc/biogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/biogpt&quot;},{&quot;title&quot;:&quot;Blenderbot&quot;,&quot;id&quot;:&quot;model_doc/blenderbot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot&quot;},{&quot;title&quot;:&quot;Blenderbot 
Small&quot;,&quot;id&quot;:&quot;model_doc/blenderbot-small&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot-small&quot;},{&quot;title&quot;:&quot;BLOOM&quot;,&quot;id&quot;:&quot;model_doc/bloom&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bloom&quot;},{&quot;title&quot;:&quot;BORT&quot;,&quot;id&quot;:&quot;model_doc/bort&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bort&quot;},{&quot;title&quot;:&quot;ByT5&quot;,&quot;id&quot;:&quot;model_doc/byt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/byt5&quot;},{&quot;title&quot;:&quot;CamemBERT&quot;,&quot;id&quot;:&quot;model_doc/camembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/camembert&quot;},{&quot;title&quot;:&quot;CANINE&quot;,&quot;id&quot;:&quot;model_doc/canine&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/canine&quot;},{&quot;title&quot;:&quot;CodeGen&quot;,&quot;id&quot;:&quot;model_doc/codegen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/codegen&quot;},{&quot;title&quot;:&quot;CodeLlama&quot;,&quot;id&quot;:&quot;model_doc/code_llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/code_llama&quot;},{&quot;title&quot;:&quot;ConvBERT&quot;,&quot;id&quot;:&quot;model_doc/convbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convbert&quot;},{&quot;title&quot;:&quot;CPM&quot;,&quot;id&quot;:&quot;model_doc/cpm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpm&quot;},{&quot;title&quot;:&quot;CPMANT&quot;,&quot;id&quot;:&quot;model_doc/cpmant&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpmant&quot;},{&quot;title&quot;:&quot;CTRL&quot;,&quot;id&quot;:&quot;model_doc/ctrl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ctrl&quot;},{&quot;title&quot;:&quot;DeBERTa&quot;,&quot;id&quot;:&quot;model_doc/deberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta&quot;},{&quot;title&quot;:&quot;DeBERTa-v2&quot;,&quot;id&quot;:&quot;model_doc/deberta-v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta-v2&quot;},{&quot;title&quot;:&quot;DialoGPT&quot;,&quot;id&quot;:&quot;model_doc/dialogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dialogpt&quot;},{&quot;title&quot;:&quot;DistilBERT&quot;,&quot;id&quot;:&quot;model_doc/distilbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/distilbert&quot;},{&quot;title&quot;:&quot;DPR&quot;,&quot;id&quot;:&quot;model_doc/dpr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpr&quot;},{&quot;title&quot;:&quot;ELECTRA&quot;,&quot;id&quot;:&quot;model_doc/electra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/electra&quot;},{&quot;title&quot;:&quot;Encoder Decoder 
Models&quot;,&quot;id&quot;:&quot;model_doc/encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encoder-decoder&quot;},{&quot;title&quot;:&quot;ERNIE&quot;,&quot;id&quot;:&quot;model_doc/ernie&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie&quot;},{&quot;title&quot;:&quot;ErnieM&quot;,&quot;id&quot;:&quot;model_doc/ernie_m&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie_m&quot;},{&quot;title&quot;:&quot;ESM&quot;,&quot;id&quot;:&quot;model_doc/esm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/esm&quot;},{&quot;title&quot;:&quot;Falcon&quot;,&quot;id&quot;:&quot;model_doc/falcon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/falcon&quot;},{&quot;title&quot;:&quot;FLAN-T5&quot;,&quot;id&quot;:&quot;model_doc/flan-t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-t5&quot;},{&quot;title&quot;:&quot;FLAN-UL2&quot;,&quot;id&quot;:&quot;model_doc/flan-ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-ul2&quot;},{&quot;title&quot;:&quot;FlauBERT&quot;,&quot;id&quot;:&quot;model_doc/flaubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flaubert&quot;},{&quot;title&quot;:&quot;FNet&quot;,&quot;id&quot;:&quot;model_doc/fnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fnet&quot;},{&quot;title&quot;:&quot;FSMT&quot;,&quot;id&quot;:&quot;model_doc/fsmt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fsmt&quot;},{&quot;title&quot;:&quot;Funnel Transformer&quot;,&quot;id&quot;:&quot;model_doc/funnel&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/funnel&quot;},{&quot;title&quot;:&quot;GPT&quot;,&quot;id&quot;:&quot;model_doc/openai-gpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/openai-gpt&quot;},{&quot;title&quot;:&quot;GPT Neo&quot;,&quot;id&quot;:&quot;model_doc/gpt_neo&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neo&quot;},{&quot;title&quot;:&quot;GPT NeoX&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox&quot;},{&quot;title&quot;:&quot;GPT NeoX Japanese&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox_japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese&quot;},{&quot;title&quot;:&quot;GPT-J&quot;,&quot;id&quot;:&quot;model_doc/gptj&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptj&quot;},{&quot;title&quot;:&quot;GPT2&quot;,&quot;id&quot;:&quot;model_doc/gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt2&quot;},{&quot;title&quot;:&quot;GPTBigCode&quot;,&quot;id&quot;:&quot;model_doc/gpt_bigcode&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode&quot;},{&quot;title&quot;:&quot;GPTSAN 
Japanese&quot;,&quot;id&quot;:&quot;model_doc/gptsan-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese&quot;},{&quot;title&quot;:&quot;GPTSw3&quot;,&quot;id&quot;:&quot;model_doc/gpt-sw3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt-sw3&quot;},{&quot;title&quot;:&quot;HerBERT&quot;,&quot;id&quot;:&quot;model_doc/herbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/herbert&quot;},{&quot;title&quot;:&quot;I-BERT&quot;,&quot;id&quot;:&quot;model_doc/ibert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ibert&quot;},{&quot;title&quot;:&quot;Jukebox&quot;,&quot;id&quot;:&quot;model_doc/jukebox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/jukebox&quot;},{&quot;title&quot;:&quot;LED&quot;,&quot;id&quot;:&quot;model_doc/led&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/led&quot;},{&quot;title&quot;:&quot;LLaMA&quot;,&quot;id&quot;:&quot;model_doc/llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama&quot;},{&quot;title&quot;:&quot;Llama2&quot;,&quot;id&quot;:&quot;model_doc/llama2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama2&quot;},{&quot;title&quot;:&quot;Longformer&quot;,&quot;id&quot;:&quot;model_doc/longformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longformer&quot;},{&quot;title&quot;:&quot;LongT5&quot;,&quot;id&quot;:&quot;model_doc/longt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longt5&quot;},{&quot;title&quot;:&quot;LUKE&quot;,&quot;id&quot;:&quot;model_doc/luke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/luke&quot;},{&quot;title&quot;:&quot;M2M100&quot;,&quot;id&quot;:&quot;model_doc/m2m_100&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/m2m_100&quot;},{&quot;title&quot;:&quot;MarianMT&quot;,&quot;id&quot;:&quot;model_doc/marian&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/marian&quot;},{&quot;title&quot;:&quot;MarkupLM&quot;,&quot;id&quot;:&quot;model_doc/markuplm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/markuplm&quot;},{&quot;title&quot;:&quot;MBart and 
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot;PLBart&quot;,&quot;id&quot;
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/
hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/upernet"><!-- HTML_TAG_START -->UperNet<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/van"><!-- HTML_TAG_START -->VAN<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/videomae"><!-- HTML_TAG_START -->VideoMAE<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/vit"><!-- HTML_TAG_START -->Vision Transformer (ViT)<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/vit_hybrid"><!-- HTML_TAG_START -->ViT Hybrid<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/vitdet"><!-- HTML_TAG_START -->ViTDet<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/vit_mae"><!-- HTML_TAG_START -->ViTMAE<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/vitmatte"><!-- HTML_TAG_START -->ViTMatte<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/vit_msn"><!-- HTML_TAG_START -->ViTMSN<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="rounded-xl bg-gradient-to-br from-black to-gray-900 py-1 pr-2 pl-2 text-white first:mt-1 last:mb-4 dark:from-gray-800 dark:to-gray-900 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/vivit"><!-- HTML_TAG_START -->ViViT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/yolos"><!-- HTML_TAG_START -->YOLOS<!-- HTML_TAG_END --> </a> </div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Audio models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 
after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Multimodal models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Reinforcement learning models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Time series models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Graph models<!-- HTML_TAG_END --></span> </span></span> </div></div> </div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Internal Helpers<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/modeling_utils"><!-- HTML_TAG_START -->Custom Layers and Utilities<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/pipelines_utils"><!-- HTML_TAG_START -->Utilities for pipelines<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/tokenization_utils"><!-- HTML_TAG_START -->Utilities for Tokenizers<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/trainer_utils"><!-- HTML_TAG_START -->Utilities for Trainer<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/generation_utils"><!-- HTML_TAG_START -->Utilities for Generation<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 
hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/image_processing_utils"><!-- HTML_TAG_START -->Utilities for Image Processors<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/audio_utils"><!-- HTML_TAG_START -->Utilities for Audio processing<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/file_utils"><!-- HTML_TAG_START -->General Utilities<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/time_series_utils"><!-- HTML_TAG_START -->Utilities for Time Series<!-- HTML_TAG_END --> </a> </div> </div></nav></div></div></div> <div class="z-1 min-w-0 flex-1"> <div class="px-6 pt-6 md:px-12 md:pt-16 md:pb-16"><div class="max-w-4xl mx-auto mb-10"><div class="relative overflow-hidden rounded-xl bg-gradient-to-br from-orange-300/10 py-5 px-4 ring-1 ring-orange-100/70 md:px-6 md:py-8"><img alt="Hugging Face's logo" class="absolute -right-6 -bottom-6 w-28 -rotate-45 md:hidden" src="/front/assets/huggingface_logo-noborder.svg"> <div class="mb-2 text-2xl font-bold dark:text-gray-200 md:mb-0">Join the Hugging Face community</div> <p class="mb-4 text-lg text-gray-400 dark:text-gray-300 md:mb-8">and get access to the augmented documentation experience </p> <div class="mb-8 hidden space-y-4 md:block xl:flex xl:space-y-0 xl:space-x-6"><div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-indigo-100 to-indigo-100/20 dark:to-indigo-100"><svg class="text-indigo-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Collaborate on models, datasets and Spaces </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-orange-100 to-orange-100/20 dark:to-orange-50"><svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" class="text-xl text-yellow-400" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M11 15H6l7-14v8h5l-7 14v-8z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 
dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Faster examples with accelerated inference </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-gray-500/10 to-gray-500/5"><svg class="text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M14.9804 3C14.9217 3.0002 14.8631 3.00555 14.8054 3.016C11.622 3.58252 8.76073 5.30669 6.77248 7.85653C4.78422 10.4064 3.80955 13.6016 4.03612 16.8271C4.26268 20.0525 5.67447 23.0801 7.99967 25.327C10.3249 27.5738 13.3991 28.8811 16.6304 28.997C16.7944 29.003 16.9584 28.997 17.1204 28.997C19.2193 28.9984 21.2877 28.4943 23.1507 27.5274C25.0137 26.5605 26.6164 25.1592 27.8234 23.442C27.9212 23.294 27.9783 23.1229 27.9889 22.9458C27.9995 22.7687 27.9633 22.592 27.884 22.4333C27.8046 22.2747 27.6848 22.1397 27.5367 22.0421C27.3887 21.9444 27.2175 21.8875 27.0404 21.877C25.0426 21.7017 23.112 21.0693 21.3976 20.0288C19.6832 18.9884 18.231 17.5676 17.1533 15.8764C16.0756 14.1852 15.4011 12.2688 15.1822 10.2754C14.9632 8.28193 15.2055 6.26484 15.8904 4.38C15.9486 4.22913 15.97 4.06652 15.9527 3.90572C15.9354 3.74492 15.8799 3.59059 15.7909 3.45557C15.7019 3.32055 15.5819 3.20877 15.4409 3.12952C15.2999 3.05028 15.142 3.00587 14.9804 3Z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Switch between documentation themes </div></div></div> <div class="flex items-center space-x-2.5"><a href="/join"><button class="rounded-lg bg-white bg-gradient-to-br from-gray-100/20 to-gray-200/60 py-1.5 px-5 font-semibold text-gray-700 shadow-sm ring-1 ring-gray-300/60 hover:to-gray-100/70 hover:ring-gray-300/30 active:shadow-inner">Sign Up</button></a> <p class="text-gray-500 dark:text-gray-300">to get started</p></div></div></div> <div class="prose-doc prose relative mx-auto max-w-4xl break-words"><!-- HTML_TAG_START --> <link href="/docs/transformers/v4.34.0/en/_app/immutable/assets/0.e3b0c442.css" rel="modulepreload"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/entry/start.c2db227a.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/scheduler.9bc65507.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/singletons.e3057404.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/index.3b203c72.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/paths.e7de6301.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/entry/app.879d9b87.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/index.78c82d43.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/nodes/0.242aaaff.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/each.e59479a4.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/nodes/269.5c8b831d.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/Tip.87d55b76.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/Docstring.4e7352e2.js"> <link rel="modulepreload" 
href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/globals.7f7f1b26.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/IconCopyLink.bedaa44d.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/CodeBlock.73e038be.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/ExampleCodeBlock.872b014d.js"><!-- HEAD_svelte-1phssyn_START --><meta name="hf:doc:metadata" content="{&quot;local&quot;:&quot;video-vision-transformer-vivit&quot;,&quot;sections&quot;:[{&quot;local&quot;:&quot;overview&quot;,&quot;title&quot;:&quot;Overview&quot;},{&quot;local&quot;:&quot;transformers.VivitConfig&quot;,&quot;title&quot;:&quot;VivitConfig&quot;},{&quot;local&quot;:&quot;transformers.VivitImageProcessor&quot;,&quot;title&quot;:&quot;VivitImageProcessor&quot;},{&quot;local&quot;:&quot;transformers.VivitModel&quot;,&quot;title&quot;:&quot;VivitModel&quot;},{&quot;local&quot;:&quot;transformers.VivitForVideoClassification&quot;,&quot;title&quot;:&quot;VivitForVideoClassification&quot;}],&quot;title&quot;:&quot;Video Vision Transformer (ViViT)&quot;}"><!-- HEAD_svelte-1phssyn_END --> <p></p> <h1 class="relative group"><a id="video-vision-transformer-vivit" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#video-vision-transformer-vivit"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-14sr4u6">Video Vision Transformer (ViViT)</span></h1> <h2 class="relative group"><a id="overview" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#overview"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1jsw1pg">Overview</span></h2> <p data-svelte-h="svelte-1dq9qfe">The Vivit model was proposed in <a href="https://arxiv.org/abs/2103.15691" rel="nofollow">ViViT: A Video Vision Transformer</a> by 
Anurag Arnab, Mostafa Dehghani, Georg Heigold, Chen Sun, Mario Lučić, Cordelia Schmid. The paper proposes one of the first successful pure-transformer based set of models for video understanding.</p> <p data-svelte-h="svelte-vfdo9a">The abstract from the paper is the following:</p> <p data-svelte-h="svelte-1lz2sl9"><em>We present pure-transformer based models for video classification, drawing upon the recent success of such models in image classification. Our model extracts spatio-temporal tokens from the input video, which are then encoded by a series of transformer layers. In order to handle the long sequences of tokens encountered in video, we propose several, efficient variants of our model which factorise the spatial- and temporal-dimensions of the input. Although transformer-based models are known to only be effective when large training datasets are available, we show how we can effectively regularise the model during training and leverage pretrained image models to be able to train on comparatively small datasets. We conduct thorough ablation studies, and achieve state-of-the-art results on multiple video classification benchmarks including Kinetics 400 and 600, Epic Kitchens, Something-Something v2 and Moments in Time, outperforming prior methods based on deep 3D convolutional networks.</em></p> <p data-svelte-h="svelte-jnd3xg">This model was contributed by <a href="https://huggingface.co/jegormeister" rel="nofollow">jegormeister</a>. The original code (written in JAX) can be found <a href="https://github.com/google-research/scenic/tree/main/scenic/projects/vivit" rel="nofollow">here</a>.</p> <h2 class="relative group"><a id="transformers.VivitConfig" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VivitConfig"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-pwomi5">VivitConfig</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.VivitConfig"><!-- HTML_TAG_START --><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 
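For a quick end-to-end check, `VivitImageProcessor` and `VivitForVideoClassification` (both documented on this page) can be chained together. The snippet below is only a minimal sketch: it feeds a synthetic 32-frame clip to the Kinetics-400 checkpoint referenced in the configuration section, so the predicted label is meaningless until you substitute frames decoded from a real video.

```python
import numpy as np
import torch
from transformers import VivitImageProcessor, VivitForVideoClassification

# Dummy clip: 32 RGB frames; any video reader (e.g. PyAV or decord) can supply real frames instead.
frames = [np.random.randint(0, 256, (360, 640, 3), dtype=np.uint8) for _ in range(32)]

processor = VivitImageProcessor.from_pretrained("google/vivit-b-16x2-kinetics400")
model = VivitForVideoClassification.from_pretrained("google/vivit-b-16x2-kinetics400")

# The processor resizes, center-crops, rescales and normalizes the frames into
# pixel_values of shape (batch_size, num_frames, num_channels, height, width).
inputs = processor(frames, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])
```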
## VivitConfig

### class transformers.VivitConfig

`( image_size = 224, num_frames = 32, tubelet_size = [2, 16, 16], num_channels = 3, hidden_size = 768, num_hidden_layers = 12, num_attention_heads = 12, intermediate_size = 3072, hidden_act = 'gelu_fast', hidden_dropout_prob = 0.0, attention_probs_dropout_prob = 0.0, initializer_range = 0.02, layer_norm_eps = 1e-06, qkv_bias = True, **kwargs )`

**Parameters**

- **image_size** (`int`, *optional*, defaults to 224) — The size (resolution) of each image.
- **num_frames** (`int`, *optional*, defaults to 32) — The number of frames in each video.
- **tubelet_size** (`List[int]`, *optional*, defaults to `[2, 16, 16]`) — The size (resolution) of each tubelet.
- **num_channels** (`int`, *optional*, defaults to 3) — The number of input channels.
- **hidden_size** (`int`, *optional*, defaults to 768) — Dimensionality of the encoder layers and the pooler layer.
- **num_hidden_layers** (`int`, *optional*, defaults to 12) — Number of hidden layers in the Transformer encoder.
- **num_attention_heads** (`int`, *optional*, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder.
- **intermediate_size** (`int`, *optional*, defaults to 3072) — Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
- **hidden_act** (`str` or `function`, *optional*, defaults to `"gelu_fast"`) — The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"`, `"gelu_fast"` and `"gelu_new"` are supported.
- **hidden_dropout_prob** (`float`, *optional*, defaults to 0.0) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
- **attention_probs_dropout_prob** (`float`, *optional*, defaults to 0.0) — The dropout ratio for the attention probabilities.
- **initializer_range** (`float`, *optional*, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- **layer_norm_eps** (`float`, *optional*, defaults to 1e-06) — The epsilon used by the layer normalization layers.
- **qkv_bias** (`bool`, *optional*, defaults to `True`) — Whether to add a bias to the queries, keys and values.
data-svelte-h="svelte-1mqmnup">This is the configuration class to store the configuration of a <a href="/docs/transformers/v4.34.0/en/model_doc/vivit#transformers.VivitModel">VivitModel</a>. It is used to instantiate a ViViT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the ViViT <a href="https://huggingface.co/google/vivit-b-16x2-kinetics400" rel="nofollow">google/vivit-b-16x2-kinetics400</a> architecture.</p> <p data-svelte-h="svelte-10kqkkl">Configuration objects inherit from <a href="/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig">PretrainedConfig</a> and can be used to control the model outputs. Read the documentation from <a href="/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig">PretrainedConfig</a> for more information.</p> <div class="relative group rounded-md"><a id="transformers.VivitConfig.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VivitConfig.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> VivitConfig, VivitModel <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># Initializing a ViViT google/vivit-b-16x2-kinetics400 
style configuration</span> <span class="hljs-meta">&gt;&gt;&gt; </span>configuration = VivitConfig() <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># Initializing a model (with random weights) from the google/vivit-b-16x2-kinetics400 style configuration</span> <span class="hljs-meta">&gt;&gt;&gt; </span>model = VivitModel(configuration) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># Accessing the model configuration</span> <span class="hljs-meta">&gt;&gt;&gt; </span>configuration = model.config<!-- HTML_TAG_END --></pre></div></div></div> <h2 class="relative group"><a id="transformers.VivitImageProcessor" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VivitImageProcessor"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-zgqh5o">VivitImageProcessor</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.VivitImageProcessor"><!-- HTML_TAG_START --><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">VivitImageProcessor</span></span></h3><!-- HTML_TAG_END --> <a id="transformers.VivitImageProcessor" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.VivitImageProcessor"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 
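Any of the arguments documented above can also be overridden when building the configuration. The following sketch uses arbitrary illustrative values (not recommended settings) to instantiate a smaller, randomly initialised model:

```python
>>> from transformers import VivitConfig, VivitModel

>>> # Arbitrary example values; any documented argument can be overridden the same way
>>> custom_configuration = VivitConfig(num_frames=16, hidden_size=384, num_hidden_layers=6, num_attention_heads=6)
>>> custom_model = VivitModel(custom_configuration)
```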
88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vivit/image_processing_vivit.py#L64" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">do_resize<span class="opacity-60">: bool = True</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">size<span class="opacity-60">: typing.Dict[str, int] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">resample<span class="opacity-60">: Resampling = &lt;Resampling.BILINEAR: 2&gt;</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">do_center_crop<span class="opacity-60">: bool = True</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">crop_size<span class="opacity-60">: typing.Dict[str, int] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">do_rescale<span class="opacity-60">: bool = True</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">rescale_factor<span class="opacity-60">: typing.Union[int, float] = 0.00784313725490196</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">offset<span class="opacity-60">: bool = True</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">do_normalize<span class="opacity-60">: bool = True</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">image_mean<span class="opacity-60">: typing.Union[float, typing.List[float], NoneType] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">image_std<span class="opacity-60">: typing.Union[float, typing.List[float], NoneType] = None</span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white 
dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VivitImageProcessor.do_resize" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VivitImageProcessor.do_resize"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>do_resize</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether to resize the image’s (height, width) dimensions to the specified <code>size</code>. Can be overridden by the <code>do_resize</code> parameter in the <code>preprocess</code> method.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VivitImageProcessor.size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VivitImageProcessor.size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>size</strong> (<code>Dict[str, int]</code> <em>optional</em>, defaults to <code>{"shortest_edge" -- 256}</code>): Size of the output image after resizing. The shortest edge of the image will be resized to <code>size["shortest_edge"]</code> while maintaining the aspect ratio of the original image. 
Can be overriden by <code>size</code> in the <code>preprocess</code> method.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VivitImageProcessor.resample" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VivitImageProcessor.resample"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>resample</strong> (<code>PILImageResampling</code>, <em>optional</em>, defaults to <code>PILImageResampling.BILINEAR</code>) — Resampling filter to use if resizing the image. Can be overridden by the <code>resample</code> parameter in the <code>preprocess</code> method.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VivitImageProcessor.do_center_crop" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VivitImageProcessor.do_center_crop"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>do_center_crop</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether to center crop the image to the specified <code>crop_size</code>. 
Can be overridden by the <code>do_center_crop</code> parameter in the <code>preprocess</code> method.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VivitImageProcessor.crop_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VivitImageProcessor.crop_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>crop_size</strong> (<code>Dict[str, int]</code>, <em>optional</em>, defaults to <code>{"height" -- 224, "width": 224}</code>): Size of the image after applying the center crop. Can be overridden by the <code>crop_size</code> parameter in the <code>preprocess</code> method.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VivitImageProcessor.do_rescale" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VivitImageProcessor.do_rescale"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>do_rescale</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether to rescale the image by the specified scale <code>rescale_factor</code>. 
Can be overridden by the <code>do_rescale</code> parameter in the <code>preprocess</code> method.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VivitImageProcessor.rescale_factor" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VivitImageProcessor.rescale_factor"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>rescale_factor</strong> (<code>int</code> or <code>float</code>, <em>optional</em>, defaults to 1/127.5) — Defines the scale factor to use if rescaling the image. Can be overridden by the <code>rescale_factor</code> parameter in the <code>preprocess</code> method.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VivitImageProcessor.offset" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VivitImageProcessor.offset"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>offset</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether to scale the image in both negative and positive directions. 
Can be overriden by the <code>offset</code> in the <code>preprocess</code> method.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VivitImageProcessor.do_normalize" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VivitImageProcessor.do_normalize"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>do_normalize</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether to normalize the image. Can be overridden by the <code>do_normalize</code> parameter in the <code>preprocess</code> method.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VivitImageProcessor.image_mean" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VivitImageProcessor.image_mean"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>image_mean</strong> (<code>float</code> or <code>List[float]</code>, <em>optional</em>, defaults to <code>IMAGENET_STANDARD_MEAN</code>) — Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. 
Can be overridden by the <code>image_mean</code> parameter in the <code>preprocess</code> method.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VivitImageProcessor.image_std" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VivitImageProcessor.image_std"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>image_std</strong> (<code>float</code> or <code>List[float]</code>, <em>optional</em>, defaults to <code>IMAGENET_STANDARD_STD</code>) — Standard deviation to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the <code>image_std</code> parameter in the <code>preprocess</code> method.<!-- HTML_TAG_END --> </span></span> </li></ul> </div></div> <p data-svelte-h="svelte-imia5q">Constructs a Vivit image processor.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.VivitImageProcessor.preprocess"><!-- HTML_TAG_START --><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>preprocess</span></h4><!-- HTML_TAG_END --> <a id="transformers.VivitImageProcessor.preprocess" 
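As a quick, non-normative illustration of these arguments, the sketch below loads the processor for the [google/vivit-b-16x2-kinetics400](https://huggingface.co/google/vivit-b-16x2-kinetics400) checkpoint and runs it on a dummy clip. The 32-frame clip, its 360×640 resolution, and the random pixel values are placeholder choices for the example; the printed shape assumes the default `size` and `crop_size` documented above.

```python
>>> import numpy as np
>>> from transformers import VivitImageProcessor

>>> # Load the preprocessing configuration shipped with the checkpoint ...
>>> image_processor = VivitImageProcessor.from_pretrained("google/vivit-b-16x2-kinetics400")

>>> # ... or construct one directly, overriding any of the defaults documented above
>>> image_processor = VivitImageProcessor(size={"shortest_edge": 256}, crop_size={"height": 224, "width": 224})

>>> # A video is passed as a list of frames with pixel values in [0, 255]
>>> video = list(np.random.randint(0, 256, (32, 360, 640, 3), dtype=np.uint8))
>>> inputs = image_processor(video, return_tensors="pt")

>>> # (batch_size, num_frames, num_channels, height, width)
>>> inputs["pixel_values"].shape
torch.Size([1, 32, 3, 224, 224])
```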
class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.VivitImageProcessor.preprocess"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vivit/image_processing_vivit.py#L285" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">videos<span class="opacity-60">: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">do_resize<span class="opacity-60">: bool = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">size<span class="opacity-60">: typing.Dict[str, int] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">resample<span class="opacity-60">: Resampling = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">do_center_crop<span class="opacity-60">: bool = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">crop_size<span class="opacity-60">: typing.Dict[str, int] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">do_rescale<span class="opacity-60">: bool = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">rescale_factor<span class="opacity-60">: float = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">offset<span class="opacity-60">: bool = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white 
dark:hover:text-black">do_normalize<span class="opacity-60">: bool = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">image_mean<span class="opacity-60">: typing.Union[float, typing.List[float], NoneType] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">image_std<span class="opacity-60">: typing.Union[float, typing.List[float], NoneType] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_tensors<span class="opacity-60">: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">data_format<span class="opacity-60">: ChannelDimension = &lt;ChannelDimension.FIRST: 'channels_first'&gt;</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_data_format<span class="opacity-60">: typing.Union[str, transformers.image_utils.ChannelDimension, NoneType] = None</span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VivitImageProcessor.preprocess.videos" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VivitImageProcessor.preprocess.videos"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>videos</strong> (<code>ImageInput</code>) — Video frames to preprocess. Expects a single or batch of video frames with pixel values ranging from 0 to 255. 
If passing in frames with pixel values between 0 and 1, set <code>do_rescale=False</code>.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VivitImageProcessor.preprocess.do_resize" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VivitImageProcessor.preprocess.do_resize"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>do_resize</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>self.do_resize</code>) — Whether to resize the image.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VivitImageProcessor.preprocess.size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VivitImageProcessor.preprocess.size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>size</strong> (<code>Dict[str, int]</code>, <em>optional</em>, defaults to <code>self.size</code>) — Size of the image after applying resize.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VivitImageProcessor.preprocess.resample" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VivitImageProcessor.preprocess.resample"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 
1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>resample</strong> (<code>PILImageResampling</code>, <em>optional</em>, defaults to <code>self.resample</code>) — Resampling filter to use if resizing the image. This can be one of the enum <code>PILImageResampling</code>, Only has an effect if <code>do_resize</code> is set to <code>True</code>.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VivitImageProcessor.preprocess.do_center_crop" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VivitImageProcessor.preprocess.do_center_crop"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>do_center_crop</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>self.do_centre_crop</code>) — Whether to centre crop the image.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VivitImageProcessor.preprocess.crop_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VivitImageProcessor.preprocess.crop_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>crop_size</strong> (<code>Dict[str, int]</code>, <em>optional</em>, defaults to 
<code>self.crop_size</code>) — Size of the image after applying the centre crop.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VivitImageProcessor.preprocess.do_rescale" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VivitImageProcessor.preprocess.do_rescale"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>do_rescale</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>self.do_rescale</code>) — Whether to rescale the image values between <code>[-1 - 1]</code> if <code>offset</code> is <code>True</code>, <code>[0, 1]</code> otherwise.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VivitImageProcessor.preprocess.rescale_factor" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VivitImageProcessor.preprocess.rescale_factor"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>rescale_factor</strong> (<code>float</code>, <em>optional</em>, defaults to <code>self.rescale_factor</code>) — Rescale factor to rescale the image by if <code>do_rescale</code> is set to <code>True</code>.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VivitImageProcessor.preprocess.offset" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VivitImageProcessor.preprocess.offset"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" 
aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>offset</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>self.offset</code>) — Whether to scale the image in both negative and positive directions.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VivitImageProcessor.preprocess.do_normalize" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VivitImageProcessor.preprocess.do_normalize"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>do_normalize</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>self.do_normalize</code>) — Whether to normalize the image.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VivitImageProcessor.preprocess.image_mean" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VivitImageProcessor.preprocess.image_mean"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>image_mean</strong> (<code>float</code> or 
<code>List[float]</code>, <em>optional</em>, defaults to <code>self.image_mean</code>) — Image mean.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VivitImageProcessor.preprocess.image_std" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VivitImageProcessor.preprocess.image_std"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>image_std</strong> (<code>float</code> or <code>List[float]</code>, <em>optional</em>, defaults to <code>self.image_std</code>) — Image standard deviation.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VivitImageProcessor.preprocess.return_tensors" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VivitImageProcessor.preprocess.return_tensors"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>return_tensors</strong> (<code>str</code> or <code>TensorType</code>, <em>optional</em>) — The type of tensors to return. 
Can be one of:<ul> <li>Unset: Return a list of <code>np.ndarray</code>.</li> <li><code>TensorType.TENSORFLOW</code> or <code>'tf'</code>: Return a batch of type <code>tf.Tensor</code>.</li> <li><code>TensorType.PYTORCH</code> or <code>'pt'</code>: Return a batch of type <code>torch.Tensor</code>.</li> <li><code>TensorType.NUMPY</code> or <code>'np'</code>: Return a batch of type <code>np.ndarray</code>.</li> <li><code>TensorType.JAX</code> or <code>'jax'</code>: Return a batch of type <code>jax.numpy.ndarray</code>.</li> </ul><!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VivitImageProcessor.preprocess.data_format" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VivitImageProcessor.preprocess.data_format"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>data_format</strong> (<code>ChannelDimension</code> or <code>str</code>, <em>optional</em>, defaults to <code>ChannelDimension.FIRST</code>) — The channel dimension format for the output image. 
Can be one of:<ul> <li><code>ChannelDimension.FIRST</code>: image in (num_channels, height, width) format.</li> <li><code>ChannelDimension.LAST</code>: image in (height, width, num_channels) format.</li> <li>Unset: Use the inferred channel dimension format of the input image.</li> </ul><!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VivitImageProcessor.preprocess.input_data_format" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VivitImageProcessor.preprocess.input_data_format"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>input_data_format</strong> (<code>ChannelDimension</code> or <code>str</code>, <em>optional</em>) — The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. Can be one of:<ul> <li><code>"channels_first"</code> or <code>ChannelDimension.FIRST</code>: image in (num_channels, height, width) format.</li> <li><code>"channels_last"</code> or <code>ChannelDimension.LAST</code>: image in (height, width, num_channels) format.</li> <li><code>"none"</code> or <code>ChannelDimension.NONE</code>: image in (height, width) format.</li> </ul><!-- HTML_TAG_END --> </span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1x3yxsa">Preprocess an image or batch of images.</p></div></div> <h2 class="relative group"><a id="transformers.VivitModel" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VivitModel"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-uo83nq">VivitModel</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 
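To make the interaction between `do_rescale` and `offset` concrete: with the default `rescale_factor` of `1/127.5` and `offset=True`, a frame in `[0, 255]` is mapped to roughly `[-1, 1]` before normalization, as described above. The sketch below is illustrative only; the 8-frame clip and its random values are arbitrary, and normalization is turned off here so that the value range reflects the rescale-plus-offset step alone.

```python
>>> import numpy as np
>>> from transformers import VivitImageProcessor

>>> image_processor = VivitImageProcessor()

>>> # One dummy clip of 8 frames with pixel values in [0, 255]
>>> video = list(np.random.randint(0, 256, (8, 256, 256, 3), dtype=np.uint8))

>>> # Disable normalization so only resize, center crop and rescale (+ offset) are applied
>>> outputs = image_processor.preprocess(video, do_normalize=False, return_tensors="np")
>>> pixel_values = outputs["pixel_values"]
>>> pixel_values.shape
(1, 8, 3, 224, 224)

>>> # pixel / 127.5 - 1  =>  values now lie (approximately) in [-1, 1]
>>> round(float(pixel_values.min()), 3), round(float(pixel_values.max()), 3)
(-1.0, 1.0)
```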
### class transformers.VivitModel

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vivit/modeling_vivit.py#L460)

( config, add_pooling_layer = True )

Parameters:

- **config** ([VivitConfig](/docs/transformers/v4.34.0/en/model_doc/vivit#transformers.VivitConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

The bare ViViT Transformer model outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
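To make the distinction in the `config` parameter concrete, the minimal sketch below initializes a randomly weighted model from a `VivitConfig` and, separately, loads pretrained weights with `from_pretrained()`; the `google/vivit-b-16x2-kinetics400` checkpoint is the one used in the examples further down.

```python
from transformers import VivitConfig, VivitModel

# Initializing from a configuration creates a model with randomly initialized
# weights: the configuration alone carries no pretrained parameters.
configuration = VivitConfig()
model = VivitModel(configuration)

# To load pretrained weights instead, use from_pretrained() with a checkpoint.
model = VivitModel.from_pretrained("google/vivit-b-16x2-kinetics400")
```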
#### forward

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vivit/modeling_vivit.py#L488)

( pixel_values: typing.Optional[torch.FloatTensor] = None, head_mask: typing.Optional[torch.FloatTensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → [transformers.modeling_outputs.BaseModelOutputWithPooling](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutputWithPooling) or `tuple(torch.FloatTensor)`

Parameters:

- **pixel_values** (`torch.FloatTensor` of shape `(batch_size, num_frames, num_channels, height, width)`) — Pixel values. Pixel values can be obtained using [VivitImageProcessor](/docs/transformers/v4.34.0/en/model_doc/vivit#transformers.VivitImageProcessor). See [VivitImageProcessor.preprocess()](/docs/transformers/v4.34.0/en/model_doc/vivit#transformers.VivitImageProcessor.preprocess) for details.
- **head_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: 1 indicates the head is **not masked**, 0 indicates the head is **masked**.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.

Returns: [transformers.modeling_outputs.BaseModelOutputWithPooling](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutputWithPooling) or `tuple(torch.FloatTensor)`

A [transformers.modeling_outputs.BaseModelOutputWithPooling](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutputWithPooling) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([VivitConfig](/docs/transformers/v4.34.0/en/model_doc/vivit#transformers.VivitConfig)) and inputs.

- **last_hidden_state** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`) — Sequence of hidden-states at the output of the last layer of the model.
- **pooler_output** (`torch.FloatTensor` of shape `(batch_size, hidden_size)`) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for the BERT family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The [VivitModel](/docs/transformers/v4.34.0/en/model_doc/vivit#transformers.VivitModel) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
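The `head_mask` argument above expects one value per attention head. The snippet below is only a minimal sketch, assuming the default base configuration of 12 layers with 12 heads each (other checkpoints may differ); it builds a mask that disables the first head of every layer.

```python
import torch

# Assumption: the base checkpoint uses 12 layers x 12 attention heads.
num_layers, num_heads = 12, 12

head_mask = torch.ones(num_layers, num_heads)
head_mask[:, 0] = 0.0  # 0 masks a head, 1 leaves it active

# The mask is then passed alongside the pixel values, e.g.
# outputs = model(**inputs, head_mask=head_mask)
```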
Examples:

```python
>>> import av
>>> import numpy as np

>>> from transformers import VivitImageProcessor, VivitModel
>>> from huggingface_hub import hf_hub_download

>>> np.random.seed(0)


>>> def read_video_pyav(container, indices):
...     '''
...     Decode the video with PyAV decoder.
...     Args:
...         container (`av.container.input.InputContainer`): PyAV container.
...         indices (`List[int]`): List of frame indices to decode.
...     Returns:
...         result (np.ndarray): np array of decoded frames of shape (num_frames, height, width, 3).
...     '''
...     frames = []
...     container.seek(0)
...     start_index = indices[0]
...     end_index = indices[-1]
...     for i, frame in enumerate(container.decode(video=0)):
...         if i > end_index:
...             break
...         if i >= start_index and i in indices:
...             frames.append(frame)
...     return np.stack([x.to_ndarray(format="rgb24") for x in frames])


>>> def sample_frame_indices(clip_len, frame_sample_rate, seg_len):
...     '''
...     Sample a given number of frame indices from the video.
...     Args:
...         clip_len (`int`): Total number of frames to sample.
...         frame_sample_rate (`int`): Sample every n-th frame.
...         seg_len (`int`): Maximum allowed index of sample's last frame.
...     Returns:
...         indices (`List[int]`): List of sampled frame indices
...     '''
...     converted_len = int(clip_len * frame_sample_rate)
...     end_idx = np.random.randint(converted_len, seg_len)
...     start_idx = end_idx - converted_len
...     indices = np.linspace(start_idx, end_idx, num=clip_len)
...     indices = np.clip(indices, start_idx, end_idx - 1).astype(np.int64)
...     return indices


>>> # video clip consists of 300 frames (10 seconds at 30 FPS)
>>> file_path = hf_hub_download(
...     repo_id="nielsr/video-demo", filename="eating_spaghetti.mp4", repo_type="dataset"
... )
>>> container = av.open(file_path)

>>> # sample 32 frames
>>> indices = sample_frame_indices(clip_len=32, frame_sample_rate=1, seg_len=container.streams.video[0].frames)
>>> video = read_video_pyav(container=container, indices=indices)

>>> image_processor = VivitImageProcessor.from_pretrained("google/vivit-b-16x2-kinetics400")
>>> model = VivitModel.from_pretrained("google/vivit-b-16x2-kinetics400")

>>> # prepare video for the model
>>> inputs = image_processor(list(video), return_tensors="pt")

>>> # forward pass
>>> outputs = model(**inputs)
>>> last_hidden_states = outputs.last_hidden_state

>>> list(last_hidden_states.shape)
[1, 3137, 768]
```
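Building on the example above (and reusing its `model` and `inputs` variables, which is an assumption of this sketch rather than part of the original example), the snippet below shows how the optional `output_hidden_states` and `output_attentions` flags surface the extra tensors described in the return section, together with the pooled [CLS] representation.

```python
import torch

# Reuses `model` and `inputs` from the VivitModel example above.
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True, output_attentions=True)

# Pooled [CLS] representation, shape (batch_size, hidden_size).
print(outputs.pooler_output.shape)

# One hidden-state tensor per layer plus the embedding output,
# and one attention map per layer.
print(len(outputs.hidden_states), len(outputs.attentions))
```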
## VivitForVideoClassification

### class transformers.VivitForVideoClassification

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vivit/modeling_vivit.py#L614)

( config )

Parameters:

- **config** ([VivitConfig](/docs/transformers/v4.34.0/en/model_doc/vivit#transformers.VivitConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

ViViT Transformer model with a video classification head on top (a linear layer on top of the final hidden state of the [CLS] token), e.g. for Kinetics-400. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
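For fine-tuning on a label set other than Kinetics-400, one common pattern is to re-initialize the classification head while reusing the pretrained backbone. The sketch below assumes a hypothetical two-class task; the label names are placeholders, not part of any released checkpoint.

```python
from transformers import VivitForVideoClassification

# Hypothetical two-class task; label names are placeholders.
id2label = {0: "cooking", 1: "not_cooking"}
label2id = {label: idx for idx, label in id2label.items()}

model = VivitForVideoClassification.from_pretrained(
    "google/vivit-b-16x2-kinetics400",
    num_labels=len(id2label),
    id2label=id2label,
    label2id=label2id,
    # The pretrained head has 400 outputs, so allow it to be re-initialized.
    ignore_mismatched_sizes=True,
)
```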
#### forward

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vivit/modeling_vivit.py#L627)

( pixel_values: typing.Optional[torch.FloatTensor] = None, head_mask: typing.Optional[torch.FloatTensor] = None, labels: typing.Optional[torch.LongTensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → [transformers.modeling_outputs.ImageClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.ImageClassifierOutput) or `tuple(torch.FloatTensor)`

Parameters:

- **pixel_values** (`torch.FloatTensor` of shape `(batch_size, num_frames, num_channels, height, width)`) — Pixel values. Pixel values can be obtained using [VivitImageProcessor](/docs/transformers/v4.34.0/en/model_doc/vivit#transformers.VivitImageProcessor). See [VivitImageProcessor.preprocess()](/docs/transformers/v4.34.0/en/model_doc/vivit#transformers.VivitImageProcessor.preprocess) for details.
- **head_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: 1 indicates the head is **not masked**, 0 indicates the head is **masked**.
- **labels** (`torch.LongTensor` of shape `(batch_size,)`, *optional*) — Labels for computing the image classification/regression loss. Indices should be in `[0, ..., config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss); if `config.num_labels > 1` a classification loss is computed (Cross-Entropy).
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.

Returns: [transformers.modeling_outputs.ImageClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.ImageClassifierOutput) or `tuple(torch.FloatTensor)`

A [transformers.modeling_outputs.ImageClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.ImageClassifierOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([VivitConfig](/docs/transformers/v4.34.0/en/model_doc/vivit#transformers.VivitConfig)) and inputs.

- **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) — Classification (or regression if `config.num_labels == 1`) loss.
- **logits** (`torch.FloatTensor` of shape `(batch_size, config.num_labels)`) — Classification (or regression if `config.num_labels == 1`) scores (before SoftMax).
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each stage) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states (also called feature maps) of the model at the output of each stage.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The [VivitForVideoClassification](/docs/transformers/v4.34.0/en/model_doc/vivit#transformers.VivitForVideoClassification) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Examples:

```python
>>> import av
>>> import numpy as np
>>> import torch

>>> from transformers import VivitImageProcessor, VivitForVideoClassification
>>> from huggingface_hub import hf_hub_download

>>> np.random.seed(0)


>>> def read_video_pyav(container, indices):
...     '''
...     Decode the video with PyAV decoder.
...     Args:
...         container (`av.container.input.InputContainer`): PyAV container.
...         indices (`List[int]`): List of frame indices to decode.
...     Returns:
...         result (np.ndarray): np array of decoded frames of shape (num_frames, height, width, 3).
...     '''
...     frames = []
...     container.seek(0)
...     start_index = indices[0]
...     end_index = indices[-1]
...     for i, frame in enumerate(container.decode(video=0)):
...         if i > end_index:
...             break
...         if i >= start_index and i in indices:
...             frames.append(frame)
...     return np.stack([x.to_ndarray(format="rgb24") for x in frames])


>>> def sample_frame_indices(clip_len, frame_sample_rate, seg_len):
...     '''
...     Sample a given number of frame indices from the video.
...     Args:
...         clip_len (`int`): Total number of frames to sample.
...         frame_sample_rate (`int`): Sample every n-th frame.
...         seg_len (`int`): Maximum allowed index of sample's last frame.
...     Returns:
...         indices (`List[int]`): List of sampled frame indices
...     '''
...     converted_len = int(clip_len * frame_sample_rate)
...     end_idx = np.random.randint(converted_len, seg_len)
...     start_idx = end_idx - converted_len
...     indices = np.linspace(start_idx, end_idx, num=clip_len)
...     indices = np.clip(indices, start_idx, end_idx - 1).astype(np.int64)
...     return indices


>>> # video clip consists of 300 frames (10 seconds at 30 FPS)
>>> file_path = hf_hub_download(
...     repo_id="nielsr/video-demo", filename="eating_spaghetti.mp4", repo_type="dataset"
... )
>>> container = av.open(file_path)

>>> # sample 32 frames
>>> indices = sample_frame_indices(clip_len=32, frame_sample_rate=4, seg_len=container.streams.video[0].frames)
>>> video = read_video_pyav(container=container, indices=indices)

>>> image_processor = VivitImageProcessor.from_pretrained("google/vivit-b-16x2-kinetics400")
>>> model = VivitForVideoClassification.from_pretrained("google/vivit-b-16x2-kinetics400")

>>> inputs = image_processor(list(video), return_tensors="pt")

>>> with torch.no_grad():
...     outputs = model(**inputs)
...     logits = outputs.logits

>>> # model predicts one of the 400 Kinetics-400 classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
LABEL_116
```
2023-10-05T13:33:27.529Z
Wav2Vec2
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/wav2vec2
# Wav2Vec2 ## Overview The Wav2Vec2 model was proposed in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli. The abstract from the paper is the following: _We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data._ Tips: - Wav2Vec2 is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. - Wav2Vec2 model was trained using connectionist temporal classification (CTC) so the model output has to be decoded using [Wav2Vec2CTCTokenizer](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2CTCTokenizer). This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Wav2Vec2. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. - A notebook on how to [leverage a pretrained Wav2Vec2 model for emotion classification](https://colab.research.google.com/github/m3hrdadfi/soxan/blob/main/notebooks/Emotion_recognition_in_Greek_speech_using_Wav2Vec2.ipynb). 🌎 - [Wav2Vec2ForCTC](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2ForCTC) is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/audio_classification.ipynb). - [Audio classification task guide](../tasks/audio_classification) Automatic Speech Recognition - A blog post on [boosting Wav2Vec2 with n-grams in 🤗 Transformers](https://huggingface.co/blog/wav2vec2-with-ngram). - A blog post on how to [finetune Wav2Vec2 for English ASR with 🤗 Transformers](https://huggingface.co/blog/fine-tune-wav2vec2-english). - A blog post on [finetuning XLS-R for Multi-Lingual ASR with 🤗 Transformers](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2). - A notebook on how to [create YouTube captions from any video by transcribing audio with Wav2Vec2](https://colab.research.google.com/github/Muennighoff/ytclipcc/blob/main/wav2vec_youtube_captions.ipynb). 
🌎 - [Wav2Vec2ForCTC](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2ForCTC) is supported by a notebook on [how to finetune a speech recognition model in English](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/speech_recognition.ipynb), and [how to finetune a speech recognition model in any language](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multi_lingual_speech_recognition.ipynb). - [Automatic speech recognition task guide](../tasks/asr) 🚀 Deploy - A blog post on how to deploy Wav2Vec2 for [Automatic Speech Recognition with Hugging Face’s Transformers & Amazon SageMaker](https://www.philschmid.de/automatic-speech-recognition-sagemaker). ## Wav2Vec2Config ### class transformers.Wav2Vec2Config [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/configuration_wav2vec2.py#L32) ( vocab\_size = 32hidden\_size = 768num\_hidden\_layers = 12num\_attention\_heads = 12intermediate\_size = 3072hidden\_act = 'gelu'hidden\_dropout = 0.1activation\_dropout = 0.1attention\_dropout = 0.1feat\_proj\_dropout = 0.0feat\_quantizer\_dropout = 0.0final\_dropout = 0.1layerdrop = 0.1initializer\_range = 0.02layer\_norm\_eps = 1e-05feat\_extract\_norm = 'group'feat\_extract\_activation = 'gelu'conv\_dim = (512, 512, 512, 512, 512, 512, 512)conv\_stride = (5, 2, 2, 2, 2, 2, 2)conv\_kernel = (10, 3, 3, 3, 3, 2, 2)conv\_bias = Falsenum\_conv\_pos\_embeddings = 128num\_conv\_pos\_embedding\_groups = 16do\_stable\_layer\_norm = Falseapply\_spec\_augment = Truemask\_time\_prob = 0.05mask\_time\_length = 10mask\_time\_min\_masks = 2mask\_feature\_prob = 0.0mask\_feature\_length = 10mask\_feature\_min\_masks = 0num\_codevectors\_per\_group = 320num\_codevector\_groups = 2contrastive\_logits\_temperature = 0.1num\_negatives = 100codevector\_dim = 256proj\_codevector\_dim = 256diversity\_loss\_weight = 0.1ctc\_loss\_reduction = 'sum'ctc\_zero\_infinity = Falseuse\_weighted\_layer\_sum = Falseclassifier\_proj\_size = 256tdnn\_dim = (512, 512, 512, 512, 1500)tdnn\_kernel = (5, 3, 3, 1, 1)tdnn\_dilation = (1, 2, 3, 1, 1)xvector\_output\_dim = 512pad\_token\_id = 0bos\_token\_id = 1eos\_token\_id = 2add\_adapter = Falseadapter\_kernel\_size = 3adapter\_stride = 2num\_adapter\_layers = 3output\_hidden\_size = Noneadapter\_attn\_dim = None\*\*kwargs ) This is the configuration class to store the configuration of a [Wav2Vec2Model](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Model). It is used to instantiate a Wav2Vec2 model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Wav2Vec2 [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) architecture. Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information. 
Example: ``` >>> from transformers import Wav2Vec2Config, Wav2Vec2Model >>> >>> configuration = Wav2Vec2Config() >>> >>> model = Wav2Vec2Model(configuration) >>> >>> configuration = model.config ``` ## Wav2Vec2CTCTokenizer ### class transformers.Wav2Vec2CTCTokenizer [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/tokenization_wav2vec2.py#L127) ( vocab\_filebos\_token = '<s>'eos\_token = '</s>'unk\_token = '<unk>'pad\_token = '<pad>'word\_delimiter\_token = '|'replace\_word\_delimiter\_char = ' 'do\_lower\_case = Falsetarget\_lang = None\*\*kwargs ) Constructs a Wav2Vec2CTC tokenizer. This tokenizer inherits from [PreTrainedTokenizer](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer) which contains some of the main methods. Users should refer to the superclass for more information regarding such methods. #### \_\_call\_\_ [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/tokenization_utils_base.py#L2732) ( text: typing.Union\[str, typing.List\[str\], typing.List\[typing.List\[str\]\]\] = Nonetext\_pair: typing.Union\[str, typing.List\[str\], typing.List\[typing.List\[str\]\], NoneType\] = Nonetext\_target: typing.Union\[str, typing.List\[str\], typing.List\[typing.List\[str\]\]\] = Nonetext\_pair\_target: typing.Union\[str, typing.List\[str\], typing.List\[typing.List\[str\]\], NoneType\] = Noneadd\_special\_tokens: bool = Truepadding: typing.Union\[bool, str, transformers.utils.generic.PaddingStrategy\] = Falsetruncation: typing.Union\[bool, str, transformers.tokenization\_utils\_base.TruncationStrategy\] = Nonemax\_length: typing.Optional\[int\] = Nonestride: int = 0is\_split\_into\_words: bool = Falsepad\_to\_multiple\_of: typing.Optional\[int\] = Nonereturn\_tensors: typing.Union\[str, transformers.utils.generic.TensorType, NoneType\] = Nonereturn\_token\_type\_ids: typing.Optional\[bool\] = Nonereturn\_attention\_mask: typing.Optional\[bool\] = Nonereturn\_overflowing\_tokens: bool = Falsereturn\_special\_tokens\_mask: bool = Falsereturn\_offsets\_mapping: bool = Falsereturn\_length: bool = Falseverbose: bool = True\*\*kwargs ) → [BatchEncoding](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.BatchEncoding) Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of sequences. #### save\_vocabulary [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/tokenization_wav2vec2.py#L649) ( save\_directory: strfilename\_prefix: typing.Optional\[str\] = None ) #### decode [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/tokenization_wav2vec2.py#L544) ( token\_ids: typing.Union\[int, typing.List\[int\], ForwardRef('np.ndarray'), ForwardRef('torch.Tensor'), ForwardRef('tf.Tensor')\]skip\_special\_tokens: bool = Falseclean\_up\_tokenization\_spaces: bool = Noneoutput\_char\_offsets: bool = Falseoutput\_word\_offsets: bool = False\*\*kwargs ) → `str` or `Wav2Vec2CTCTokenizerOutput` Converts a sequence of ids in a string, using the tokenizer and vocabulary with options to remove special tokens and clean up tokenization spaces. Similar to doing `self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids))`. 
Example: ``` >>> >>> from transformers import AutoTokenizer, AutoFeatureExtractor, AutoModelForCTC >>> from datasets import load_dataset >>> import datasets >>> import torch >>> >>> model = AutoModelForCTC.from_pretrained("facebook/wav2vec2-base-960h") >>> tokenizer = AutoTokenizer.from_pretrained("facebook/wav2vec2-base-960h") >>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h") >>> >>> dataset = load_dataset("common_voice", "en", split="train", streaming=True) >>> dataset = dataset.cast_column("audio", datasets.Audio(sampling_rate=16_000)) >>> dataset_iter = iter(dataset) >>> sample = next(dataset_iter) >>> >>> input_values = feature_extractor(sample["audio"]["array"], return_tensors="pt").input_values >>> logits = model(input_values).logits[0] >>> pred_ids = torch.argmax(logits, axis=-1) >>> >>> outputs = tokenizer.decode(pred_ids, output_word_offsets=True) >>> >>> time_offset = model.config.inputs_to_logits_ratio / feature_extractor.sampling_rate >>> word_offsets = [ ... { ... "word": d["word"], ... "start_time": round(d["start_offset"] * time_offset, 2), ... "end_time": round(d["end_offset"] * time_offset, 2), ... } ... for d in outputs.word_offsets ... ] >>> >>> >>> word_offsets[:3] [{'word': 'WHY', 'start_time': 1.42, 'end_time': 1.54}, {'word': 'DOES', 'start_time': 1.64, 'end_time': 1.9}, {'word': 'MILISANDRA', 'start_time': 2.26, 'end_time': 2.9}] ``` #### batch\_decode [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/tokenization_wav2vec2.py#L474) ( sequences: typing.Union\[typing.List\[int\], typing.List\[typing.List\[int\]\], ForwardRef('np.ndarray'), ForwardRef('torch.Tensor'), ForwardRef('tf.Tensor')\]skip\_special\_tokens: bool = Falseclean\_up\_tokenization\_spaces: bool = Noneoutput\_char\_offsets: bool = Falseoutput\_word\_offsets: bool = False\*\*kwargs ) → `List[str]` or `Wav2Vec2CTCTokenizerOutput` Convert a list of lists of token ids into a list of strings by calling decode. Set the target language of a nested multi-lingual dictionary ## Wav2Vec2FeatureExtractor ( feature\_size = 1sampling\_rate = 16000padding\_value = 0.0return\_attention\_mask = Falsedo\_normalize = True\*\*kwargs ) Constructs a Wav2Vec2 feature extractor. This feature extractor inherits from [SequenceFeatureExtractor](/docs/transformers/v4.34.0/en/main_classes/feature_extractor#transformers.SequenceFeatureExtractor) which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. ( raw\_speech: typing.Union\[numpy.ndarray, typing.List\[float\], typing.List\[numpy.ndarray\], typing.List\[typing.List\[float\]\]\]padding: typing.Union\[bool, str, transformers.utils.generic.PaddingStrategy\] = Falsemax\_length: typing.Optional\[int\] = Nonetruncation: bool = Falsepad\_to\_multiple\_of: typing.Optional\[int\] = Nonereturn\_attention\_mask: typing.Optional\[bool\] = Nonereturn\_tensors: typing.Union\[str, transformers.utils.generic.TensorType, NoneType\] = Nonesampling\_rate: typing.Optional\[int\] = None\*\*kwargs ) Main method to featurize and prepare for the model one or several sequence(s). ## Wav2Vec2Processor Constructs a Wav2Vec2 processor which wraps a Wav2Vec2 feature extractor and a Wav2Vec2 CTC tokenizer into a single processor. 
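For orientation, here is a minimal usage sketch of the processor pattern, assuming the `facebook/wav2vec2-base-960h` checkpoint used elsewhere on this page and a dummy one-second silent waveform in place of real audio; it is an illustration, not part of the official API reference:

```
>>> import numpy as np
>>> import torch
>>> from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

>>> processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
>>> model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

>>> # a dummy one-second waveform at 16 kHz (replace with real audio)
>>> speech = np.zeros(16_000, dtype=np.float32)

>>> # the processor runs the feature extractor on the raw waveform ...
>>> inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> # ... and exposes the tokenizer's CTC decoding for the predicted ids
>>> predicted_ids = torch.argmax(logits, dim=-1)
>>> transcription = processor.batch_decode(predicted_ids)
```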
[Wav2Vec2Processor](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor) offers all the functionalities of [Wav2Vec2FeatureExtractor](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2FeatureExtractor) and [PreTrainedTokenizer](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer). See the docstring of [**call**()](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor.__call__) and [decode()](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor.decode) for more information. When used in normal mode, this method forwards all its arguments to Wav2Vec2FeatureExtractor’s [**call**()](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2FeatureExtractor.__call__) and returns its output. If used in the context `as_target_processor()` this method forwards all its arguments to PreTrainedTokenizer’s [**call**()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__). Please refer to the docstring of the above two methods for more information. When used in normal mode, this method forwards all its arguments to Wav2Vec2FeatureExtractor’s [pad()](/docs/transformers/v4.34.0/en/main_classes/feature_extractor#transformers.SequenceFeatureExtractor.pad) and returns its output. If used in the context `as_target_processor()` this method forwards all its arguments to PreTrainedTokenizer’s [pad()](/docs/transformers/v4.34.0/en/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.pad). Please refer to the docstring of the above two methods for more information. #### from\_pretrained [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/processing_wav2vec2.py#L48) ( pretrained\_model\_name\_or\_path\*\*kwargs ) #### save\_pretrained [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/processing_utils.py#L93) ( save\_directorypush\_to\_hub: bool = False\*\*kwargs ) Parameters - **save\_directory** (`str` or `os.PathLike`) — Directory where the feature extractor JSON file and the tokenizer files will be saved (directory will be created if it does not exist). - **push\_to\_hub** (`bool`, _optional_, defaults to `False`) — Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the repository you want to push to with `repo_id` (will default to the name of `save_directory` in your namespace). - **kwargs** (`Dict[str, Any]`, _optional_) — Additional key word arguments passed along to the [push\_to\_hub()](/docs/transformers/v4.34.0/en/main_classes/processors#transformers.ProcessorMixin.push_to_hub) method. Saves the attributes of this processor (feature extractor, tokenizer…) in the specified directory so that it can be reloaded using the [from\_pretrained()](/docs/transformers/v4.34.0/en/model_doc/nougat#transformers.NougatProcessor.from_pretrained) method. This class method is simply calling [save\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/feature_extractor#transformers.FeatureExtractionMixin.save_pretrained) and [save\_pretrained()](/docs/transformers/v4.34.0/en/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.save_pretrained). Please refer to the docstrings of the methods above for more information. This method forwards all its arguments to PreTrainedTokenizer’s [batch\_decode()](/docs/transformers/v4.34.0/en/model_doc/speecht5#transformers.SpeechT5Tokenizer.batch_decode). 
Please refer to the docstring of this method for more information. This method forwards all its arguments to PreTrainedTokenizer’s [decode()](/docs/transformers/v4.34.0/en/model_doc/speecht5#transformers.SpeechT5Tokenizer.decode). Please refer to the docstring of this method for more information. ## Wav2Vec2ProcessorWithLM ### class transformers.Wav2Vec2ProcessorWithLM [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py#L67) ( feature\_extractor: FeatureExtractionMixintokenizer: PreTrainedTokenizerBasedecoder: BeamSearchDecoderCTC ) Parameters - **feature\_extractor** ([Wav2Vec2FeatureExtractor](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2FeatureExtractor)) — An instance of [Wav2Vec2FeatureExtractor](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2FeatureExtractor). The feature extractor is a required input. - **tokenizer** ([Wav2Vec2CTCTokenizer](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2CTCTokenizer)) — An instance of [Wav2Vec2CTCTokenizer](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2CTCTokenizer). The tokenizer is a required input. - **decoder** (`pyctcdecode.BeamSearchDecoderCTC`) — An instance of `pyctcdecode.BeamSearchDecoderCTC`. The decoder is a required input. Constructs a Wav2Vec2 processor which wraps a Wav2Vec2 feature extractor, a Wav2Vec2 CTC tokenizer and a decoder with language model support into a single processor for language model boosted speech recognition decoding. When used in normal mode, this method forwards all its arguments to Wav2Vec2FeatureExtractor’s [**call**()](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2FeatureExtractor.__call__) and returns its output. If used in the context `as_target_processor()` this method forwards all its arguments to Wav2Vec2CTCTokenizer’s [**call**()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__). Please refer to the docstring of the above two methods for more information. When used in normal mode, this method forwards all its arguments to Wav2Vec2FeatureExtractor’s [pad()](/docs/transformers/v4.34.0/en/main_classes/feature_extractor#transformers.SequenceFeatureExtractor.pad) and returns its output. If used in the context `as_target_processor()` this method forwards all its arguments to Wav2Vec2CTCTokenizer’s [pad()](/docs/transformers/v4.34.0/en/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.pad). Please refer to the docstring of the above two methods for more information. #### from\_pretrained [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py#L112) ( pretrained\_model\_name\_or\_path\*\*kwargs ) Parameters - **pretrained\_model\_name\_or\_path** (`str` or `os.PathLike`) — This can be either: - a string, the _model id_ of a pretrained feature\_extractor hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like `bert-base-uncased`, or namespaced under a user or organization name, like `dbmdz/bert-base-german-cased`. - a path to a _directory_ containing a feature extractor file saved using the [save\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/feature_extractor#transformers.FeatureExtractionMixin.save_pretrained) method, e.g., `./my_model_directory/`. 
- a path or url to a saved feature extractor JSON _file_, e.g., `./my_model_directory/preprocessor_config.json`. \*\*kwargs — Additional keyword arguments passed along to both [SequenceFeatureExtractor](/docs/transformers/v4.34.0/en/main_classes/feature_extractor#transformers.SequenceFeatureExtractor) and [PreTrainedTokenizer](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer) Instantiate a [Wav2Vec2ProcessorWithLM](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2ProcessorWithLM) from a pretrained Wav2Vec2 processor. This class method is simply calling Wav2Vec2FeatureExtractor’s [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/feature_extractor#transformers.FeatureExtractionMixin.from_pretrained), Wav2Vec2CTCTokenizer’s [from\_pretrained()](/docs/transformers/v4.34.0/en/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.from_pretrained), and `pyctcdecode.BeamSearchDecoderCTC.load_from_hf_hub`. Please refer to the docstrings of the methods above for more information. #### batch\_decode [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py#L284) ( logits: ndarraypool: typing.Union\[<bound method BaseContext.Pool of <multiprocessing.context.DefaultContext object at 0x7f0b4ec9b370>>, NoneType\] = Nonenum\_processes: typing.Optional\[int\] = Nonebeam\_width: typing.Optional\[int\] = Nonebeam\_prune\_logp: typing.Optional\[float\] = Nonetoken\_min\_logp: typing.Optional\[float\] = Nonehotwords: typing.Optional\[typing.Iterable\[str\]\] = Nonehotword\_weight: typing.Optional\[float\] = Nonealpha: typing.Optional\[float\] = Nonebeta: typing.Optional\[float\] = Noneunk\_score\_offset: typing.Optional\[float\] = Nonelm\_score\_boundary: typing.Optional\[bool\] = Noneoutput\_word\_offsets: bool = Falsen\_best: int = 1 ) Batch decode output logits to audio transcription with language model support. This function makes use of Python’s multiprocessing. Currently, multiprocessing is available only on Unix systems (see this [issue](https://github.com/kensho-technologies/pyctcdecode/issues/65)). If you are decoding multiple batches, consider creating a `Pool` and passing it to `batch_decode`. Otherwise, `batch_decode` will be very slow since it will create a fresh `Pool` for each call. See usage example below. Example: See [Decoding multiple audios](#decoding-multiple-audios). #### decode [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py#L469) ( logits: ndarraybeam\_width: typing.Optional\[int\] = Nonebeam\_prune\_logp: typing.Optional\[float\] = Nonetoken\_min\_logp: typing.Optional\[float\] = Nonehotwords: typing.Optional\[typing.Iterable\[str\]\] = Nonehotword\_weight: typing.Optional\[float\] = Nonealpha: typing.Optional\[float\] = Nonebeta: typing.Optional\[float\] = Noneunk\_score\_offset: typing.Optional\[float\] = Nonelm\_score\_boundary: typing.Optional\[bool\] = Noneoutput\_word\_offsets: bool = Falsen\_best: int = 1 ) Decode output logits to audio transcription with language model support. 
Example: ``` >>> >>> from transformers import AutoTokenizer, AutoProcessor, AutoModelForCTC >>> from datasets import load_dataset >>> import datasets >>> import torch >>> >>> model = AutoModelForCTC.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm") >>> processor = AutoProcessor.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm") >>> >>> dataset = load_dataset("common_voice", "en", split="train", streaming=True) >>> dataset = dataset.cast_column("audio", datasets.Audio(sampling_rate=16_000)) >>> dataset_iter = iter(dataset) >>> sample = next(dataset_iter) >>> >>> input_values = processor(sample["audio"]["array"], return_tensors="pt").input_values >>> with torch.no_grad(): ... logits = model(input_values).logits[0].cpu().numpy() >>> >>> outputs = processor.decode(logits, output_word_offsets=True) >>> >>> time_offset = model.config.inputs_to_logits_ratio / processor.feature_extractor.sampling_rate >>> word_offsets = [ ... { ... "word": d["word"], ... "start_time": round(d["start_offset"] * time_offset, 2), ... "end_time": round(d["end_offset"] * time_offset, 2), ... } ... for d in outputs.word_offsets ... ] >>> >>> >>> word_offsets[:4] [{'word': 'WHY', 'start_time': 1.42, 'end_time': 1.54}, {'word': 'DOES', 'start_time': 1.66, 'end_time': 1.9}, {'word': 'MILISANDRA', 'start_time': 2.26, 'end_time': 2.9}, {'word': 'LOOK', 'start_time': 3.0, 'end_time': 3.16}] ``` ### Decoding multiple audios If you are planning to decode multiple batches of audios, you should consider using [batch\_decode()](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2ProcessorWithLM.batch_decode) and passing an instantiated `multiprocessing.Pool`. Otherwise, [batch\_decode()](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2ProcessorWithLM.batch_decode) performance will be slower than calling [decode()](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2ProcessorWithLM.decode) for each audio individually, as it internally instantiates a new `Pool` for every call. See the example below: ``` >>> >>> from multiprocessing import get_context >>> from transformers import AutoTokenizer, AutoProcessor, AutoModelForCTC >>> from datasets import load_dataset >>> import datasets >>> import torch >>> >>> model = AutoModelForCTC.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm").to("cuda") >>> processor = AutoProcessor.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm") >>> >>> dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> dataset = dataset.cast_column("audio", datasets.Audio(sampling_rate=16_000)) >>> def map_to_array(batch): ... batch["speech"] = batch["audio"]["array"] ... return batch >>> >>> dataset = dataset.map(map_to_array, remove_columns=["audio"]) >>> def map_to_pred(batch, pool): ... inputs = processor(batch["speech"], sampling_rate=16_000, padding=True, return_tensors="pt") ... inputs = {k: v.to("cuda") for k, v in inputs.items()} ... with torch.no_grad(): ... logits = model(**inputs).logits ... transcription = processor.batch_decode(logits.cpu().numpy(), pool).text ... batch["transcription"] = transcription ... return batch >>> >>> >>> >>> with get_context("fork").Pool(processes=2) as pool: ... result = dataset.map( ... map_to_pred, batched=True, batch_size=2, fn_kwargs={"pool": pool}, remove_columns=["speech"] ... 
) >>> result["transcription"][:2] ['MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL', "NOR IS MISTER COULTER'S MANNER LESS INTERESTING THAN HIS MATTER"] ``` ## Wav2Vec2 specific outputs ### class transformers.models.wav2vec2\_with\_lm.processing\_wav2vec2\_with\_lm.Wav2Vec2DecoderWithLMOutput [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py#L45) ( text: typing.Union\[typing.List\[typing.List\[str\]\], typing.List\[str\], str\]logit\_score: typing.Union\[typing.List\[typing.List\[float\]\], typing.List\[float\], float\] = Nonelm\_score: typing.Union\[typing.List\[typing.List\[float\]\], typing.List\[float\], float\] = Noneword\_offsets: typing.Union\[typing.List\[typing.List\[typing.List\[typing.Dict\[str, typing.Union\[int, str\]\]\]\]\], typing.List\[typing.List\[typing.Dict\[str, typing.Union\[int, str\]\]\]\], typing.List\[typing.Dict\[str, typing.Union\[int, str\]\]\]\] = None ) Parameters - **text** (list of `str` or `str`) — Decoded logits in text form. Usually the speech transcription. - **logit\_score** (list of `float` or `float`) — Total logit score of the beams associated with produced text. - **lm\_score** (list of `float`) — Fused lm\_score of the beams associated with produced text. - **word\_offsets** (list of `List[Dict[str, Union[int, str]]]` or `List[Dict[str, Union[int, str]]]`) — Offsets of the decoded words. In combination with sampling rate and model downsampling rate, word offsets can be used to compute time stamps for each word. Output type of `Wav2Vec2DecoderWithLM`, with transcription. ### class transformers.modeling\_outputs.Wav2Vec2BaseModelOutput [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/modeling_outputs.py#L1286) ( last\_hidden\_state: FloatTensor = Noneextract\_features: FloatTensor = Nonehidden\_states: typing.Optional\[typing.Tuple\[torch.FloatTensor\]\] = Noneattentions: typing.Optional\[typing.Tuple\[torch.FloatTensor\]\] = None ) Base class for models that have been trained with the Wav2Vec2 loss objective. ### class transformers.models.wav2vec2.modeling\_wav2vec2.Wav2Vec2ForPreTrainingOutput [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L100) ( loss: typing.Optional\[torch.FloatTensor\] = Noneprojected\_states: FloatTensor = Noneprojected\_quantized\_states: FloatTensor = Nonecodevector\_perplexity: FloatTensor = Nonehidden\_states: typing.Optional\[typing.Tuple\[torch.FloatTensor\]\] = Noneattentions: typing.Optional\[typing.Tuple\[torch.FloatTensor\]\] = Nonecontrastive\_loss: typing.Optional\[torch.FloatTensor\] = Nonediversity\_loss: typing.Optional\[torch.FloatTensor\] = None ) Output type of [Wav2Vec2ForPreTraining](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2ForPreTraining), with potential hidden states and attentions. ### class transformers.models.wav2vec2.modeling\_flax\_wav2vec2.FlaxWav2Vec2BaseModelOutput [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_flax_wav2vec2.py#L45) ( last\_hidden\_state: Array = Noneextract\_features: Array = Nonehidden\_states: typing.Optional\[typing.Tuple\[jax.Array\]\] = Noneattentions: typing.Optional\[typing.Tuple\[jax.Array\]\] = None ) Output type of `FlaxWav2Vec2BaseModelOutput`, with potential hidden states and attentions. 
Returns a new object replacing the specified fields with new values. ### class transformers.models.wav2vec2.modeling\_flax\_wav2vec2.FlaxWav2Vec2ForPreTrainingOutput [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_flax_wav2vec2.py#L75) ( projected\_states: Array = Noneprojected\_quantized\_states: Array = Nonecodevector\_perplexity: Array = Nonehidden\_states: typing.Optional\[typing.Tuple\[jax.Array\]\] = Noneattentions: typing.Optional\[typing.Tuple\[jax.Array\]\] = None ) Output type of `FlaxWav2Vec2ForPreTrainingOutput`, with potential hidden states and attentions. Returns a new object replacing the specified fields with new values. ## Wav2Vec2Model ### class transformers.Wav2Vec2Model [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1456) ( config: Wav2Vec2Config ) Parameters - **config** ([Wav2Vec2Config](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Config)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. The bare Wav2Vec2 Model transformer outputting raw hidden-states without any specific head on top. Wav2Vec2 was proposed in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli. This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving etc.). This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1542) ( input\_values: typing.Optional\[torch.Tensor\]attention\_mask: typing.Optional\[torch.Tensor\] = Nonemask\_time\_indices: typing.Optional\[torch.FloatTensor\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.Wav2Vec2BaseModelOutput](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.modeling_outputs.Wav2Vec2BaseModelOutput) or `tuple(torch.FloatTensor)` The [Wav2Vec2Model](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Model) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example: ``` >>> from transformers import AutoProcessor, Wav2Vec2Model >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation") >>> dataset = dataset.sort("id") >>> sampling_rate = dataset.features["audio"].sampling_rate >>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h") >>> model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base-960h") >>> >>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt") >>> with torch.no_grad(): ... outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state >>> list(last_hidden_states.shape) [1, 292, 768] ``` ## Wav2Vec2ForCTC ### class transformers.Wav2Vec2ForCTC [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1875) ( configtarget\_lang: typing.Optional\[str\] = None ) Parameters - **config** ([Wav2Vec2Config](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Config)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. Wav2Vec2 Model with a `language modeling` head on top for Connectionist Temporal Classification (CTC). Wav2Vec2 was proposed in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli. This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving etc.). This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1947) ( input\_values: typing.Optional\[torch.Tensor\]attention\_mask: typing.Optional\[torch.Tensor\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = Nonelabels: typing.Optional\[torch.Tensor\] = None ) → [transformers.modeling\_outputs.CausalLMOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.CausalLMOutput) or `tuple(torch.FloatTensor)` The [Wav2Vec2ForCTC](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2ForCTC) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example: ``` >>> from transformers import AutoProcessor, Wav2Vec2ForCTC >>> from datasets import load_dataset >>> import torch >>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation") >>> dataset = dataset.sort("id") >>> sampling_rate = dataset.features["audio"].sampling_rate >>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h") >>> model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h") >>> >>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_ids = torch.argmax(logits, dim=-1) >>> >>> transcription = processor.batch_decode(predicted_ids) >>> transcription[0] 'MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL' >>> inputs["labels"] = processor(text=dataset[0]["text"], return_tensors="pt").input_ids >>> >>> loss = model(**inputs).loss >>> round(loss.item(), 2) 53.48 ``` #### load\_adapter [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1209) ( target\_lang: strforce\_load = True\*\*kwargs ) Load a language adapter model from a pre-trained adapter model. Activate the special [“offline-mode”](https://huggingface.co/transformers/installation.html#offline-mode) to use this method in a firewalled environment. Examples: ``` >>> from transformers import Wav2Vec2ForCTC, AutoProcessor >>> ckpt = "facebook/mms-1b-all" >>> processor = AutoProcessor.from_pretrained(ckpt) >>> model = Wav2Vec2ForCTC.from_pretrained(ckpt, target_lang="eng") >>> >>> processor.tokenizer.set_target_lang("spa") >>> model.load_adapter("spa") ``` ## Wav2Vec2ForSequenceClassification ### class transformers.Wav2Vec2ForSequenceClassification [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L2034) ( config ) Parameters - **config** ([Wav2Vec2Config](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Config)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. Wav2Vec2 Model with a sequence classification head on top (a linear layer over the pooled output) for tasks like SUPERB Keyword Spotting. Wav2Vec2 was proposed in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli. This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving etc.). This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
#### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L2079) ( input\_values: typing.Optional\[torch.Tensor\]attention\_mask: typing.Optional\[torch.Tensor\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = Nonelabels: typing.Optional\[torch.Tensor\] = None ) → [transformers.modeling\_outputs.SequenceClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput) or `tuple(torch.FloatTensor)` The [Wav2Vec2ForSequenceClassification](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2ForSequenceClassification) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: ``` >>> from transformers import AutoFeatureExtractor, Wav2Vec2ForSequenceClassification >>> from datasets import load_dataset >>> import torch >>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation") >>> dataset = dataset.sort("id") >>> sampling_rate = dataset.features["audio"].sampling_rate >>> feature_extractor = AutoFeatureExtractor.from_pretrained("superb/wav2vec2-base-superb-ks") >>> model = Wav2Vec2ForSequenceClassification.from_pretrained("superb/wav2vec2-base-superb-ks") >>> >>> inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_class_ids = torch.argmax(logits, dim=-1).item() >>> predicted_label = model.config.id2label[predicted_class_ids] >>> predicted_label '_unknown_' >>> >>> target_label = model.config.id2label[0] >>> inputs["labels"] = torch.tensor([model.config.label2id[target_label]]) >>> loss = model(**inputs).loss >>> round(loss.item(), 2) 6.54 ``` ## Wav2Vec2ForAudioFrameClassification ### class transformers.Wav2Vec2ForAudioFrameClassification [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L2156) ( config ) Parameters - **config** ([Wav2Vec2Config](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Config)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. Wav2Vec2 Model with a frame classification head on top for tasks like Speaker Diarization. Wav2Vec2 was proposed in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli. This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving etc.). This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. 
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L2200) ( input\_values: typing.Optional\[torch.Tensor\]attention\_mask: typing.Optional\[torch.Tensor\] = Nonelabels: typing.Optional\[torch.Tensor\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.TokenClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.TokenClassifierOutput) or `tuple(torch.FloatTensor)` The [Wav2Vec2ForAudioFrameClassification](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2ForAudioFrameClassification) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: ``` >>> from transformers import AutoFeatureExtractor, Wav2Vec2ForAudioFrameClassification >>> from datasets import load_dataset >>> import torch >>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation") >>> dataset = dataset.sort("id") >>> sampling_rate = dataset.features["audio"].sampling_rate >>> feature_extractor = AutoFeatureExtractor.from_pretrained("anton-l/wav2vec2-base-superb-sd") >>> model = Wav2Vec2ForAudioFrameClassification.from_pretrained("anton-l/wav2vec2-base-superb-sd") >>> >>> inputs = feature_extractor(dataset[0]["audio"]["array"], return_tensors="pt", sampling_rate=sampling_rate) >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> probabilities = torch.sigmoid(logits[0]) >>> >>> labels = (probabilities > 0.5).long() >>> labels[0].tolist() [0, 0] ``` ## Wav2Vec2ForXVector ### class transformers.Wav2Vec2ForXVector [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L2317) ( config ) Parameters - **config** ([Wav2Vec2Config](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Config)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. Wav2Vec2 Model with an XVector feature extraction head on top for tasks like Speaker Verification. Wav2Vec2 was proposed in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli. This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving etc.). This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
#### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L2379) ( input\_values: typing.Optional\[torch.Tensor\]attention\_mask: typing.Optional\[torch.Tensor\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = Nonelabels: typing.Optional\[torch.Tensor\] = None ) → [transformers.modeling\_outputs.XVectorOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.XVectorOutput) or `tuple(torch.FloatTensor)` The [Wav2Vec2ForXVector](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2ForXVector) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: ``` >>> from transformers import AutoFeatureExtractor, Wav2Vec2ForXVector >>> from datasets import load_dataset >>> import torch >>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation") >>> dataset = dataset.sort("id") >>> sampling_rate = dataset.features["audio"].sampling_rate >>> feature_extractor = AutoFeatureExtractor.from_pretrained("anton-l/wav2vec2-base-superb-sv") >>> model = Wav2Vec2ForXVector.from_pretrained("anton-l/wav2vec2-base-superb-sv") >>> >>> inputs = feature_extractor( ... [d["array"] for d in dataset[:2]["audio"]], sampling_rate=sampling_rate, return_tensors="pt", padding=True ... ) >>> with torch.no_grad(): ... embeddings = model(**inputs).embeddings >>> embeddings = torch.nn.functional.normalize(embeddings, dim=-1).cpu() >>> >>> cosine_sim = torch.nn.CosineSimilarity(dim=-1) >>> similarity = cosine_sim(embeddings[0], embeddings[1]) >>> threshold = 0.7 >>> if similarity < threshold: ... print("Speakers are not the same!") >>> round(similarity.item(), 2) 0.98 ``` ## Wav2Vec2ForPreTraining ### class transformers.Wav2Vec2ForPreTraining [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1604) ( config: Wav2Vec2Config ) Parameters - **config** ([Wav2Vec2Config](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Config)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. Wav2Vec2 Model with a quantizer and `VQ` head on top. Wav2Vec2 was proposed in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli. This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving etc.). This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
#### forward

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1664)

( input\_values: typing.Optional\[torch.Tensor\], attention\_mask: typing.Optional\[torch.Tensor\] = None, mask\_time\_indices: typing.Optional\[torch.BoolTensor\] = None, sampled\_negative\_indices: typing.Optional\[torch.BoolTensor\] = None, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None ) → [transformers.models.wav2vec2.modeling\_wav2vec2.Wav2Vec2ForPreTrainingOutput](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForPreTrainingOutput) or `tuple(torch.FloatTensor)`

The [Wav2Vec2ForPreTraining](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2ForPreTraining) forward method overrides the `__call__` special method. Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

```
>>> import torch
>>> from transformers import AutoFeatureExtractor, Wav2Vec2ForPreTraining
>>> from transformers.models.wav2vec2.modeling_wav2vec2 import _compute_mask_indices, _sample_negative_indices
>>> from datasets import load_dataset

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")
>>> model = Wav2Vec2ForPreTraining.from_pretrained("facebook/wav2vec2-base")

>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> input_values = feature_extractor(ds[0]["audio"]["array"], return_tensors="pt").input_values

>>> # compute masked indices and sample negatives
>>> batch_size, raw_sequence_length = input_values.shape
>>> sequence_length = model._get_feat_extract_output_lengths(raw_sequence_length).item()
>>> mask_time_indices = _compute_mask_indices(
...     shape=(batch_size, sequence_length), mask_prob=0.2, mask_length=2
... )
>>> sampled_negative_indices = _sample_negative_indices(
...     features_shape=(batch_size, sequence_length),
...     num_negatives=model.config.num_negatives,
...     mask_time_indices=mask_time_indices,
... )
>>> mask_time_indices = torch.tensor(data=mask_time_indices, device=input_values.device, dtype=torch.long)
>>> sampled_negative_indices = torch.tensor(
...     data=sampled_negative_indices, device=input_values.device, dtype=torch.long
... )

>>> with torch.no_grad():
...     outputs = model(input_values, mask_time_indices=mask_time_indices)

>>> # compute cosine similarity between predicted (=projected_states) and target (=projected_quantized_states)
>>> cosine_sim = torch.cosine_similarity(outputs.projected_states, outputs.projected_quantized_states, dim=-1)

>>> # show that the cosine similarity is much higher than random
>>> cosine_sim[mask_time_indices.to(torch.bool)].mean() > 0.5
tensor(True)

>>> # for contrastive-loss training the model should be put into train mode
>>> model = model.train()
>>> loss = model(
...     input_values, mask_time_indices=mask_time_indices, sampled_negative_indices=sampled_negative_indices
... ).loss
```

## TFWav2Vec2Model

### class transformers.TFWav2Vec2Model

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py#L1353)

( \*args, \*\*kwargs )

Parameters

- **config** ([Wav2Vec2Config](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Config)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration.
Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

The bare TFWav2Vec2 Model transformer outputting raw hidden-states without any specific head on top.

This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)

This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

TensorFlow models and layers in `transformers` accept two formats as input:

- having all inputs as keyword arguments (like PyTorch models), or
- having all inputs as a list, tuple or dict in the first positional argument.

The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like `model.fit()` things should “just work” for you - just pass your inputs and labels in any format that `model.fit()` supports!

If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

- a single Tensor with `input_values` only and nothing else: `model(input_values)`
- a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_values, attention_mask])` or `model([input_values, attention_mask, token_type_ids])`
- a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_values": input_values, "token_type_ids": token_type_ids})`

Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function!

#### call

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py#L1359)

( input\_values: tf.Tensor, attention\_mask: tf.Tensor | None = None, token\_type\_ids: tf.Tensor | None = None, position\_ids: tf.Tensor | None = None, head\_mask: tf.Tensor | None = None, inputs\_embeds: tf.Tensor | None = None, output\_attentions: Optional\[bool\] = None, output\_hidden\_states: Optional\[bool\] = None, return\_dict: Optional\[bool\] = None, training: bool = False ) → [transformers.modeling\_tf\_outputs.TFBaseModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFBaseModelOutput) or `tuple(tf.Tensor)`

The [TFWav2Vec2Model](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.TFWav2Vec2Model) forward method overrides the `__call__` special method. Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:

```
>>> from transformers import AutoProcessor, TFWav2Vec2Model
>>> from datasets import load_dataset
>>> import soundfile as sf

>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")
>>> model = TFWav2Vec2Model.from_pretrained("facebook/wav2vec2-base-960h")

>>> def map_to_array(batch):
...     speech, _ = sf.read(batch["file"])
...     batch["speech"] = speech
...     return batch

>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> ds = ds.map(map_to_array)

>>> input_values = processor(ds["speech"][0], return_tensors="tf").input_values  # Batch size 1
>>> hidden_states = model(input_values).last_hidden_state
```

## TFWav2Vec2ForSequenceClassification

### class transformers.TFWav2Vec2ForSequenceClassification

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py#L1576)

( \*args, \*\*kwargs )

#### call

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py#L1617)

( input\_values: tf.Tensor, attention\_mask: tf.Tensor | None = None, output\_attentions: bool | None = None, output\_hidden\_states: bool | None = None, return\_dict: bool | None = None, labels: tf.Tensor | None = None, training: bool = False )

## TFWav2Vec2ForCTC

### class transformers.TFWav2Vec2ForCTC

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py#L1427)

( \*args, \*\*kwargs )

Parameters

- **config** ([Wav2Vec2Config](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Config)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

TFWav2Vec2 Model with a `language modeling` head on top for Connectionist Temporal Classification (CTC).

This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)

This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

TensorFlow models and layers in `transformers` accept two formats as input:

- having all inputs as keyword arguments (like PyTorch models), or
- having all inputs as a list, tuple or dict in the first positional argument.

The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like `model.fit()` things should “just work” for you - just pass your inputs and labels in any format that `model.fit()` supports!
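To make the two accepted input formats concrete, here is a brief sketch; it assumes `input_values` and `attention_mask` tensors prepared with the processor as in the examples on this page, and the two calls are equivalent:

```
>>> # 1) all inputs passed as keyword arguments, as with the PyTorch models
>>> outputs = model(input_values=input_values, attention_mask=attention_mask)

>>> # 2) all inputs packed into the first positional argument as a dict,
>>> #    which is the format Keras methods such as `model.fit()` use internally
>>> outputs = model({"input_values": input_values, "attention_mask": attention_mask})
```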
If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

- a single Tensor with `input_values` only and nothing else: `model(input_values)`
- a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_values, attention_mask])` or `model([input_values, attention_mask, token_type_ids])`
- a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_values": input_values, "token_type_ids": token_type_ids})`

Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function!

#### call

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py#L1454)

( input\_values: tf.Tensor, attention\_mask: tf.Tensor | None = None, token\_type\_ids: tf.Tensor | None = None, position\_ids: tf.Tensor | None = None, head\_mask: tf.Tensor | None = None, inputs\_embeds: tf.Tensor | None = None, output\_attentions: Optional\[bool\] = None, labels: tf.Tensor | None = None, output\_hidden\_states: Optional\[bool\] = None, return\_dict: Optional\[bool\] = None, training: Optional\[bool\] = False ) → [transformers.modeling\_tf\_outputs.TFCausalLMOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFCausalLMOutput) or `tuple(tf.Tensor)`

The [TFWav2Vec2ForCTC](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.TFWav2Vec2ForCTC) forward method overrides the `__call__` special method. Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

```
>>> import tensorflow as tf
>>> from transformers import AutoProcessor, TFWav2Vec2ForCTC
>>> from datasets import load_dataset
>>> import soundfile as sf

>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")
>>> model = TFWav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

>>> def map_to_array(batch):
...     speech, _ = sf.read(batch["file"])
...     batch["speech"] = speech
...     return batch

>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> ds = ds.map(map_to_array)

>>> input_values = processor(ds["speech"][0], return_tensors="tf").input_values  # Batch size 1
>>> logits = model(input_values).logits
>>> predicted_ids = tf.argmax(logits, axis=-1)

>>> transcription = processor.decode(predicted_ids[0])

>>> # compute the CTC loss against a reference transcription
>>> target_transcription = "A MAN SAID TO THE UNIVERSE SIR I EXIST"

>>> # pass the target transcription as `text` to encode the labels
>>> labels = processor(text=target_transcription, return_tensors="tf").input_ids

>>> loss = model(input_values, labels=labels).loss
```

## FlaxWav2Vec2Model

### class transformers.FlaxWav2Vec2Model

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_flax_wav2vec2.py#L1055)

( config: Wav2Vec2Config, input\_shape: typing.Tuple = (1, 1024), seed: int = 0, dtype: dtype = <class 'jax.numpy.float32'>, \_do\_init: bool = True, \*\*kwargs )

Parameters

- **config** ([Wav2Vec2Config](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Config)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained) method to load the model weights.
- **dtype** (`jax.numpy.dtype`, _optional_, defaults to `jax.numpy.float32`) — The data type of the computation. Can be one of `jax.numpy.float32`, `jax.numpy.float16` (on GPUs) and `jax.numpy.bfloat16` (on TPUs). This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified, all the computation will be performed with the given `dtype`. **Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.** If you wish to change the dtype of the model parameters, see [to\_fp16()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_fp16) and [to\_bf16()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_bf16).

The bare Wav2Vec2 Model transformer outputting raw hidden-states without any specific head on top.

Wav2Vec2 was proposed in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.

This model inherits from [FlaxPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)

This model is also a Flax Linen [flax.nn.Module](https://flax.readthedocs.io/en/latest/_autosummary/flax.nn.module.html) subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:

- [Just-In-Time (JIT) compilation](https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit)
- [Automatic Differentiation](https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation)
- [Vectorization](https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap)
- [Parallelization](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap)

The `FlaxWav2Vec2PreTrainedModel` forward method overrides the `__call__` special method. Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

```
>>> from transformers import AutoProcessor, FlaxWav2Vec2Model
>>> from datasets import load_dataset
>>> import soundfile as sf

>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-large-lv60")
>>> model = FlaxWav2Vec2Model.from_pretrained("facebook/wav2vec2-large-lv60")

>>> def map_to_array(batch):
...     speech, _ = sf.read(batch["file"])
...     batch["speech"] = speech
...     return batch

>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> ds = ds.map(map_to_array)

>>> input_values = processor(
...     ds["speech"][0], sampling_rate=16_000, return_tensors="np"
... ).input_values  # Batch size 1
>>> hidden_states = model(input_values).last_hidden_state
```

## FlaxWav2Vec2ForCTC

### class transformers.FlaxWav2Vec2ForCTC

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_flax_wav2vec2.py#L1173)

( config: Wav2Vec2Config, input\_shape: typing.Tuple = (1, 1024), seed: int = 0, dtype: dtype = <class 'jax.numpy.float32'>, \_do\_init: bool = True, \*\*kwargs )

Parameters

- **config** ([Wav2Vec2Config](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Config)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained) method to load the model weights.
- **dtype** (`jax.numpy.dtype`, _optional_, defaults to `jax.numpy.float32`) — The data type of the computation. Can be one of `jax.numpy.float32`, `jax.numpy.float16` (on GPUs) and `jax.numpy.bfloat16` (on TPUs). This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified, all the computation will be performed with the given `dtype`. **Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.** If you wish to change the dtype of the model parameters, see [to\_fp16()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_fp16) and [to\_bf16()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_bf16).

Wav2Vec2 Model with a `language modeling` head on top for Connectionist Temporal Classification (CTC).

Wav2Vec2 was proposed in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.

This model inherits from [FlaxPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel).
Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)

This model is also a Flax Linen [flax.nn.Module](https://flax.readthedocs.io/en/latest/_autosummary/flax.nn.module.html) subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.

Finally, this model supports inherent JAX features such as:

- [Just-In-Time (JIT) compilation](https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit)
- [Automatic Differentiation](https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation)
- [Vectorization](https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap)
- [Parallelization](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap)

#### \_\_call\_\_

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_flax_wav2vec2.py#L888)

( input\_values, attention\_mask = None, mask\_time\_indices = None, params: dict = None, dropout\_rng: PRNGKey = None, train: bool = False, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, freeze\_feature\_encoder: bool = False, return\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_flax\_outputs.FlaxMaskedLMOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxMaskedLMOutput) or `tuple(jnp.ndarray)`

The `FlaxWav2Vec2PreTrainedModel` forward method overrides the `__call__` special method. Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

```
>>> import jax.numpy as jnp
>>> from transformers import AutoProcessor, FlaxWav2Vec2ForCTC
>>> from datasets import load_dataset
>>> import soundfile as sf

>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-large-960h-lv60")
>>> model = FlaxWav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h-lv60")

>>> def map_to_array(batch):
...     speech, _ = sf.read(batch["file"])
...     batch["speech"] = speech
...     return batch

>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> ds = ds.map(map_to_array)

>>> input_values = processor(
...     ds["speech"][0], sampling_rate=16_000, return_tensors="np"
... ).input_values  # Batch size 1
>>> logits = model(input_values).logits
>>> predicted_ids = jnp.argmax(logits, axis=-1)

>>> transcription = processor.decode(predicted_ids[0])
>>> # the transcription should read: "A MAN SAID TO THE UNIVERSE SIR I EXIST"
```

## FlaxWav2Vec2ForPreTraining

### class transformers.FlaxWav2Vec2ForPreTraining

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_flax_wav2vec2.py#L1319)

( config: Wav2Vec2Config, input\_shape: typing.Tuple = (1, 1024), seed: int = 0, dtype: dtype = <class 'jax.numpy.float32'>, \_do\_init: bool = True, \*\*kwargs )

Parameters

- **config** ([Wav2Vec2Config](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Config)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration.
Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained) method to load the model weights.
- **dtype** (`jax.numpy.dtype`, _optional_, defaults to `jax.numpy.float32`) — The data type of the computation. Can be one of `jax.numpy.float32`, `jax.numpy.float16` (on GPUs) and `jax.numpy.bfloat16` (on TPUs). This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified, all the computation will be performed with the given `dtype`. **Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.** If you wish to change the dtype of the model parameters, see [to\_fp16()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_fp16) and [to\_bf16()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_bf16).

Wav2Vec2 Model with a quantizer and `VQ` head on top.

Wav2Vec2 was proposed in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.

This model inherits from [FlaxPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)

This model is also a Flax Linen [flax.nn.Module](https://flax.readthedocs.io/en/latest/_autosummary/flax.nn.module.html) subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.

Finally, this model supports inherent JAX features such as:

- [Just-In-Time (JIT) compilation](https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit)
- [Automatic Differentiation](https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation)
- [Vectorization](https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap)
- [Parallelization](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap)

#### \_\_call\_\_

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_flax_wav2vec2.py#L1322)

( input\_values, attention\_mask = None, mask\_time\_indices = None, gumbel\_temperature: int = 1, params: dict = None, dropout\_rng: PRNGKey = None, gumbel\_rng: PRNGKey = None, train: bool = False, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, freeze\_feature\_encoder: bool = False, return\_dict: typing.Optional\[bool\] = None ) → [transformers.models.wav2vec2.modeling\_flax\_wav2vec2.FlaxWav2Vec2ForPreTrainingOutput](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2ForPreTrainingOutput) or `tuple(jnp.ndarray)`

The [FlaxWav2Vec2ForPreTraining](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.FlaxWav2Vec2ForPreTraining) forward method overrides the `__call__` special method. Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:

```
>>> import optax
>>> import numpy as np
>>> import jax.numpy as jnp
>>> from transformers import AutoFeatureExtractor, FlaxWav2Vec2ForPreTraining
>>> from transformers.models.wav2vec2.modeling_flax_wav2vec2 import _compute_mask_indices
>>> from datasets import load_dataset
>>> import soundfile as sf

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-large-lv60")
>>> model = FlaxWav2Vec2ForPreTraining.from_pretrained("facebook/wav2vec2-large-lv60")

>>> def map_to_array(batch):
...     speech, _ = sf.read(batch["file"])
...     batch["speech"] = speech
...     return batch

>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> ds = ds.map(map_to_array)

>>> input_values = feature_extractor(ds["speech"][0], return_tensors="np").input_values  # Batch size 1

>>> # compute masked indices
>>> batch_size, raw_sequence_length = input_values.shape
>>> sequence_length = model._get_feat_extract_output_lengths(raw_sequence_length)
>>> mask_time_indices = _compute_mask_indices((batch_size, sequence_length), mask_prob=0.2, mask_length=2)

>>> outputs = model(input_values, mask_time_indices=mask_time_indices)

>>> # compute cosine similarity between predicted (=projected_states) and target (=projected_quantized_states)
>>> cosine_sim = optax.cosine_similarity(outputs.projected_states, outputs.projected_quantized_states)

>>> # show that the cosine similarity is much higher than random
>>> assert np.asarray(cosine_sim)[mask_time_indices].mean() > 0.5
```
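The JAX features listed above (JIT compilation, automatic differentiation, vectorization and parallelization) apply to all of the Flax classes on this page. As a minimal sketch, the forward pass of FlaxWav2Vec2Model can be jit-compiled by passing the parameters to the call explicitly; the dummy audio below is only there to give the traced function a concrete shape:

```
>>> import jax
>>> import numpy as np
>>> from transformers import AutoFeatureExtractor, FlaxWav2Vec2Model

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-large-lv60")
>>> model = FlaxWav2Vec2Model.from_pretrained("facebook/wav2vec2-large-lv60")

>>> # one second of dummy audio at 16 kHz
>>> speech = np.zeros(16_000, dtype=np.float32)
>>> input_values = feature_extractor(speech, sampling_rate=16_000, return_tensors="np").input_values

>>> # jit-compile the forward pass; the first call traces and compiles,
>>> # subsequent calls with the same input shape reuse the compiled function
>>> @jax.jit
... def encode(input_values, params):
...     return model(input_values, params=params).last_hidden_state

>>> hidden_states = encode(input_values, model.params)
```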
Integration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/deepspeed&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/deepspeed&quot;},{&quot;title&quot;:&quot;Feature Extractor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/feature_extractor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/feature_extractor&quot;},{&quot;title&quot;:&quot;Image Processor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/image_processor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/image_processor&quot;}]},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Text models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALBERT&quot;,&quot;id&quot;:&quot;model_doc/albert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/albert&quot;},{&quot;title&quot;:&quot;BART&quot;,&quot;id&quot;:&quot;model_doc/bart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bart&quot;},{&quot;title&quot;:&quot;BARThez&quot;,&quot;id&quot;:&quot;model_doc/barthez&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/barthez&quot;},{&quot;title&quot;:&quot;BARTpho&quot;,&quot;id&quot;:&quot;model_doc/bartpho&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bartpho&quot;},{&quot;title&quot;:&quot;BERT&quot;,&quot;id&quot;:&quot;model_doc/bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert&quot;},{&quot;title&quot;:&quot;BertGeneration&quot;,&quot;id&quot;:&quot;model_doc/bert-generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-generation&quot;},{&quot;title&quot;:&quot;BertJapanese&quot;,&quot;id&quot;:&quot;model_doc/bert-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-japanese&quot;},{&quot;title&quot;:&quot;Bertweet&quot;,&quot;id&quot;:&quot;model_doc/bertweet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bertweet&quot;},{&quot;title&quot;:&quot;BigBird&quot;,&quot;id&quot;:&quot;model_doc/big_bird&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/big_bird&quot;},{&quot;title&quot;:&quot;BigBirdPegasus&quot;,&quot;id&quot;:&quot;model_doc/bigbird_pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bigbird_pegasus&quot;},{&quot;title&quot;:&quot;BioGpt&quot;,&quot;id&quot;:&quot;model_doc/biogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/biogpt&quot;},{&quot;title&quot;:&quot;Blenderbot&quot;,&quot;id&quot;:&quot;model_doc/blenderbot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot&quot;},{&quot;title&quot;:&quot;Blenderbot 
Small&quot;,&quot;id&quot;:&quot;model_doc/blenderbot-small&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot-small&quot;},{&quot;title&quot;:&quot;BLOOM&quot;,&quot;id&quot;:&quot;model_doc/bloom&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bloom&quot;},{&quot;title&quot;:&quot;BORT&quot;,&quot;id&quot;:&quot;model_doc/bort&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bort&quot;},{&quot;title&quot;:&quot;ByT5&quot;,&quot;id&quot;:&quot;model_doc/byt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/byt5&quot;},{&quot;title&quot;:&quot;CamemBERT&quot;,&quot;id&quot;:&quot;model_doc/camembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/camembert&quot;},{&quot;title&quot;:&quot;CANINE&quot;,&quot;id&quot;:&quot;model_doc/canine&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/canine&quot;},{&quot;title&quot;:&quot;CodeGen&quot;,&quot;id&quot;:&quot;model_doc/codegen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/codegen&quot;},{&quot;title&quot;:&quot;CodeLlama&quot;,&quot;id&quot;:&quot;model_doc/code_llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/code_llama&quot;},{&quot;title&quot;:&quot;ConvBERT&quot;,&quot;id&quot;:&quot;model_doc/convbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convbert&quot;},{&quot;title&quot;:&quot;CPM&quot;,&quot;id&quot;:&quot;model_doc/cpm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpm&quot;},{&quot;title&quot;:&quot;CPMANT&quot;,&quot;id&quot;:&quot;model_doc/cpmant&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpmant&quot;},{&quot;title&quot;:&quot;CTRL&quot;,&quot;id&quot;:&quot;model_doc/ctrl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ctrl&quot;},{&quot;title&quot;:&quot;DeBERTa&quot;,&quot;id&quot;:&quot;model_doc/deberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta&quot;},{&quot;title&quot;:&quot;DeBERTa-v2&quot;,&quot;id&quot;:&quot;model_doc/deberta-v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta-v2&quot;},{&quot;title&quot;:&quot;DialoGPT&quot;,&quot;id&quot;:&quot;model_doc/dialogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dialogpt&quot;},{&quot;title&quot;:&quot;DistilBERT&quot;,&quot;id&quot;:&quot;model_doc/distilbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/distilbert&quot;},{&quot;title&quot;:&quot;DPR&quot;,&quot;id&quot;:&quot;model_doc/dpr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpr&quot;},{&quot;title&quot;:&quot;ELECTRA&quot;,&quot;id&quot;:&quot;model_doc/electra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/electra&quot;},{&quot;title&quot;:&quot;Encoder Decoder 
Models&quot;,&quot;id&quot;:&quot;model_doc/encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encoder-decoder&quot;},{&quot;title&quot;:&quot;ERNIE&quot;,&quot;id&quot;:&quot;model_doc/ernie&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie&quot;},{&quot;title&quot;:&quot;ErnieM&quot;,&quot;id&quot;:&quot;model_doc/ernie_m&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie_m&quot;},{&quot;title&quot;:&quot;ESM&quot;,&quot;id&quot;:&quot;model_doc/esm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/esm&quot;},{&quot;title&quot;:&quot;Falcon&quot;,&quot;id&quot;:&quot;model_doc/falcon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/falcon&quot;},{&quot;title&quot;:&quot;FLAN-T5&quot;,&quot;id&quot;:&quot;model_doc/flan-t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-t5&quot;},{&quot;title&quot;:&quot;FLAN-UL2&quot;,&quot;id&quot;:&quot;model_doc/flan-ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-ul2&quot;},{&quot;title&quot;:&quot;FlauBERT&quot;,&quot;id&quot;:&quot;model_doc/flaubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flaubert&quot;},{&quot;title&quot;:&quot;FNet&quot;,&quot;id&quot;:&quot;model_doc/fnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fnet&quot;},{&quot;title&quot;:&quot;FSMT&quot;,&quot;id&quot;:&quot;model_doc/fsmt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fsmt&quot;},{&quot;title&quot;:&quot;Funnel Transformer&quot;,&quot;id&quot;:&quot;model_doc/funnel&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/funnel&quot;},{&quot;title&quot;:&quot;GPT&quot;,&quot;id&quot;:&quot;model_doc/openai-gpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/openai-gpt&quot;},{&quot;title&quot;:&quot;GPT Neo&quot;,&quot;id&quot;:&quot;model_doc/gpt_neo&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neo&quot;},{&quot;title&quot;:&quot;GPT NeoX&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox&quot;},{&quot;title&quot;:&quot;GPT NeoX Japanese&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox_japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese&quot;},{&quot;title&quot;:&quot;GPT-J&quot;,&quot;id&quot;:&quot;model_doc/gptj&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptj&quot;},{&quot;title&quot;:&quot;GPT2&quot;,&quot;id&quot;:&quot;model_doc/gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt2&quot;},{&quot;title&quot;:&quot;GPTBigCode&quot;,&quot;id&quot;:&quot;model_doc/gpt_bigcode&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode&quot;},{&quot;title&quot;:&quot;GPTSAN 
Japanese&quot;,&quot;id&quot;:&quot;model_doc/gptsan-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese&quot;},{&quot;title&quot;:&quot;GPTSw3&quot;,&quot;id&quot;:&quot;model_doc/gpt-sw3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt-sw3&quot;},{&quot;title&quot;:&quot;HerBERT&quot;,&quot;id&quot;:&quot;model_doc/herbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/herbert&quot;},{&quot;title&quot;:&quot;I-BERT&quot;,&quot;id&quot;:&quot;model_doc/ibert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ibert&quot;},{&quot;title&quot;:&quot;Jukebox&quot;,&quot;id&quot;:&quot;model_doc/jukebox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/jukebox&quot;},{&quot;title&quot;:&quot;LED&quot;,&quot;id&quot;:&quot;model_doc/led&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/led&quot;},{&quot;title&quot;:&quot;LLaMA&quot;,&quot;id&quot;:&quot;model_doc/llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama&quot;},{&quot;title&quot;:&quot;Llama2&quot;,&quot;id&quot;:&quot;model_doc/llama2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama2&quot;},{&quot;title&quot;:&quot;Longformer&quot;,&quot;id&quot;:&quot;model_doc/longformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longformer&quot;},{&quot;title&quot;:&quot;LongT5&quot;,&quot;id&quot;:&quot;model_doc/longt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longt5&quot;},{&quot;title&quot;:&quot;LUKE&quot;,&quot;id&quot;:&quot;model_doc/luke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/luke&quot;},{&quot;title&quot;:&quot;M2M100&quot;,&quot;id&quot;:&quot;model_doc/m2m_100&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/m2m_100&quot;},{&quot;title&quot;:&quot;MarianMT&quot;,&quot;id&quot;:&quot;model_doc/marian&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/marian&quot;},{&quot;title&quot;:&quot;MarkupLM&quot;,&quot;id&quot;:&quot;model_doc/markuplm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/markuplm&quot;},{&quot;title&quot;:&quot;MBart and 
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot;PLBart&quot;,&quot;id&quot;
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/do
cs/transformers/v4.34.0/en/model_doc/wavlm&quot;},{&quot;title&quot;:&quot;Whisper&quot;,&quot;id&quot;:&quot;model_doc/whisper&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/whisper&quot;},{&quot;title&quot;:&quot;XLS-R&quot;,&quot;id&quot;:&quot;model_doc/xls_r&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xls_r&quot;},{&quot;title&quot;:&quot;XLSR-Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/xlsr_wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2&quot;}]},{&quot;title&quot;:&quot;Multimodal models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALIGN&quot;,&quot;id&quot;:&quot;model_doc/align&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/align&quot;},{&quot;title&quot;:&quot;AltCLIP&quot;,&quot;id&quot;:&quot;model_doc/altclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/altclip&quot;},{&quot;title&quot;:&quot;BLIP&quot;,&quot;id&quot;:&quot;model_doc/blip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip&quot;},{&quot;title&quot;:&quot;BLIP-2&quot;,&quot;id&quot;:&quot;model_doc/blip-2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip-2&quot;},{&quot;title&quot;:&quot;BridgeTower&quot;,&quot;id&quot;:&quot;model_doc/bridgetower&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bridgetower&quot;},{&quot;title&quot;:&quot;BROS&quot;,&quot;id&quot;:&quot;model_doc/bros&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bros&quot;},{&quot;title&quot;:&quot;Chinese-CLIP&quot;,&quot;id&quot;:&quot;model_doc/chinese_clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/chinese_clip&quot;},{&quot;title&quot;:&quot;CLIP&quot;,&quot;id&quot;:&quot;model_doc/clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clip&quot;},{&quot;title&quot;:&quot;CLIPSeg&quot;,&quot;id&quot;:&quot;model_doc/clipseg&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clipseg&quot;},{&quot;title&quot;:&quot;Data2Vec&quot;,&quot;id&quot;:&quot;model_doc/data2vec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/data2vec&quot;},{&quot;title&quot;:&quot;DePlot&quot;,&quot;id&quot;:&quot;model_doc/deplot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deplot&quot;},{&quot;title&quot;:&quot;Donut&quot;,&quot;id&quot;:&quot;model_doc/donut&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/donut&quot;},{&quot;title&quot;:&quot;FLAVA&quot;,&quot;id&quot;:&quot;model_doc/flava&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flava&quot;},{&quot;title&quot;:&quot;GIT&quot;,&quot;id&quot;:&quot;model_doc/git&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/git&quot;},{&quot;title&quot;:&quot;GroupViT&quot;,&quot;id&quot;:&quot;model_doc/groupvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/groupvit&quot;},{&quot;title&quot;:&quot;IDEFICS&quot;,&quot;id&quot;:&quot;model_doc/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/idefics&quot;},{&quot;title&quot;:&quot;InstructBLIP&quot;,&quot;id&quot;:&quot;model_doc/instructblip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/instructblip&quot;},{&quot;title&quot;:&quot;LayoutLM&quot;,&quot;id&quot;:&quot;model_doc/layoutlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlm&quot;},{&quot;title&quot;:&qu
ot;LayoutLMV2&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv2&quot;},{&quot;title&quot;:&quot;LayoutLMV3&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv3&quot;},{&quot;title&quot;:&quot;LayoutXLM&quot;,&quot;id&quot;:&quot;model_doc/layoutxlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutxlm&quot;},{&quot;title&quot;:&quot;LiLT&quot;,&quot;id&quot;:&quot;model_doc/lilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lilt&quot;},{&quot;title&quot;:&quot;LXMERT&quot;,&quot;id&quot;:&quot;model_doc/lxmert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lxmert&quot;},{&quot;title&quot;:&quot;MatCha&quot;,&quot;id&quot;:&quot;model_doc/matcha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/matcha&quot;},{&quot;title&quot;:&quot;MGP-STR&quot;,&quot;id&quot;:&quot;model_doc/mgp-str&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mgp-str&quot;},{&quot;title&quot;:&quot;Nougat&quot;,&quot;id&quot;:&quot;model_doc/nougat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nougat&quot;},{&quot;title&quot;:&quot;OneFormer&quot;,&quot;id&quot;:&quot;model_doc/oneformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/oneformer&quot;},{&quot;title&quot;:&quot;OWL-ViT&quot;,&quot;id&quot;:&quot;model_doc/owlvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/owlvit&quot;},{&quot;title&quot;:&quot;Perceiver&quot;,&quot;id&quot;:&quot;model_doc/perceiver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/perceiver&quot;},{&quot;title&quot;:&quot;Pix2Struct&quot;,&quot;id&quot;:&quot;model_doc/pix2struct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pix2struct&quot;},{&quot;title&quot;:&quot;Segment Anything&quot;,&quot;id&quot;:&quot;model_doc/sam&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sam&quot;},{&quot;title&quot;:&quot;Speech Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/speech-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder&quot;},{&quot;title&quot;:&quot;TAPAS&quot;,&quot;id&quot;:&quot;model_doc/tapas&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapas&quot;},{&quot;title&quot;:&quot;TrOCR&quot;,&quot;id&quot;:&quot;model_doc/trocr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trocr&quot;},{&quot;title&quot;:&quot;TVLT&quot;,&quot;id&quot;:&quot;model_doc/tvlt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tvlt&quot;},{&quot;title&quot;:&quot;ViLT&quot;,&quot;id&quot;:&quot;model_doc/vilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vilt&quot;},{&quot;title&quot;:&quot;Vision Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/vision-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder&quot;},{&quot;title&quot;:&quot;Vision Text Dual 
Encoder&quot;,&quot;id&quot;:&quot;model_doc/vision-text-dual-encoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder&quot;},{&quot;title&quot;:&quot;VisualBERT&quot;,&quot;id&quot;:&quot;model_doc/visual_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/visual_bert&quot;},{&quot;title&quot;:&quot;X-CLIP&quot;,&quot;id&quot;:&quot;model_doc/xclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xclip&quot;}]},{&quot;title&quot;:&quot;Reinforcement learning models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Decision Transformer&quot;,&quot;id&quot;:&quot;model_doc/decision_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/decision_transformer&quot;},{&quot;title&quot;:&quot;Trajectory Transformer&quot;,&quot;id&quot;:&quot;model_doc/trajectory_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer&quot;}]},{&quot;title&quot;:&quot;Time series models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Autoformer&quot;,&quot;id&quot;:&quot;model_doc/autoformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/autoformer&quot;},{&quot;title&quot;:&quot;Informer&quot;,&quot;id&quot;:&quot;model_doc/informer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/informer&quot;},{&quot;title&quot;:&quot;Time Series Transformer&quot;,&quot;id&quot;:&quot;model_doc/time_series_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/time_series_transformer&quot;}]},{&quot;title&quot;:&quot;Graph models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Graphormer&quot;,&quot;id&quot;:&quot;model_doc/graphormer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/graphormer&quot;}]}]},{&quot;title&quot;:&quot;Internal Helpers&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Custom Layers and Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/modeling_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/modeling_utils&quot;},{&quot;title&quot;:&quot;Utilities for pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/pipelines_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/pipelines_utils&quot;},{&quot;title&quot;:&quot;Utilities for Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/tokenization_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/tokenization_utils&quot;},{&quot;title&quot;:&quot;Utilities for Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/trainer_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/trainer_utils&quot;},{&quot;title&quot;:&quot;Utilities for Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/generation_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/generation_utils&quot;},{&quot;title&quot;:&quot;Utilities for Image Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/image_processing_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/image_processing_utils&quot;},{&quot;title&quot;:&quot;Utilities for Audio 
processing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/audio_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/audio_utils&quot;},{&quot;title&quot;:&quot;General Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/file_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/file_utils&quot;},{&quot;title&quot;:&quot;Utilities for Time Series&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/time_series_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/time_series_utils&quot;}]}]}],&quot;chapterId&quot;:&quot;model_doc/wav2vec2&quot;,&quot;docType&quot;:&quot;docs&quot;,&quot;isLoggedIn&quot;:false,&quot;lang&quot;:&quot;en&quot;,&quot;langs&quot;:[&quot;de&quot;,&quot;en&quot;,&quot;es&quot;,&quot;fr&quot;,&quot;it&quot;,&quot;ko&quot;,&quot;pt&quot;,&quot;zh&quot;],&quot;library&quot;:&quot;transformers&quot;,&quot;theme&quot;:&quot;light&quot;,&quot;version&quot;:&quot;v4.34.0&quot;,&quot;versions&quot;:[{&quot;version&quot;:&quot;main&quot;},{&quot;version&quot;:&quot;v4.34.0&quot;},{&quot;version&quot;:&quot;v4.33.3&quot;},{&quot;version&quot;:&quot;v4.33.2&quot;},{&quot;version&quot;:&quot;v4.33.0&quot;},{&quot;version&quot;:&quot;v4.32.1&quot;},{&quot;version&quot;:&quot;v4.32.0&quot;},{&quot;version&quot;:&quot;v4.31.0&quot;},{&quot;version&quot;:&quot;v4.30.0&quot;},{&quot;version&quot;:&quot;v4.29.1&quot;},{&quot;version&quot;:&quot;v4.29.0&quot;},{&quot;version&quot;:&quot;v4.28.1&quot;},{&quot;version&quot;:&quot;v4.28.0&quot;},{&quot;version&quot;:&quot;v4.27.2&quot;},{&quot;version&quot;:&quot;v4.27.1&quot;},{&quot;version&quot;:&quot;v4.27.0&quot;},{&quot;version&quot;:&quot;v4.26.1&quot;},{&quot;version&quot;:&quot;v4.26.0&quot;},{&quot;version&quot;:&quot;v4.25.1&quot;},{&quot;version&quot;:&quot;v4.24.0&quot;},{&quot;version&quot;:&quot;v4.23.1&quot;},{&quot;version&quot;:&quot;v4.23.0&quot;},{&quot;version&quot;:&quot;v4.22.2&quot;},{&quot;version&quot;:&quot;v4.22.1&quot;},{&quot;version&quot;:&quot;v4.22.0&quot;},{&quot;version&quot;:&quot;v4.21.3&quot;},{&quot;version&quot;:&quot;v4.21.2&quot;},{&quot;version&quot;:&quot;v4.21.1&quot;},{&quot;version&quot;:&quot;v4.21.0&quot;},{&quot;version&quot;:&quot;v4.20.1&quot;},{&quot;version&quot;:&quot;v4.20.0&quot;},{&quot;version&quot;:&quot;v4.19.4&quot;},{&quot;version&quot;:&quot;v4.19.3&quot;},{&quot;version&quot;:&quot;v4.19.2&quot;},{&quot;version&quot;:&quot;v4.19.0&quot;},{&quot;version&quot;:&quot;v4.18.0&quot;},{&quot;version&quot;:&quot;v4.17.0&quot;},{&quot;version&quot;:&quot;v4.16.2&quot;},{&quot;version&quot;:&quot;v4.16.1&quot;},{&quot;version&quot;:&quot;v4.16.0&quot;},{&quot;version&quot;:&quot;v4.15.0&quot;},{&quot;version&quot;:&quot;v4.14.1&quot;},{&quot;version&quot;:&quot;v4.13.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.5&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.4&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.10.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v
fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Switch between documentation themes </div></div></div> <div class="flex items-center space-x-2.5"><a href="/join"><button class="rounded-lg bg-white bg-gradient-to-br from-gray-100/20 to-gray-200/60 py-1.5 px-5 font-semibold text-gray-700 shadow-sm ring-1 ring-gray-300/60 hover:to-gray-100/70 hover:ring-gray-300/30 active:shadow-inner">Sign Up</button></a> <p class="text-gray-500 dark:text-gray-300">to get started</p></div></div></div> <div class="prose-doc prose relative mx-auto max-w-4xl break-words"> <p></p> <h1 class="relative group"><a id="wav2vec2" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#wav2vec2"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-7ljbqd">Wav2Vec2</span></h1> <h2 class="relative group"><a id="overview" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#overview"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1jsw1pg">Overview</span></h2> <p data-svelte-h="svelte-1an18c0">The Wav2Vec2 model was proposed in <a href="https://arxiv.org/abs/2006.11477" rel="nofollow">wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations</a> by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.</p> <p data-svelte-h="svelte-vfdo9a">The abstract from the paper is the following:</p> <p data-svelte-h="svelte-11lu23i"><em>We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. 
wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.</em></p> <p data-svelte-h="svelte-axv494">Tips:</p> <ul data-svelte-h="svelte-1qqdfdu"><li>Wav2Vec2 is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.</li> <li>Wav2Vec2 model was trained using connectionist temporal classification (CTC) so the model output has to be decoded using <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2CTCTokenizer">Wav2Vec2CTCTokenizer</a>.</li></ul> <p data-svelte-h="svelte-1t6iyb9">This model was contributed by <a href="https://huggingface.co/patrickvonplaten" rel="nofollow">patrickvonplaten</a>.</p> <h2 class="relative group"><a id="resources" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#resources"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-w4zzv6">Resources</span></h2> <p data-svelte-h="svelte-x9q80l">A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Wav2Vec2. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! 
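To make the two tips above concrete, here is a minimal sketch of greedy CTC decoding with a fine-tuned Wav2Vec2 checkpoint. The checkpoint name (`facebook/wav2vec2-base-960h`) and the silent placeholder waveform are illustrative assumptions rather than part of the original page; substitute your own 16 kHz mono audio.

```python
# Sketch: greedy CTC decoding with Wav2Vec2 (checkpoint name and dummy audio are assumptions).
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

# Raw waveform as a float array (here: one second of silence at 16 kHz).
waveform = np.zeros(16000, dtype=np.float32)
inputs = processor(waveform, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, frames, vocab_size)

# Pick the most likely token per frame; the tokenizer collapses repeats and CTC blanks.
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
print(transcription)
```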
The resource should ideally demonstrate something new instead of duplicating an existing resource.</p> <div class="inline-flex items-center border pr-1 rounded-xl "><svg class="mr-1 tag-ico tag-ico-green" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M25 4H10a2.002 2.002 0 0 0-2 2v14.556A3.955 3.955 0 0 0 6 20a4 4 0 1 0 4 4V12h15v8.556A3.954 3.954 0 0 0 23 20a4 4 0 1 0 4 4V6a2.002 2.002 0 0 0-2-2zM6 26a2 2 0 1 1 2-2a2.002 2.002 0 0 1-2 2zm17 0a2 2 0 1 1 2-2a2.003 2.003 0 0 1-2 2zM10 6h15v4H10z" fill="currentColor"></path></svg> <span>Audio Classification</span></div> <ul data-svelte-h="svelte-11ro85d"><li>A notebook on how to <a href="https://colab.research.google.com/github/m3hrdadfi/soxan/blob/main/notebooks/Emotion_recognition_in_Greek_speech_using_Wav2Vec2.ipynb" rel="nofollow">leverage a pretrained Wav2Vec2 model for emotion classification</a>. 🌎</li> <li><a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2ForCTC">Wav2Vec2ForCTC</a> is supported by this <a href="https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification" rel="nofollow">example script</a> and <a href="https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/audio_classification.ipynb" rel="nofollow">notebook</a>.</li> <li><a href="../tasks/audio_classification">Audio classification task guide</a></li></ul> <div class="inline-flex items-center border pr-1 rounded-xl "><svg class="mr-1 tag-ico tag-ico-yellow" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 18 18"><path fill-rule="evenodd" clip-rule="evenodd" d="M8.38893 3.42133C7.9778 3.14662 7.49446 3 7 3C6.33696 3 5.70108 3.26339 5.23223 3.73223C4.76339 4.20107 4.5 4.83696 4.5 5.5C4.5 5.99445 4.64662 6.4778 4.92133 6.88893C5.19603 7.30005 5.58648 7.62048 6.04329 7.8097C6.50011 7.99892 7.00278 8.04843 7.48773 7.95196C7.97268 7.8555 8.41814 7.6174 8.76777 7.26777C9.1174 6.91814 9.3555 6.47268 9.45197 5.98773C9.54843 5.50277 9.49892 5.00011 9.3097 4.54329C9.12048 4.08648 8.80005 3.69603 8.38893 3.42133ZM5.05551 2.58986C5.63108 2.20527 6.30777 2 7 2C7.92826 2 8.8185 2.36875 9.47488 3.02513C10.1313 3.6815 10.5 4.57174 10.5 5.5C10.5 6.19223 10.2947 6.86892 9.91015 7.4445C9.52556 8.02007 8.97894 8.46867 8.33939 8.73358C7.69985 8.99849 6.99612 9.0678 6.31719 8.93275C5.63825 8.7977 5.01461 8.46436 4.52513 7.97487C4.03564 7.48539 3.7023 6.86175 3.56725 6.18282C3.4322 5.50388 3.50152 4.80015 3.76642 4.16061C4.03133 3.52107 4.47993 2.97444 5.05551 2.58986ZM14.85 9.6425L15.7075 10.5C15.8005 10.5927 15.8743 10.7029 15.9245 10.8242C15.9747 10.9456 16.0004 11.0757 16 11.207V16H2V13.5C2.00106 12.5721 2.37015 11.6824 3.0263 11.0263C3.68244 10.3701 4.57207 10.0011 5.5 10H8.5C9.42793 10.0011 10.3176 10.3701 10.9737 11.0263C11.6299 11.6824 11.9989 12.5721 12 13.5V15H15V11.207L14.143 10.35C13.9426 10.4476 13.7229 10.4989 13.5 10.5C13.2033 10.5 12.9133 10.412 12.6666 10.2472C12.42 10.0824 12.2277 9.84811 12.1142 9.57403C12.0006 9.29994 11.9709 8.99834 12.0288 8.70737C12.0867 8.41639 12.2296 8.14912 12.4393 7.93934C12.6491 7.72956 12.9164 7.5867 13.2074 7.52882C13.4983 7.47094 13.7999 7.50065 14.074 7.61418C14.3481 7.72771 14.5824 7.91997 14.7472 8.16665C14.912 8.41332 15 8.70333 15 9C14.9988 
**Automatic Speech Recognition**

- A blog post on [boosting Wav2Vec2 with n-grams in 🤗 Transformers](https://huggingface.co/blog/wav2vec2-with-ngram).
- A blog post on how to [finetune Wav2Vec2 for English ASR with 🤗 Transformers](https://huggingface.co/blog/fine-tune-wav2vec2-english).
- A blog post on [finetuning XLS-R for Multi-Lingual ASR with 🤗 Transformers](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2).
- A notebook on how to [create YouTube captions from any video by transcribing audio with Wav2Vec2](https://colab.research.google.com/github/Muennighoff/ytclipcc/blob/main/wav2vec_youtube_captions.ipynb). 🌎
- [Wav2Vec2ForCTC](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2ForCTC) is supported by a notebook on [how to finetune a speech recognition model in English](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/speech_recognition.ipynb), and [how to finetune a speech recognition model in any language](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multi_lingual_speech_recognition.ipynb).
- [Automatic speech recognition task guide](../tasks/asr)

🚀 Deploy

- A blog post on how to deploy Wav2Vec2 for [Automatic Speech Recognition with Hugging Face's Transformers & Amazon SageMaker](https://www.philschmid.de/automatic-speech-recognition-sagemaker).
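For orientation alongside the fine-tuning and deployment resources above, here is a minimal inference sketch using the `pipeline` API with a pretrained English CTC checkpoint; the audio path is a placeholder.

```python
from transformers import pipeline

# Minimal sketch: transcribe English speech with a pretrained Wav2Vec2 CTC checkpoint.
# Replace the audio path (and, for other languages, the checkpoint) as needed.
asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")

result = asr("path/to/audio.wav")  # accepts a file path, URL, or raw waveform array
print(result["text"])
```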
## Wav2Vec2Config

### class transformers.Wav2Vec2Config

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/configuration_wav2vec2.py#L32)

( vocab_size = 32, hidden_size = 768, num_hidden_layers = 12, num_attention_heads = 12, intermediate_size = 3072, hidden_act = 'gelu', hidden_dropout = 0.1, activation_dropout = 0.1, attention_dropout = 0.1, feat_proj_dropout = 0.0, feat_quantizer_dropout = 0.0, final_dropout = 0.1, layerdrop = 0.1, initializer_range = 0.02, layer_norm_eps = 1e-05, feat_extract_norm = 'group', feat_extract_activation = 'gelu', conv_dim = (512, 512, 512, 512, 512, 512, 512), conv_stride = (5, 2, 2, 2, 2, 2, 2), conv_kernel = (10, 3, 3, 3, 3, 2, 2), conv_bias = False, num_conv_pos_embeddings = 128, num_conv_pos_embedding_groups = 16, do_stable_layer_norm = False, apply_spec_augment = True, mask_time_prob = 0.05, mask_time_length = 10, mask_time_min_masks = 2, mask_feature_prob = 0.0, mask_feature_length = 10, mask_feature_min_masks = 0, num_codevectors_per_group = 320, num_codevector_groups = 2, contrastive_logits_temperature = 0.1, num_negatives = 100, codevector_dim = 256, proj_codevector_dim = 256, diversity_loss_weight = 0.1, ctc_loss_reduction = 'sum', ctc_zero_infinity = False, use_weighted_layer_sum = False, classifier_proj_size = 256, tdnn_dim = (512, 512, 512, 512, 1500), tdnn_kernel = (5, 3, 3, 1, 1), tdnn_dilation = (1, 2, 3, 1, 1), xvector_output_dim = 512, pad_token_id = 0, bos_token_id = 1, eos_token_id = 2, add_adapter = False, adapter_kernel_size = 3, adapter_stride = 2, num_adapter_layers = 3, output_hidden_size = None, adapter_attn_dim = None, **kwargs )
text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.vocab_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.vocab_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>vocab_size</strong> (<code>int</code>, <em>optional</em>, defaults to 32) — Vocabulary size of the Wav2Vec2 model. Defines the number of different tokens that can be represented by the <code>inputs_ids</code> passed when calling <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Model">Wav2Vec2Model</a> or <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.TFWav2Vec2Model">TFWav2Vec2Model</a>. Vocabulary size of the model. 
Defines the different tokens that can be represented by the <em>inputs_ids</em> passed to the forward method of <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Model">Wav2Vec2Model</a>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.hidden_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.hidden_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>hidden_size</strong> (<code>int</code>, <em>optional</em>, defaults to 768) — Dimensionality of the encoder layers and the pooler layer.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.num_hidden_layers" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.num_hidden_layers"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>num_hidden_layers</strong> (<code>int</code>, <em>optional</em>, defaults to 12) — Number of hidden layers in the Transformer encoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.num_attention_heads" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.num_attention_heads"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 
1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>num_attention_heads</strong> (<code>int</code>, <em>optional</em>, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.intermediate_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.intermediate_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>intermediate_size</strong> (<code>int</code>, <em>optional</em>, defaults to 3072) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.hidden_act" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.hidden_act"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>hidden_act</strong> (<code>str</code> or <code>function</code>, <em>optional</em>, defaults to <code>"gelu"</code>) — The non-linear activation function (function or string) in the encoder and pooler. 
If string, <code>"gelu"</code>, <code>"relu"</code>, <code>"selu"</code> and <code>"gelu_new"</code> are supported.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.hidden_dropout" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.hidden_dropout"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>hidden_dropout</strong> (<code>float</code>, <em>optional</em>, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.activation_dropout" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.activation_dropout"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>activation_dropout</strong> (<code>float</code>, <em>optional</em>, defaults to 0.1) — The dropout ratio for activations inside the fully connected layer.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.attention_dropout" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.attention_dropout"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 
84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_dropout</strong> (<code>float</code>, <em>optional</em>, defaults to 0.1) — The dropout ratio for the attention probabilities.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.final_dropout" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.final_dropout"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>final_dropout</strong> (<code>float</code>, <em>optional</em>, defaults to 0.1) — The dropout probability for the final projection layer of <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2ForCTC">Wav2Vec2ForCTC</a>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.layerdrop" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.layerdrop"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>layerdrop</strong> (<code>float</code>, <em>optional</em>, defaults to 0.1) — The LayerDrop probability. 
See the [LayerDrop paper](see <a href="https://arxiv.org/abs/1909.11556" rel="nofollow">https://arxiv.org/abs/1909.11556</a>) for more details.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.initializer_range" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.initializer_range"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>initializer_range</strong> (<code>float</code>, <em>optional</em>, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.layer_norm_eps" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.layer_norm_eps"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>layer_norm_eps</strong> (<code>float</code>, <em>optional</em>, defaults to 1e-12) — The epsilon used by the layer normalization layers.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.feat_extract_norm" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.feat_extract_norm"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 
11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>feat_extract_norm</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"group"</code>) — The norm to be applied to 1D convolutional layers in feature encoder. One of <code>"group"</code> for group normalization of only the first 1D convolutional layer or <code>"layer"</code> for layer normalization of all 1D convolutional layers.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.feat_proj_dropout" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.feat_proj_dropout"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>feat_proj_dropout</strong> (<code>float</code>, <em>optional</em>, defaults to 0.0) — The dropout probability for output of the feature encoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.feat_extract_activation" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.feat_extract_activation"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>feat_extract_activation</strong> (<code>str, </code>optional<code>, defaults to </code>“gelu”<code>) -- The non-linear activation function (function or string) in the 1D convolutional layers of the feature extractor. 
If string, </code>“gelu”<code>, </code>“relu”<code>, </code>“selu”<code>and</code>“gelu_new”` are supported.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.feat_quantizer_dropout" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.feat_quantizer_dropout"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>feat_quantizer_dropout</strong> (<code>float</code>, <em>optional</em>, defaults to 0.0) — The dropout probabilitiy for quantized feature encoder states.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.conv_dim" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.conv_dim"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>conv_dim</strong> (<code>Tuple[int]</code> or <code>List[int]</code>, <em>optional</em>, defaults to <code>(512, 512, 512, 512, 512, 512, 512)</code>) — A tuple of integers defining the number of input and output channels of each 1D convolutional layer in the feature encoder. 
The length of <em>conv_dim</em> defines the number of 1D convolutional layers.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.conv_stride" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.conv_stride"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>conv_stride</strong> (<code>Tuple[int]</code> or <code>List[int]</code>, <em>optional</em>, defaults to <code>(5, 2, 2, 2, 2, 2, 2)</code>) — A tuple of integers defining the stride of each 1D convolutional layer in the feature encoder. The length of <em>conv_stride</em> defines the number of convolutional layers and has to match the length of <em>conv_dim</em>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.conv_kernel" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.conv_kernel"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>conv_kernel</strong> (<code>Tuple[int]</code> or <code>List[int]</code>, <em>optional</em>, defaults to <code>(10, 3, 3, 3, 3, 3, 3)</code>) — A tuple of integers defining the kernel size of each 1D convolutional layer in the feature encoder. 
The length of <em>conv_kernel</em> defines the number of convolutional layers and has to match the length of <em>conv_dim</em>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.conv_bias" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.conv_bias"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>conv_bias</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether the 1D convolutional layers have a bias.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.num_conv_pos_embeddings" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.num_conv_pos_embeddings"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>num_conv_pos_embeddings</strong> (<code>int</code>, <em>optional</em>, defaults to 128) — Number of convolutional positional embeddings. 
Defines the kernel size of 1D convolutional positional embeddings layer.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.num_conv_pos_embedding_groups" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.num_conv_pos_embedding_groups"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>num_conv_pos_embedding_groups</strong> (<code>int</code>, <em>optional</em>, defaults to 16) — Number of groups of 1D convolutional positional embeddings layer.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.do_stable_layer_norm" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.do_stable_layer_norm"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>do_stable_layer_norm</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether to apply <em>stable</em> layer norm architecture of the Transformer encoder. 
<code>do_stable_layer_norm is True</code> corresponds to applying layer norm before the attention layer, whereas <code>do_stable_layer_norm is False</code> corresponds to applying layer norm after the attention layer.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.apply_spec_augment" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.apply_spec_augment"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>apply_spec_augment</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether to apply <em>SpecAugment</em> data augmentation to the outputs of the feature encoder. For reference see <a href="https://arxiv.org/abs/1904.08779" rel="nofollow">SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition</a>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.mask_time_prob" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.mask_time_prob"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>mask_time_prob</strong> (<code>float</code>, <em>optional</em>, defaults to 0.05) — Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking procecure generates ”mask_time_prob<em>len(time_axis)/mask_time_length” independent masks over the axis. If reasoning from the propability of each feature vector to be chosen as the start of the vector span to be masked, </em>mask_time_prob<em> should be `prob_vector_start</em>mask_time_length<code>. Note that overlap may decrease the actual percentage of masked vectors. 
This is only relevant if </code>apply_spec_augment is True`.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.mask_time_length" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.mask_time_length"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>mask_time_length</strong> (<code>int</code>, <em>optional</em>, defaults to 10) — Length of vector span along the time axis.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.mask_time_min_masks" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.mask_time_min_masks"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>mask_time_min_masks</strong> (<code>int</code>, <em>optional</em>, defaults to 2), — The minimum number of masks of length <code>mask_feature_length</code> generated along the time axis, each time step, irrespectively of <code>mask_feature_prob</code>. 
Only relevant if ”mask_time_prob*len(time_axis)/mask_time_length &lt; mask_time_min_masks”</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.mask_feature_prob" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.mask_feature_prob"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>mask_feature_prob</strong> (<code>float</code>, <em>optional</em>, defaults to 0.0) — Percentage (between 0 and 1) of all feature vectors along the feature axis which will be masked. The masking procecure generates ”mask_feature_prob<em>len(feature_axis)/mask_time_length” independent masks over the axis. If reasoning from the propability of each feature vector to be chosen as the start of the vector span to be masked, </em>mask_feature_prob<em> should be `prob_vector_start</em>mask_feature_length<code>. Note that overlap may decrease the actual percentage of masked vectors. 
This is only relevant if </code>apply_spec_augment is True`.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.mask_feature_length" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.mask_feature_length"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>mask_feature_length</strong> (<code>int</code>, <em>optional</em>, defaults to 10) — Length of vector span along the feature axis.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.mask_feature_min_masks" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.mask_feature_min_masks"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>mask_feature_min_masks</strong> (<code>int</code>, <em>optional</em>, defaults to 0), — The minimum number of masks of length <code>mask_feature_length</code> generated along the feature axis, each time step, irrespectively of <code>mask_feature_prob</code>. 
Only relevant if ”mask_feature_prob*len(feature_axis)/mask_feature_length &lt; mask_feature_min_masks”</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.num_codevectors_per_group" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.num_codevectors_per_group"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>num_codevectors_per_group</strong> (<code>int</code>, <em>optional</em>, defaults to 320) — Number of entries in each quantization codebook (group).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.num_codevector_groups" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.num_codevector_groups"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>num_codevector_groups</strong> (<code>int</code>, <em>optional</em>, defaults to 2) — Number of codevector groups for product codevector quantization.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.contrastive_logits_temperature" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.contrastive_logits_temperature"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 
0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>contrastive_logits_temperature</strong> (<code>float</code>, <em>optional</em>, defaults to 0.1) — The temperature <em>kappa</em> in the contrastive loss.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.feat_quantizer_dropout" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.feat_quantizer_dropout"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>feat_quantizer_dropout</strong> (<code>float</code>, <em>optional</em>, defaults to 0.0) — The dropout probabilitiy for the output of the feature encoder that’s used by the quantizer.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.num_negatives" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.num_negatives"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>num_negatives</strong> (<code>int</code>, <em>optional</em>, defaults to 100) — Number of negative samples for the contrastive loss.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.codevector_dim" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 
with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.codevector_dim"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>codevector_dim</strong> (<code>int</code>, <em>optional</em>, defaults to 256) — Dimensionality of the quantized feature vectors.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.proj_codevector_dim" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.proj_codevector_dim"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>proj_codevector_dim</strong> (<code>int</code>, <em>optional</em>, defaults to 256) — Dimensionality of the final projection of both the quantized and the transformer features.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.diversity_loss_weight" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.diversity_loss_weight"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" 
fill="currentColor"></path></svg></span></a> <span><strong>diversity_loss_weight</strong> (<code>int</code>, <em>optional</em>, defaults to 0.1) — The weight of the codebook diversity loss component.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.ctc_loss_reduction" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.ctc_loss_reduction"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>ctc_loss_reduction</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"sum"</code>) — Specifies the reduction to apply to the output of <code>torch.nn.CTCLoss</code>. Only relevant when training an instance of <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2ForCTC">Wav2Vec2ForCTC</a>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.ctc_zero_infinity" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.ctc_zero_infinity"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>ctc_zero_infinity</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether to zero infinite losses and the associated gradients of <code>torch.nn.CTCLoss</code>. Infinite losses mainly occur when the inputs are too short to be aligned to the targets. 
Only relevant when training an instance of <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2ForCTC">Wav2Vec2ForCTC</a>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.use_weighted_layer_sum" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.use_weighted_layer_sum"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>use_weighted_layer_sum</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether to use a weighted average of layer outputs with learned weights. Only relevant when using an instance of <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2ForSequenceClassification">Wav2Vec2ForSequenceClassification</a>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.classifier_proj_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.classifier_proj_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>classifier_proj_size</strong> (<code>int</code>, <em>optional</em>, defaults to 256) — Dimensionality of the projection before token mean-pooling for classification.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.tdnn_dim" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.tdnn_dim"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" 
role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>tdnn_dim</strong> (<code>Tuple[int]</code> or <code>List[int]</code>, <em>optional</em>, defaults to <code>(512, 512, 512, 512, 1500)</code>) — A tuple of integers defining the number of output channels of each 1D convolutional layer in the <em>TDNN</em> module of the <em>XVector</em> model. The length of <em>tdnn_dim</em> defines the number of <em>TDNN</em> layers.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.tdnn_kernel" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.tdnn_kernel"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>tdnn_kernel</strong> (<code>Tuple[int]</code> or <code>List[int]</code>, <em>optional</em>, defaults to <code>(5, 3, 3, 1, 1)</code>) — A tuple of integers defining the kernel size of each 1D convolutional layer in the <em>TDNN</em> module of the <em>XVector</em> model. 
The length of <em>tdnn_kernel</em> has to match the length of <em>tdnn_dim</em>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.tdnn_dilation" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.tdnn_dilation"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>tdnn_dilation</strong> (<code>Tuple[int]</code> or <code>List[int]</code>, <em>optional</em>, defaults to <code>(1, 2, 3, 1, 1)</code>) — A tuple of integers defining the dilation factor of each 1D convolutional layer in <em>TDNN</em> module of the <em>XVector</em> model. The length of <em>tdnn_dilation</em> has to match the length of <em>tdnn_dim</em>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.xvector_output_dim" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.xvector_output_dim"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>xvector_output_dim</strong> (<code>int</code>, <em>optional</em>, defaults to 512) — Dimensionality of the <em>XVector</em> embedding vectors.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.add_adapter" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.add_adapter"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 
88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>add_adapter</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether a convolutional network should be stacked on top of the Wav2Vec2 Encoder. Can be very useful for warm-starting Wav2Vec2 for SpeechEncoderDecoder models.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.adapter_kernel_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.adapter_kernel_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>adapter_kernel_size</strong> (<code>int</code>, <em>optional</em>, defaults to 3) — Kernel size of the convolutional layers in the adapter network. Only relevant if <code>add_adapter is True</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.adapter_stride" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.adapter_stride"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>adapter_stride</strong> (<code>int</code>, <em>optional</em>, defaults to 2) — Stride of the convolutional layers in the adapter network. 
Only relevant if <code>add_adapter is True</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.num_adapter_layers" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.num_adapter_layers"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>num_adapter_layers</strong> (<code>int</code>, <em>optional</em>, defaults to 3) — Number of convolutional layers that should be used in the adapter network. Only relevant if <code>add_adapter is True</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.adapter_attn_dim" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.adapter_attn_dim"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>adapter_attn_dim</strong> (<code>int</code>, <em>optional</em>) — Dimension of the attention adapter weights to be used in each attention block. 
An example of a model using attention adapters is <a href="https://huggingface.co/facebook/mms-1b-all" rel="nofollow">facebook/mms-1b-all</a>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Config.output_hidden_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.output_hidden_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_size</strong> (<code>int</code>, <em>optional</em>) — Dimensionality of the encoder output layer. If not defined, this defaults to <em>hidden-size</em>. Only relevant if <code>add_adapter is True</code>.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-7q3exd">This is the configuration class to store the configuration of a <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Model">Wav2Vec2Model</a>. It is used to instantiate an Wav2Vec2 model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Wav2Vec2 <a href="https://huggingface.co/facebook/wav2vec2-base-960h" rel="nofollow">facebook/wav2vec2-base-960h</a> architecture.</p> <p data-svelte-h="svelte-10kqkkl">Configuration objects inherit from <a href="/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig">PretrainedConfig</a> and can be used to control the model outputs. 
Read the documentation from <a href="/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig">PretrainedConfig</a> for more information.</p> <div class="relative group rounded-md"><a id="transformers.Wav2Vec2Config.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Config.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> Wav2Vec2Config, Wav2Vec2Model <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># Initializing a Wav2Vec2 facebook/wav2vec2-base-960h style configuration</span> <span class="hljs-meta">&gt;&gt;&gt; </span>configuration = Wav2Vec2Config() <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># Initializing a model (with random weights) from the facebook/wav2vec2-base-960h style configuration</span> <span class="hljs-meta">&gt;&gt;&gt; </span>model = Wav2Vec2Model(configuration) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># Accessing the model configuration</span> <span class="hljs-meta">&gt;&gt;&gt; </span>configuration = model.config</pre></div></div></div> <h2 class="relative group"><a id="transformers.Wav2Vec2CTCTokenizer" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 
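The masking and adapter arguments documented above are plain keyword arguments of `Wav2Vec2Config`. The snippet below is a minimal sketch, with illustrative (not recommended) values, showing how the SpecAugment-related arguments and the adapter arguments fit together, and how the expected number of time masks follows from the formula given above:

```python
>>> from transformers import Wav2Vec2Config, Wav2Vec2Model

>>> # Illustrative values: stronger SpecAugment time masking plus a convolutional adapter
>>> configuration = Wav2Vec2Config(
...     apply_spec_augment=True,
...     mask_time_prob=0.1,     # roughly 10% of the time axis gets masked
...     mask_time_length=10,    # each mask spans 10 frames
...     mask_time_min_masks=2,  # lower bound on the number of time masks
...     add_adapter=True,       # stack a small convolutional network on the encoder output
...     adapter_kernel_size=3,
...     adapter_stride=2,
...     num_adapter_layers=3,
... )
>>> model = Wav2Vec2Model(configuration)

>>> # Expected number of independent time masks for a 500-frame sequence:
>>> # mask_time_prob * len(time_axis) / mask_time_length = 0.1 * 500 / 10 = 5
```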
with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2CTCTokenizer"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-bfx7t2">Wav2Vec2CTCTokenizer</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.Wav2Vec2CTCTokenizer"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">Wav2Vec2CTCTokenizer</span></span></h3> <a id="transformers.Wav2Vec2CTCTokenizer" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.Wav2Vec2CTCTokenizer"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/tokenization_wav2vec2.py#L127" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden 
md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">vocab_file<span class="opacity-60"></span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">bos_token<span class="opacity-60"> = '&lt;s&gt;'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">eos_token<span class="opacity-60"> = '&lt;/s&gt;'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">unk_token<span class="opacity-60"> = '&lt;unk&gt;'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">pad_token<span class="opacity-60"> = '&lt;pad&gt;'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">word_delimiter_token<span class="opacity-60"> = '|'</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">replace_word_delimiter_char<span class="opacity-60"> = ' '</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">do_lower_case<span class="opacity-60"> = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">target_lang<span class="opacity-60"> = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 8 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2CTCTokenizer.vocab_file" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2CTCTokenizer.vocab_file"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 
11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>vocab_file</strong> (<code>str</code>) — File containing the vocabulary.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2CTCTokenizer.bos_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2CTCTokenizer.bos_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>bos_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;s&gt;"</code>) — The beginning of sentence token.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2CTCTokenizer.eos_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2CTCTokenizer.eos_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>eos_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;/s&gt;"</code>) — The end of sentence token.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2CTCTokenizer.unk_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" 
href="#transformers.Wav2Vec2CTCTokenizer.unk_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>unk_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;unk&gt;"</code>) — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2CTCTokenizer.pad_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2CTCTokenizer.pad_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>pad_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;pad&gt;"</code>) — The token used for padding, for example when batching sequences of different lengths.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2CTCTokenizer.word_delimiter_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2CTCTokenizer.word_delimiter_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 
0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>word_delimiter_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"|"</code>) — The token used for defining the end of a word.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2CTCTokenizer.do_lower_case" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2CTCTokenizer.do_lower_case"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>do_lower_case</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether or not to accept lowercase input and lowercase the output when decoding.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2CTCTokenizer.target_lang" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2CTCTokenizer.target_lang"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>target_lang</strong> (<code>str</code>, <em>optional</em>) — A target language the tokenizer should set by default. 
<code>target_lang</code> has to be defined for multi-lingual, nested vocabulary such as <a href="https://huggingface.co/facebook/mms-1b-all" rel="nofollow">facebook/mms-1b-all</a>.<p></p> <p>**kwargs — Additional keyword arguments passed along to <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer">PreTrainedTokenizer</a></p></span></span> </li></ul> </div></div> <p data-svelte-h="svelte-gxvlja">Constructs a Wav2Vec2CTC tokenizer.</p> <p data-svelte-h="svelte-1ery4iu">This tokenizer inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer">PreTrainedTokenizer</a> which contains some of the main methods. Users should refer to the superclass for more information regarding such methods.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.Wav2Vec2CTCTokenizer.__call__"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>__call__</span></h4> <a id="transformers.Wav2Vec2CTCTokenizer.__call__" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.Wav2Vec2CTCTokenizer.__call__"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/tokenization_utils_base.py#L2732" target="_blank"><span 
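The snippet below is a minimal sketch of how the tokenizer is typically instantiated from a pretrained checkpoint; the checkpoint names and the `"fra"` language code are used purely as illustrations.

```python
from transformers import Wav2Vec2CTCTokenizer

# Load the tokenizer of an English ASR checkpoint (illustrative checkpoint name).
tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("facebook/wav2vec2-base-960h")

# For a multi-lingual, nested vocabulary, pick the sub-vocabulary via `target_lang`
# (ISO 639-3 code; "fra" selects French in this illustrative MMS checkpoint).
mms_tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("facebook/mms-1b-all", target_lang="fra")
```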
data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">text<span class="opacity-60">: typing.Union[str, typing.List[str], typing.List[typing.List[str]]] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">text_pair<span class="opacity-60">: typing.Union[str, typing.List[str], typing.List[typing.List[str]], NoneType] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">text_target<span class="opacity-60">: typing.Union[str, typing.List[str], typing.List[typing.List[str]]] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">text_pair_target<span class="opacity-60">: typing.Union[str, typing.List[str], typing.List[typing.List[str]], NoneType] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">add_special_tokens<span class="opacity-60">: bool = True</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">padding<span class="opacity-60">: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">truncation<span class="opacity-60">: typing.Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">max_length<span class="opacity-60">: typing.Optional[int] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">stride<span class="opacity-60">: int = 0</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">is_split_into_words<span class="opacity-60">: bool = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">pad_to_multiple_of<span class="opacity-60">: typing.Optional[int] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_tensors<span class="opacity-60">: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_token_type_ids<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white 
dark:hover:bg-white dark:hover:text-black">return_attention_mask<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_overflowing_tokens<span class="opacity-60">: bool = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_special_tokens_mask<span class="opacity-60">: bool = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_offsets_mapping<span class="opacity-60">: bool = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_length<span class="opacity-60">: bool = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">verbose<span class="opacity-60">: bool = True</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.BatchEncoding">BatchEncoding</a></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 19 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2CTCTokenizer.__call__.text" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2CTCTokenizer.__call__.text"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>text</strong> (<code>str</code>, <code>List[str]</code>, <code>List[List[str]]</code>, 
<em>optional</em>) — The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set <code>is_split_into_words=True</code> (to lift the ambiguity with a batch of sequences).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2CTCTokenizer.__call__.text_pair" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2CTCTokenizer.__call__.text_pair"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>text_pair</strong> (<code>str</code>, <code>List[str]</code>, <code>List[List[str]]</code>, <em>optional</em>) — The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set <code>is_split_into_words=True</code> (to lift the ambiguity with a batch of sequences).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2CTCTokenizer.__call__.text_target" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2CTCTokenizer.__call__.text_target"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>text_target</strong> (<code>str</code>, <code>List[str]</code>, <code>List[List[str]]</code>, <em>optional</em>) — The sequence or batch of sequences to be encoded as target texts. Each sequence can be a string or a list of strings (pretokenized string). 
If the sequences are provided as list of strings (pretokenized), you must set <code>is_split_into_words=True</code> (to lift the ambiguity with a batch of sequences).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2CTCTokenizer.__call__.text_pair_target" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2CTCTokenizer.__call__.text_pair_target"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>text_pair_target</strong> (<code>str</code>, <code>List[str]</code>, <code>List[List[str]]</code>, <em>optional</em>) — The sequence or batch of sequences to be encoded as target texts. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set <code>is_split_into_words=True</code> (to lift the ambiguity with a batch of sequences).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2CTCTokenizer.__call__.add_special_tokens" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2CTCTokenizer.__call__.add_special_tokens"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>add_special_tokens</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether or not to add special tokens when encoding the sequences. This will use the underlying <code>PretrainedTokenizerBase.build_inputs_with_special_tokens</code> function, which defines which tokens are automatically added to the input ids. 
This is usefull if you want to add <code>bos</code> or <code>eos</code> tokens automatically.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2CTCTokenizer.__call__.padding" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2CTCTokenizer.__call__.padding"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>padding</strong> (<code>bool</code>, <code>str</code> or <a href="/docs/transformers/v4.34.0/en/internal/file_utils#transformers.utils.PaddingStrategy">PaddingStrategy</a>, <em>optional</em>, defaults to <code>False</code>) — Activates and controls padding. Accepts the following values:<p></p> <ul> <li><code>True</code> or <code>'longest'</code>: Pad to the longest sequence in the batch (or no padding if only a single sequence if provided).</li> <li><code>'max_length'</code>: Pad to a maximum length specified with the argument <code>max_length</code> or to the maximum acceptable input length for the model if that argument is not provided.</li> <li><code>False</code> or <code>'do_not_pad'</code> (default): No padding (i.e., can output a batch with sequences of different lengths).</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2CTCTokenizer.__call__.truncation" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2CTCTokenizer.__call__.truncation"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>truncation</strong> (<code>bool</code>, <code>str</code> or <a href="/docs/transformers/v4.34.0/en/internal/tokenization_utils#transformers.tokenization_utils_base.TruncationStrategy">TruncationStrategy</a>, <em>optional</em>, defaults to <code>False</code>) — Activates 
and controls truncation. Accepts the following values:<p></p> <ul> <li><code>True</code> or <code>'longest_first'</code>: Truncate to a maximum length specified with the argument <code>max_length</code> or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.</li> <li><code>'only_first'</code>: Truncate to a maximum length specified with the argument <code>max_length</code> or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.</li> <li><code>'only_second'</code>: Truncate to a maximum length specified with the argument <code>max_length</code> or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.</li> <li><code>False</code> or <code>'do_not_truncate'</code> (default): No truncation (i.e., can output batch with sequence lengths greater than the model maximum admissible input size).</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2CTCTokenizer.__call__.max_length" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2CTCTokenizer.__call__.max_length"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>max_length</strong> (<code>int</code>, <em>optional</em>) — Controls the maximum length to use by one of the truncation/padding parameters.<p></p> <p>If left unset or set to <code>None</code>, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. 
If the model has no specific maximum input length (like XLNet) truncation/padding to a maximum length will be deactivated.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2CTCTokenizer.__call__.stride" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2CTCTokenizer.__call__.stride"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>stride</strong> (<code>int</code>, <em>optional</em>, defaults to 0) — If set to a number along with <code>max_length</code>, the overflowing tokens returned when <code>return_overflowing_tokens=True</code> will contain some tokens from the end of the truncated sequence returned to provide some overlap between truncated and overflowing sequences. The value of this argument defines the number of overlapping tokens.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2CTCTokenizer.__call__.is_split_into_words" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2CTCTokenizer.__call__.is_split_into_words"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>is_split_into_words</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether or not the input is already pre-tokenized (e.g., split into words). If set to <code>True</code>, the tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace) which it will tokenize. 
This is useful for NER or token classification.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2CTCTokenizer.__call__.pad_to_multiple_of" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2CTCTokenizer.__call__.pad_to_multiple_of"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>pad_to_multiple_of</strong> (<code>int</code>, <em>optional</em>) — If set will pad the sequence to a multiple of the provided value. Requires <code>padding</code> to be activated. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability <code>&gt;= 7.5</code> (Volta).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2CTCTokenizer.__call__.return_tensors" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2CTCTokenizer.__call__.return_tensors"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_tensors</strong> (<code>str</code> or <a href="/docs/transformers/v4.34.0/en/internal/file_utils#transformers.TensorType">TensorType</a>, <em>optional</em>) — If set, will return tensors instead of list of python integers. 
Acceptable values are:<p></p> <ul> <li><code>'tf'</code>: Return TensorFlow <code>tf.constant</code> objects.</li> <li><code>'pt'</code>: Return PyTorch <code>torch.Tensor</code> objects.</li> <li><code>'np'</code>: Return Numpy <code>np.ndarray</code> objects.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2CTCTokenizer.__call__.return_token_type_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2CTCTokenizer.__call__.return_token_type_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_token_type_ids</strong> (<code>bool</code>, <em>optional</em>) — Whether to return token type IDs. If left to the default, will return the token type IDs according to the specific tokenizer’s default, defined by the <code>return_outputs</code> attribute.<p></p> <p><a href="../glossary#token-type-ids">What are token type IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2CTCTokenizer.__call__.return_attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2CTCTokenizer.__call__.return_attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_attention_mask</strong> (<code>bool</code>, <em>optional</em>) — Whether to return the attention mask. 
If left to the default, will return the attention mask according to the specific tokenizer’s default, defined by the <code>return_outputs</code> attribute.<p></p> <p><a href="../glossary#attention-mask">What are attention masks?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2CTCTokenizer.__call__.return_overflowing_tokens" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2CTCTokenizer.__call__.return_overflowing_tokens"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_overflowing_tokens</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether or not to return overflowing token sequences. If a pair of sequences of input ids (or a batch of pairs) is provided with <code>truncation_strategy = longest_first</code> or <code>True</code>, an error is raised instead of returning overflowing tokens.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2CTCTokenizer.__call__.return_special_tokens_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2CTCTokenizer.__call__.return_special_tokens_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_special_tokens_mask</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether or not to return special tokens mask information.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2CTCTokenizer.__call__.return_offsets_mapping" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 
with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2CTCTokenizer.__call__.return_offsets_mapping"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_offsets_mapping</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether or not to return <code>(char_start, char_end)</code> for each token.<p></p> <p>This is only available on fast tokenizers inheriting from <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast">PreTrainedTokenizerFast</a>, if using Python’s tokenizer, this method will raise <code>NotImplementedError</code>.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2CTCTokenizer.__call__.return_length" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2CTCTokenizer.__call__.return_length"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_length</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether or not to return the lengths of the encoded inputs.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2CTCTokenizer.__call__.verbose" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2CTCTokenizer.__call__.verbose"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 
1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>verbose</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether or not to print more information and warnings. **kwargs — passed to the <code>self.tokenize()</code> method</span></span> </li></ul> <div id="transformers.Wav2Vec2CTCTokenizer.__call__.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.BatchEncoding">BatchEncoding</a></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.BatchEncoding">BatchEncoding</a> with the following fields:</p> <ul> <li> <p><strong>input_ids</strong> — List of token ids to be fed to a model.</p> <p><a href="../glossary#input-ids">What are input IDs?</a></p> </li> <li> <p><strong>token_type_ids</strong> — List of token type ids to be fed to a model (when <code>return_token_type_ids=True</code> or if <em>“token_type_ids”</em> is in <code>self.model_input_names</code>).</p> <p><a href="../glossary#token-type-ids">What are token type IDs?</a></p> </li> <li> <p><strong>attention_mask</strong> — List of indices specifying which tokens should be attended to by the model (when <code>return_attention_mask=True</code> or if <em>“attention_mask”</em> is in <code>self.model_input_names</code>).</p> <p><a href="../glossary#attention-mask">What are attention masks?</a></p> </li> <li> <p><strong>overflowing_tokens</strong> — List of overflowing tokens sequences (when a <code>max_length</code> is specified and <code>return_overflowing_tokens=True</code>).</p> </li> <li> <p><strong>num_truncated_tokens</strong> — Number of tokens truncated (when a <code>max_length</code> is specified and <code>return_overflowing_tokens=True</code>).</p> </li> <li> <p><strong>special_tokens_mask</strong> — List of 0s and 1s, with 1 specifying added special tokens and 0 specifying regular sequence tokens (when <code>add_special_tokens=True</code> and <code>return_special_tokens_mask=True</code>).</p> </li> <li> <p><strong>length</strong> — The length of the inputs (when <code>return_length=True</code>)</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-kpxj0c">Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of sequences.</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.Wav2Vec2CTCTokenizer.save_vocabulary"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 
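As a brief illustration of the arguments above, the following is a minimal sketch that encodes a small batch of transcriptions with padding and PyTorch tensors; the checkpoint name and the example strings are assumptions made only for the example.

```python
from transformers import Wav2Vec2CTCTokenizer

# Illustrative checkpoint; any Wav2Vec2 CTC checkpoint that ships a tokenizer works the same way.
tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("facebook/wav2vec2-base-960h")

# Encode a batch of two transcriptions, padding to the longest one and returning PyTorch tensors.
batch = tokenizer(
    ["HELLO WORLD", "A SOMEWHAT LONGER TRANSCRIPTION"],
    padding=True,
    return_tensors="pt",
)

print(batch["input_ids"].shape)  # (2, length of the longest encoded sequence)
print(batch["attention_mask"])   # 1 for real tokens, 0 for padded positions
```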
### `save_vocabulary`

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/tokenization_wav2vec2.py#L649)

```
save_vocabulary(save_directory: str, filename_prefix: Optional[str] = None)
```
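A minimal sketch of writing the vocabulary files to disk; the directory name and prefix below are placeholders chosen for the example.

```python
import os

from transformers import Wav2Vec2CTCTokenizer

# Illustrative checkpoint name.
tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("facebook/wav2vec2-base-960h")

# `save_vocabulary` expects an existing directory and returns the path(s) of the written file(s).
os.makedirs("./my_wav2vec2_tokenizer", exist_ok=True)
vocab_files = tokenizer.save_vocabulary("./my_wav2vec2_tokenizer", filename_prefix="demo")
print(vocab_files)
```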
### `decode`

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/tokenization_wav2vec2.py#L544)

```
decode(
    token_ids: Union[int, List[int], np.ndarray, torch.Tensor, tf.Tensor],
    skip_special_tokens: bool = False,
    clean_up_tokenization_spaces: bool = None,
    output_char_offsets: bool = False,
    output_word_offsets: bool = False,
    **kwargs
) → str or Wav2Vec2CTCTokenizerOutput
```

**Parameters**

- **token_ids** (`Union[int, List[int], np.ndarray, torch.Tensor, tf.Tensor]`) — List of tokenized input ids. Can be obtained using the `__call__` method.
- **skip_special_tokens** (`bool`, *optional*, defaults to `False`) — Whether or not to remove special tokens in the decoding.
- **clean_up_tokenization_spaces** (`bool`, *optional*) — Whether or not to clean up the tokenization spaces.
- **output_char_offsets** (`bool`, *optional*, defaults to `False`) — Whether or not to output character offsets. Character offsets can be used in combination with the sampling rate and model downsampling rate to compute the time-stamps of transcribed characters. Please take a look at the example below to better understand how to make use of `output_char_offsets`.
- **output_word_offsets** (`bool`, *optional*, defaults to `False`) — Whether or not to output word offsets. Word offsets can be used in combination with the sampling rate and model downsampling rate to compute the time-stamps of transcribed words. Please take a look at the example below to better understand how to make use of `output_word_offsets`.
- **kwargs** (additional keyword arguments, *optional*) — Will be passed to the underlying model specific decode method.

**Returns:** `str` or `Wav2Vec2CTCTokenizerOutput`

The list of decoded sentences. Will be a `Wav2Vec2CTCTokenizerOutput` when `output_char_offsets == True` or `output_word_offsets == True`.

Converts a sequence of ids into a string, using the tokenizer and vocabulary, with options to remove special tokens and clean up tokenization spaces.

Similar to doing `self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids))`.

Example:

```python
>>> # Let's see how to retrieve time steps for a model
>>> from transformers import AutoTokenizer, AutoFeatureExtractor, AutoModelForCTC
>>> from datasets import load_dataset
>>> import datasets
>>> import torch

>>> # import model,
```
feature extractor, tokenizer</span> <span class="hljs-meta">&gt;&gt;&gt; </span>model = AutoModelForCTC.from_pretrained(<span class="hljs-string">"facebook/wav2vec2-base-960h"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"facebook/wav2vec2-base-960h"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>feature_extractor = AutoFeatureExtractor.from_pretrained(<span class="hljs-string">"facebook/wav2vec2-base-960h"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># load first sample of English common_voice</span> <span class="hljs-meta">&gt;&gt;&gt; </span>dataset = load_dataset(<span class="hljs-string">"common_voice"</span>, <span class="hljs-string">"en"</span>, split=<span class="hljs-string">"train"</span>, streaming=<span class="hljs-literal">True</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>dataset = dataset.cast_column(<span class="hljs-string">"audio"</span>, datasets.Audio(sampling_rate=<span class="hljs-number">16_000</span>)) <span class="hljs-meta">&gt;&gt;&gt; </span>dataset_iter = <span class="hljs-built_in">iter</span>(dataset) <span class="hljs-meta">&gt;&gt;&gt; </span>sample = <span class="hljs-built_in">next</span>(dataset_iter) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># forward sample through model to get greedily predicted transcription ids</span> <span class="hljs-meta">&gt;&gt;&gt; </span>input_values = feature_extractor(sample[<span class="hljs-string">"audio"</span>][<span class="hljs-string">"array"</span>], return_tensors=<span class="hljs-string">"pt"</span>).input_values <span class="hljs-meta">&gt;&gt;&gt; </span>logits = model(input_values).logits[<span class="hljs-number">0</span>] <span class="hljs-meta">&gt;&gt;&gt; </span>pred_ids = torch.argmax(logits, axis=-<span class="hljs-number">1</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># retrieve word stamps (analogous commands for `output_char_offsets`)</span> <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = tokenizer.decode(pred_ids, output_word_offsets=<span class="hljs-literal">True</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># compute `time_offset` in seconds as product of downsampling ratio and sampling_rate</span> <span class="hljs-meta">&gt;&gt;&gt; </span>time_offset = model.config.inputs_to_logits_ratio / feature_extractor.sampling_rate <span class="hljs-meta">&gt;&gt;&gt; </span>word_offsets = [ <span class="hljs-meta">... </span> { <span class="hljs-meta">... </span> <span class="hljs-string">"word"</span>: d[<span class="hljs-string">"word"</span>], <span class="hljs-meta">... </span> <span class="hljs-string">"start_time"</span>: <span class="hljs-built_in">round</span>(d[<span class="hljs-string">"start_offset"</span>] * time_offset, <span class="hljs-number">2</span>), <span class="hljs-meta">... </span> <span class="hljs-string">"end_time"</span>: <span class="hljs-built_in">round</span>(d[<span class="hljs-string">"end_offset"</span>] * time_offset, <span class="hljs-number">2</span>), <span class="hljs-meta">... </span> } <span class="hljs-meta">... </span> <span class="hljs-keyword">for</span> d <span class="hljs-keyword">in</span> outputs.word_offsets <span class="hljs-meta">... 
</span>] <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># compare word offsets with audio `common_voice_en_100038.mp3` online on the dataset viewer:</span> <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># https://huggingface.co/datasets/common_voice/viewer/en/train</span> <span class="hljs-meta">&gt;&gt;&gt; </span>word_offsets[:<span class="hljs-number">3</span>] [{<span class="hljs-string">'word'</span>: <span class="hljs-string">'WHY'</span>, <span class="hljs-string">'start_time'</span>: <span class="hljs-number">1.42</span>, <span class="hljs-string">'end_time'</span>: <span class="hljs-number">1.54</span>}, {<span class="hljs-string">'word'</span>: <span class="hljs-string">'DOES'</span>, <span class="hljs-string">'start_time'</span>: <span class="hljs-number">1.64</span>, <span class="hljs-string">'end_time'</span>: <span class="hljs-number">1.9</span>}, {<span class="hljs-string">'word'</span>: <span class="hljs-string">'MILISANDRA'</span>, <span class="hljs-string">'start_time'</span>: <span class="hljs-number">2.26</span>, <span class="hljs-string">'end_time'</span>: <span class="hljs-number">2.9</span>}]</pre></div></div></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.Wav2Vec2CTCTokenizer.batch_decode"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>batch_decode</span></h4> <a id="transformers.Wav2Vec2CTCTokenizer.batch_decode" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.Wav2Vec2CTCTokenizer.batch_decode"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 
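The same call can return character-level timing instead of word-level timing. The snippet below is a minimal sketch that reuses `tokenizer`, `pred_ids`, and `time_offset` from the example above and assumes the returned `Wav2Vec2CTCTokenizerOutput` exposes a `char_offsets` field whose entries mirror the structure of `word_offsets`:

```python
>>> # character-level variant of the example above (illustrative sketch)
>>> outputs = tokenizer.decode(pred_ids, output_char_offsets=True)
>>> char_offsets = [
...     {
...         "char": d["char"],
...         "start_time": round(d["start_offset"] * time_offset, 2),
...         "end_time": round(d["end_offset"] * time_offset, 2),
...     }
...     for d in outputs.char_offsets
... ]
```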
#### batch_decode

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/tokenization_wav2vec2.py#L474)

( sequences: Union[List[int], List[List[int]], np.ndarray, torch.Tensor, tf.Tensor], skip_special_tokens: bool = False, clean_up_tokenization_spaces: bool = None, output_char_offsets: bool = False, output_word_offsets: bool = False, **kwargs ) → `List[str]` or `Wav2Vec2CTCTokenizerOutput`

Parameters:
- **sequences** (`Union[List[int], List[List[int]], np.ndarray, torch.Tensor, tf.Tensor]`) — List of tokenized input ids. Can be obtained using the `__call__` method.
- **skip_special_tokens** (`bool`, *optional*, defaults to `False`) — Whether or not to remove special tokens in the decoding.
- **clean_up_tokenization_spaces** (`bool`, *optional*) — Whether or not to clean up the tokenization spaces.
- **output_char_offsets** (`bool`, *optional*, defaults to `False`) — Whether or not to output character offsets. Character offsets can be used in combination with the sampling rate and model downsampling rate to compute the time-stamps of transcribed characters. Please take a look at the example of [decode()](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2CTCTokenizer.decode) to better understand how to make use of `output_char_offsets`. [batch_decode()](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2CTCTokenizer.batch_decode) works the same way with batched output.
- **output_word_offsets** (`bool`, *optional*, defaults to `False`) — Whether or not to output word offsets. Word offsets can be used in combination with the sampling rate and model downsampling rate to compute the time-stamps of transcribed words. Please take a look at the example of [decode()](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2CTCTokenizer.decode) to better understand how to make use of `output_word_offsets`. [batch_decode()](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2CTCTokenizer.batch_decode) works the same way with batched output.
- **kwargs** (additional keyword arguments, *optional*) — Will be passed to the underlying model specific decode method.

Returns: `List[str]` or `Wav2Vec2CTCTokenizerOutput` — The list of decoded sentences. Will be a `Wav2Vec2CTCTokenizerOutput` when `output_char_offsets == True` or `output_word_offsets == True`.

Convert a list of lists of token ids into a list of strings by calling decode.
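A minimal sketch of batched decoding, assuming `model`, `tokenizer`, and `input_values` from the `decode()` example above, with `input_values` now holding a batch of utterances:

```python
>>> # decode a whole batch of greedily predicted id sequences at once (illustrative sketch)
>>> batch_logits = model(input_values).logits                # shape: (batch, time, vocab)
>>> batch_pred_ids = torch.argmax(batch_logits, dim=-1)      # shape: (batch, time)
>>> transcriptions = tokenizer.batch_decode(batch_pred_ids)  # List[str]

>>> # with offsets enabled, a Wav2Vec2CTCTokenizerOutput is returned instead of a plain list
>>> outputs = tokenizer.batch_decode(batch_pred_ids, output_word_offsets=True)
```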
#### set_target_lang

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/tokenization_wav2vec2.py#L218)

( target_lang: str )

Set the target language of a nested multi-lingual dictionary.
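This is only relevant for tokenizers whose vocabulary is nested by language, such as the multilingual MMS checkpoints. A minimal sketch follows; the checkpoint name and language codes are illustrative:

```python
>>> from transformers import Wav2Vec2CTCTokenizer

>>> # illustrative: a checkpoint that ships one vocabulary per language
>>> tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("facebook/mms-1b-all", target_lang="eng")
>>> tokenizer.set_target_lang("fra")  # switch the active vocabulary to French
```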
## Wav2Vec2FeatureExtractor

### class transformers.Wav2Vec2FeatureExtractor

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/feature_extraction_wav2vec2.py#L31)

( feature_size = 1, sampling_rate = 16000, padding_value = 0.0, return_attention_mask = False, do_normalize = True, **kwargs )

Parameters:
- **feature_size** (`int`, defaults to 1) — The feature dimension of the extracted features.
- **sampling_rate** (`int`, defaults to 16000) — The sampling rate at which the audio files should be digitalized, expressed in hertz (Hz).
- **padding_value** (`float`, defaults to 0.0) — The value that is used to fill the padding values.
- **do_normalize** (`bool`, *optional*, defaults to `True`) — Whether or not to zero-mean unit-variance normalize the input. Normalizing can help to significantly improve the performance for some models, *e.g.*, [wav2vec2-lv60](https://huggingface.co/models?search=lv60).
- **return_attention_mask** (`bool`, *optional*, defaults to `False`) — Whether or not [`__call__()`](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2FeatureExtractor.__call__) should return `attention_mask`.

  Wav2Vec2 models that have set `config.feat_extract_norm == "group"`, such as [wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base-960h), have **not** been trained using `attention_mask`. For such models, `input_values` should simply be padded with 0 and no `attention_mask` should be passed.

  For Wav2Vec2 models that have set `config.feat_extract_norm == "layer"`, such as [wav2vec2-lv60](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self), `attention_mask` should be passed for batched inference.

Constructs a Wav2Vec2 feature extractor.

This feature extractor inherits from [SequenceFeatureExtractor](/docs/transformers/v4.34.0/en/main_classes/feature_extractor#transformers.SequenceFeatureExtractor), which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
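A minimal usage sketch for the feature extractor; the checkpoint name and the dummy audio are placeholders:

```python
>>> import numpy as np
>>> from transformers import Wav2Vec2FeatureExtractor

>>> # illustrative: load the feature extractor shipped with a Wav2Vec2 checkpoint
>>> feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")

>>> # one second of placeholder mono audio at 16 kHz
>>> speech = np.zeros(16_000, dtype=np.float32)
>>> inputs = feature_extractor(speech, sampling_rate=16_000, return_tensors="pt")
>>> # inputs.input_values is the (optionally normalized) waveform with a batch dimension, shape (1, 16000)
```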
#### `__call__`

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/feature_extraction_wav2vec2.py#L102)

( raw_speech: Union[np.ndarray, List[float], List[np.ndarray], List[List[float]]], padding: Union[bool, str, PaddingStrategy] = False, max_length: Optional[int] = None, truncation: bool = False, pad_to_multiple_of: Optional[int] = None, return_attention_mask: Optional[bool] = None, return_tensors: Union[str, TensorType, None] = None, sampling_rate: Optional[int] = None, **kwargs )

Parameters:
- **raw_speech** (`np.ndarray`, `List[float]`, `List[np.ndarray]`, `List[List[float]]`) — The sequence or batch of sequences to be padded. Each sequence can be a numpy array, a list of float values, a list of numpy arrays or a list of lists of float values. Must be mono channel audio, not stereo, i.e. a single float per timestep.
- **padding** (`bool`, `str` or [PaddingStrategy](/docs/transformers/v4.34.0/en/internal/file_utils#transformers.utils.PaddingStrategy), *optional*, defaults to `False`) — Select a strategy to pad the returned sequences (according to the model's padding side and padding index) among:
  - `True` or `'longest'`: Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
  - `'max_length'`: Pad to a maximum length specified with the argument `max_length` or to the maximum acceptable input length for the model if that argument is not provided.
  - `False` or `'do_not_pad'` (default): No padding (i.e., can output a batch with sequences of different lengths).
- **max_length** (`int`, *optional*) — Maximum length of the returned list and optionally padding length (see above).
- **truncation** (`bool`) — Activates truncation to cut input sequences longer than *max_length* to *max_length*.
- **pad_to_multiple_of** (`int`, *optional*) — If set, will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability `>= 7.5` (Volta), or on TPUs which benefit from having sequence lengths be a multiple of 128.
- **return_attention_mask** (`bool`, *optional*) — Whether to return the attention mask. If left to the default, will return the attention mask according to the specific feature_extractor's default. [What are attention masks?](../glossary#attention-mask)

  Wav2Vec2 models that have set `config.feat_extract_norm == "group"`, such as [wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base-960h), have **not** been trained using `attention_mask`. For such models, `input_values` should simply be padded with 0 and no `attention_mask` should be passed.

  For Wav2Vec2 models that have set `config.feat_extract_norm == "layer"`, such as [wav2vec2-lv60](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self), `attention_mask` should be passed for batched inference.
- **return_tensors** (`str` or [TensorType](/docs/transformers/v4.34.0/en/internal/file_utils#transformers.TensorType), *optional*) — If set, will return tensors instead of a list of python integers. Acceptable values are:
  - `'tf'`: Return TensorFlow `tf.constant` objects.
  - `'pt'`: Return PyTorch `torch.Tensor` objects.
  - `'np'`: Return Numpy `np.ndarray` objects.
- **sampling_rate** (`int`, *optional*) — The sampling rate at which the `raw_speech` input was sampled.
It is strongly recommended to pass <code>sampling_rate</code> at the forward call to prevent silent errors.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2FeatureExtractor.__call__.padding_value" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2FeatureExtractor.__call__.padding_value"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>padding_value</strong> (<code>float</code>, defaults to 0.0) —</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1a6wgfx">Main method to featurize and prepare for the model one or several sequence(s).</p></div></div> <h2 class="relative group"><a id="transformers.Wav2Vec2Processor" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Processor"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1qka2if">Wav2Vec2Processor</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.Wav2Vec2Processor"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" 
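For orientation, here is a minimal sketch of such a featurization call. It uses the [wav2vec2-lv60](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self) checkpoint mentioned above (a `feat_extract_norm == "layer"` model, for which returning and passing an attention mask is appropriate), and the zero arrays simply stand in for real 16 kHz waveforms:

```python
import numpy as np
from transformers import Wav2Vec2FeatureExtractor

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-large-960h-lv60-self")

# Two placeholder utterances of different lengths (1 s and 1.5 s at 16 kHz).
raw_speech = [np.zeros(16_000, dtype=np.float32), np.zeros(24_000, dtype=np.float32)]

inputs = feature_extractor(
    raw_speech,
    sampling_rate=16_000,        # always pass this to avoid silent errors
    padding=True,                # pad the batch to the longest utterance
    return_attention_mask=True,  # appropriate for "layer"-norm checkpoints
    return_tensors="pt",
)
print(inputs.input_values.shape, inputs.attention_mask.shape)  # both (2, 24000)
```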
## Wav2Vec2Processor

### class transformers.Wav2Vec2Processor

`( feature_extractor, tokenizer )` [[source]](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/processing_wav2vec2.py#L26)

Parameters:

- **feature_extractor** (`Wav2Vec2FeatureExtractor`) — An instance of [Wav2Vec2FeatureExtractor](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2FeatureExtractor). The feature extractor is a required input.
- **tokenizer** ([PreTrainedTokenizer](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer)) — An instance of [PreTrainedTokenizer](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer). The tokenizer is a required input.

Constructs a Wav2Vec2 processor which wraps a Wav2Vec2 feature extractor and a Wav2Vec2 CTC tokenizer into a single processor.

[Wav2Vec2Processor](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor) offers all the functionalities of [Wav2Vec2FeatureExtractor](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2FeatureExtractor) and [PreTrainedTokenizer](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer). See the docstring of [`__call__()`](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor.__call__) and [decode()](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor.decode) for more information.

#### `__call__`

`( *args, **kwargs )` [[source]](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/processing_wav2vec2.py#L67)

When used in normal mode, this method forwards all its arguments to Wav2Vec2FeatureExtractor's [`__call__()`](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2FeatureExtractor.__call__) and returns its output. If used in the context `as_target_processor()` this method forwards all its arguments to PreTrainedTokenizer's [`__call__()`](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__). Please refer to the docstring of the above two methods for more information.
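As a rough sketch of both modes (the checkpoint name and the dummy waveform are only placeholders), the snippet below featurizes audio in normal mode and tokenizes a transcription inside the `as_target_processor()` context:

```python
import numpy as np
from transformers import Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")

speech = np.zeros(16_000, dtype=np.float32)  # placeholder for a 16 kHz waveform

# Normal mode: arguments are forwarded to the feature extractor.
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

# Target mode: inside the context manager, arguments go to the tokenizer instead,
# which is useful for preparing CTC labels during training.
with processor.as_target_processor():
    labels = processor("HELLO WORLD", return_tensors="pt").input_ids
```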
#### pad

`( *args, **kwargs )` [[source]](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/processing_wav2vec2.py#L105)

When used in normal mode, this method forwards all its arguments to Wav2Vec2FeatureExtractor's [pad()](/docs/transformers/v4.34.0/en/main_classes/feature_extractor#transformers.SequenceFeatureExtractor.pad) and returns its output. If used in the context `as_target_processor()` this method forwards all its arguments to PreTrainedTokenizer's [pad()](/docs/transformers/v4.34.0/en/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.pad). Please refer to the docstring of the above two methods for more information.
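The sketch below illustrates one common use of normal-mode `pad`: batching together feature dicts of unequal length, as is typically done inside a CTC data collator (the checkpoint and dummy arrays are placeholders):

```python
import numpy as np
from transformers import Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")

# Features of unequal length, e.g. produced one example at a time by a dataset map.
features = [
    {"input_values": np.zeros(16_000, dtype=np.float32)},
    {"input_values": np.zeros(24_000, dtype=np.float32)},
]

# Normal mode: forwarded to the feature extractor's pad(), returns a padded batch.
batch = processor.pad(features, padding=True, return_tensors="pt")
print(batch.input_values.shape)  # torch.Size([2, 24000])
```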
#### from_pretrained

`( pretrained_model_name_or_path, **kwargs )` [[source]](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/processing_wav2vec2.py#L48)
#### save_pretrained

`( save_directory, push_to_hub: bool = False, **kwargs )` [[source]](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/processing_utils.py#L93)

Parameters:

- **save_directory** (`str` or `os.PathLike`) — Directory where the feature extractor JSON file and the tokenizer files will be saved (the directory will be created if it does not exist).
- **push_to_hub** (`bool`, *optional*, defaults to `False`) — Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the repository you want to push to with `repo_id` (will default to the name of `save_directory` in your namespace).
- **kwargs** (`Dict[str, Any]`, *optional*) — Additional keyword arguments passed along to the [push_to_hub()](/docs/transformers/v4.34.0/en/main_classes/processors#transformers.ProcessorMixin.push_to_hub) method.

Saves the attributes of this processor (feature extractor, tokenizer, …) in the specified directory so that it can be reloaded using the [from_pretrained()](/docs/transformers/v4.34.0/en/model_doc/nougat#transformers.NougatProcessor.from_pretrained) method.

This class method simply calls the feature extractor's [save_pretrained()](/docs/transformers/v4.34.0/en/main_classes/feature_extractor#transformers.FeatureExtractionMixin.save_pretrained) and the tokenizer's [save_pretrained()](/docs/transformers/v4.34.0/en/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.save_pretrained). Please refer to the docstrings of those methods for more information.
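A minimal save-and-reload round trip might look as follows (the checkpoint and the local directory name are placeholders):

```python
from transformers import Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")

# Writes the feature extractor and tokenizer files (preprocessor_config.json,
# vocab.json, tokenizer_config.json, ...) to the given directory.
processor.save_pretrained("./my-wav2vec2-processor")

# Reload the exact same feature extractor + tokenizer pair later.
reloaded = Wav2Vec2Processor.from_pretrained("./my-wav2vec2-processor")
```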
#### batch_decode

`( *args, **kwargs )` [[source]](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/processing_wav2vec2.py#L135)

This method forwards all its arguments to PreTrainedTokenizer's [batch_decode()](/docs/transformers/v4.34.0/en/model_doc/speecht5#transformers.SpeechT5Tokenizer.batch_decode). Please refer to the docstring of this method for more information.

#### decode

`( *args, **kwargs )` [[source]](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/processing_wav2vec2.py#L142)

This method forwards all its arguments to PreTrainedTokenizer's [decode()](/docs/transformers/v4.34.0/en/model_doc/speecht5#transformers.SpeechT5Tokenizer.decode). Please refer to the docstring of this method for more information.
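As a quick sketch of where these decoding helpers fit, a typical CTC inference loop ends with an argmax over the logits followed by `batch_decode`. The checkpoint is a placeholder, `Wav2Vec2ForCTC` is assumed as the acoustic model, and the silent waveform will not produce a meaningful transcription:

```python
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

speech = np.zeros(16_000, dtype=np.float32)  # placeholder 16 kHz waveform
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

predicted_ids = torch.argmax(logits, dim=-1)
transcriptions = processor.batch_decode(predicted_ids)  # one string per batch item
first = processor.decode(predicted_ids[0])              # decode a single sequence
```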
## Wav2Vec2ProcessorWithLM

### class transformers.Wav2Vec2ProcessorWithLM

`( feature_extractor: FeatureExtractionMixin, tokenizer: PreTrainedTokenizerBase, decoder: BeamSearchDecoderCTC )` [[source]](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py#L67)

Parameters:

- **feature_extractor** ([Wav2Vec2FeatureExtractor](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2FeatureExtractor)) — An instance of [Wav2Vec2FeatureExtractor](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2FeatureExtractor). The feature extractor is a required input.
- **tokenizer** ([Wav2Vec2CTCTokenizer](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2CTCTokenizer)) — An instance of [Wav2Vec2CTCTokenizer](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2CTCTokenizer). The tokenizer is a required input.
- **decoder** (`pyctcdecode.BeamSearchDecoderCTC`) — An instance of `pyctcdecode.BeamSearchDecoderCTC`. The decoder is a required input.

Constructs a Wav2Vec2 processor which wraps a Wav2Vec2 feature extractor, a Wav2Vec2 CTC tokenizer and a decoder with language model support into a single processor for language model boosted speech recognition decoding.
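For orientation, the sketch below loads a repository that bundles all three components and runs language-model-boosted decoding. The repository name is only an example of such a checkpoint, the dummy waveform stands in for real 16 kHz audio, and `pyctcdecode` (with a kenlm backend) is assumed to be installed so the decoder can load:

```python
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2ProcessorWithLM

repo = "patrickvonplaten/wav2vec2-base-100h-with-lm"  # example repo with an attached LM
processor = Wav2Vec2ProcessorWithLM.from_pretrained(repo)
model = Wav2Vec2ForCTC.from_pretrained(repo)

speech = np.zeros(16_000, dtype=np.float32)  # placeholder 16 kHz waveform
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Unlike the plain processor, the LM-boosted batch_decode consumes the raw logits
# (as a numpy array) and runs pyctcdecode beam search instead of a simple argmax.
transcription = processor.batch_decode(logits.numpy()).text
```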
dark:hover:text-black">*args<span class="opacity-60"></span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> </div></div> <p data-svelte-h="svelte-1n26gth">When used in normal mode, this method forwards all its arguments to Wav2Vec2FeatureExtractor’s <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2FeatureExtractor.__call__"><strong>call</strong>()</a> and returns its output. If used in the context <code>as_target_processor()</code> this method forwards all its arguments to Wav2Vec2CTCTokenizer’s <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__"><strong>call</strong>()</a>. Please refer to the docstring of the above two methods for more information.</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.Wav2Vec2ProcessorWithLM.pad"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>pad</span></h4> <a id="transformers.Wav2Vec2ProcessorWithLM.pad" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.Wav2Vec2ProcessorWithLM.pad"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto 
#### pad

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py#L253)

( \*args \*\*kwargs )

When used in normal mode, this method forwards all its arguments to Wav2Vec2FeatureExtractor's [pad()](/docs/transformers/v4.34.0/en/main_classes/feature_extractor#transformers.SequenceFeatureExtractor.pad) and returns its output. If used in the context `as_target_processor()` this method forwards all its arguments to Wav2Vec2CTCTokenizer's [pad()](/docs/transformers/v4.34.0/en/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.pad). Please refer to the docstring of the above two methods for more information.
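Because `pad()` is forwarded to the feature extractor in normal mode, it can collate utterances of different lengths into one batch. A minimal sketch, reusing the `processor` and dummy audio from the examples above:

```python
import numpy as np

# two utterances of different lengths, processed without tensor conversion
features = [
    {"input_values": processor(np.zeros(8000), sampling_rate=16000).input_values[0]},
    {"input_values": processor(np.zeros(16000), sampling_rate=16000).input_values[0]},
]

# pad both to the length of the longest utterance
batch = processor.pad(features, padding=True, return_tensors="pt")
print(batch.input_values.shape)  # torch.Size([2, 16000])
```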
role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py#L112" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">pretrained_model_name_or_path<span class="opacity-60"></span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ProcessorWithLM.from_pretrained.pretrained_model_name_or_path" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ProcessorWithLM.from_pretrained.pretrained_model_name_or_path"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>pretrained_model_name_or_path</strong> (<code>str</code> or <code>os.PathLike</code>) — This can be either:<p></p> <ul> <li>a string, the <em>model id</em> of a pretrained feature_extractor hosted inside a model repo on huggingface.co. 
Valid model ids can be located at the root-level, like <code>bert-base-uncased</code>, or namespaced under a user or organization name, like <code>dbmdz/bert-base-german-cased</code>.</li> <li>a path to a <em>directory</em> containing a feature extractor file saved using the <a href="/docs/transformers/v4.34.0/en/main_classes/feature_extractor#transformers.FeatureExtractionMixin.save_pretrained">save_pretrained()</a> method, e.g., <code>./my_model_directory/</code>.</li> <li>a path or url to a saved feature extractor JSON <em>file</em>, e.g., <code>./my_model_directory/preprocessor_config.json</code>. **kwargs — Additional keyword arguments passed along to both <a href="/docs/transformers/v4.34.0/en/main_classes/feature_extractor#transformers.SequenceFeatureExtractor">SequenceFeatureExtractor</a> and <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer">PreTrainedTokenizer</a></li> </ul></span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1hrrlix">Instantiate a <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2ProcessorWithLM">Wav2Vec2ProcessorWithLM</a> from a pretrained Wav2Vec2 processor.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-1cfs9nd">This class method is simply calling Wav2Vec2FeatureExtractor’s <a href="/docs/transformers/v4.34.0/en/main_classes/feature_extractor#transformers.FeatureExtractionMixin.from_pretrained">from_pretrained()</a>, Wav2Vec2CTCTokenizer’s <a href="/docs/transformers/v4.34.0/en/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.from_pretrained">from_pretrained()</a>, and <code>pyctcdecode.BeamSearchDecoderCTC.load_from_hf_hub</code>.</p> <p data-svelte-h="svelte-1v5a8ev">Please refer to the docstrings of the methods above for more information.</p></div></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.Wav2Vec2ProcessorWithLM.save_pretrained"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 
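For example, loading a checkpoint that already bundles a feature extractor, tokenizer and pyctcdecode language model (the repo name below is one illustration of such a checkpoint):

```python
from transformers import Wav2Vec2ProcessorWithLM

processor = Wav2Vec2ProcessorWithLM.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm")
```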
#### save_pretrained

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py#L108)

( save_directory )
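As a quick sketch, assuming the `processor` loaded above, the processor can be saved to a local directory and reloaded later with `from_pretrained()` (the directory name is just an illustration):

```python
from transformers import Wav2Vec2ProcessorWithLM

processor.save_pretrained("./wav2vec2-with-lm")  # hypothetical local target directory
reloaded = Wav2Vec2ProcessorWithLM.from_pretrained("./wav2vec2-with-lm")
```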
fill="currentColor"></path></svg>batch_decode</span></h4> <a id="transformers.Wav2Vec2ProcessorWithLM.batch_decode" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.Wav2Vec2ProcessorWithLM.batch_decode"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py#L284" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">logits<span class="opacity-60">: ndarray</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">pool<span class="opacity-60">: typing.Union[&lt;bound method BaseContext.Pool of &lt;multiprocessing.context.DefaultContext object at 0x7f0b4ec9b370&gt;&gt;, NoneType] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">num_processes<span class="opacity-60">: typing.Optional[int] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">beam_width<span class="opacity-60">: typing.Optional[int] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">beam_prune_logp<span class="opacity-60">: typing.Optional[float] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_min_logp<span class="opacity-60">: typing.Optional[float] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">hotwords<span class="opacity-60">: typing.Optional[typing.Iterable[str]] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">hotword_weight<span class="opacity-60">: typing.Optional[float] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">alpha<span 
class="opacity-60">: typing.Optional[float] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">beta<span class="opacity-60">: typing.Optional[float] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">unk_score_offset<span class="opacity-60">: typing.Optional[float] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">lm_score_boundary<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_word_offsets<span class="opacity-60">: bool = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">n_best<span class="opacity-60">: int = 1</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 14 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ProcessorWithLM.batch_decode.logits" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ProcessorWithLM.batch_decode.logits"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>logits</strong> (<code>np.ndarray</code>) — The logits output vector of the model representing the log probabilities for each token.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ProcessorWithLM.batch_decode.pool" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ProcessorWithLM.batch_decode.pool"><span><svg 
class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>pool</strong> (<code>multiprocessing.Pool</code>, <em>optional</em>) — An optional user-managed pool. If not set, one will be automatically created and closed. The pool should be instantiated <em>after</em> <code>Wav2Vec2ProcessorWithLM</code>. Otherwise, the LM won’t be available to the pool’s sub-processes.<p></p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"> <p>Currently, only pools created with a ‘fork’ context can be used. If a ‘spawn’ pool is passed, it will be ignored and sequential decoding will be used instead.</p> </div></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ProcessorWithLM.batch_decode.num_processes" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ProcessorWithLM.batch_decode.num_processes"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>num_processes</strong> (<code>int</code>, <em>optional</em>) — If <code>pool</code> is not set, number of processes on which the function should be parallelized over. 
Defaults to the number of available CPUs.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ProcessorWithLM.batch_decode.beam_width" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ProcessorWithLM.batch_decode.beam_width"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>beam_width</strong> (<code>int</code>, <em>optional</em>) — Maximum number of beams at each step in decoding. Defaults to pyctcdecode’s DEFAULT_BEAM_WIDTH.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ProcessorWithLM.batch_decode.beam_prune_logp" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ProcessorWithLM.batch_decode.beam_prune_logp"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>beam_prune_logp</strong> (<code>int</code>, <em>optional</em>) — Beams that are much worse than best beam will be pruned Defaults to pyctcdecode’s DEFAULT_PRUNE_LOGP.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ProcessorWithLM.batch_decode.token_min_logp" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ProcessorWithLM.batch_decode.token_min_logp"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 
8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_min_logp</strong> (<code>int</code>, <em>optional</em>) — Tokens below this logp are skipped unless they are argmax of frame Defaults to pyctcdecode’s DEFAULT_MIN_TOKEN_LOGP.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ProcessorWithLM.batch_decode.hotwords" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ProcessorWithLM.batch_decode.hotwords"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>hotwords</strong> (<code>List[str]</code>, <em>optional</em>) — List of words with extra importance, can be OOV for LM</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ProcessorWithLM.batch_decode.hotword_weight" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ProcessorWithLM.batch_decode.hotword_weight"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>hotword_weight</strong> (<code>int</code>, <em>optional</em>) — Weight factor for hotword importance Defaults to pyctcdecode’s DEFAULT_HOTWORD_WEIGHT.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ProcessorWithLM.batch_decode.alpha" class="header-link block pr-0.5 text-lg 
no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ProcessorWithLM.batch_decode.alpha"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>alpha</strong> (<code>float</code>, <em>optional</em>) — Weight for language model during shallow fusion</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ProcessorWithLM.batch_decode.beta" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ProcessorWithLM.batch_decode.beta"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>beta</strong> (<code>float</code>, <em>optional</em>) — Weight for length score adjustment of during scoring</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ProcessorWithLM.batch_decode.unk_score_offset" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ProcessorWithLM.batch_decode.unk_score_offset"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 
0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>unk_score_offset</strong> (<code>float</code>, <em>optional</em>) — Amount of log score offset for unknown tokens</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ProcessorWithLM.batch_decode.lm_score_boundary" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ProcessorWithLM.batch_decode.lm_score_boundary"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>lm_score_boundary</strong> (<code>bool</code>, <em>optional</em>) — Whether to have kenlm respect boundaries when scoring</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ProcessorWithLM.batch_decode.output_word_offsets" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ProcessorWithLM.batch_decode.output_word_offsets"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_word_offsets</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether or not to output word offsets. 
Word offsets can be used in combination with the sampling rate and model downsampling rate to compute the time-stamps of transcribed words.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ProcessorWithLM.batch_decode.n_best" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ProcessorWithLM.batch_decode.n_best"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>n_best</strong> (<code>int</code>, <em>optional</em>, defaults to <code>1</code>) — Number of best hypotheses to return. If <code>n_best</code> is greater than 1, the returned <code>text</code> will be a list of lists of strings, <code>logit_score</code> will be a list of lists of floats, and <code>lm_score</code> will be a list of lists of floats, where the length of the outer list will correspond to the batch size and the length of the inner list will correspond to the number of returned hypotheses . The value should be &gt;= 1.<p></p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"> <p>Please take a look at the Example of <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2ProcessorWithLM.decode">decode()</a> to better understand how to make use of <code>output_word_offsets</code>. <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2ProcessorWithLM.batch_decode">batch_decode()</a> works the same way with batched output.</p> </div></span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1xixice">Batch decode output logits to audio transcription with language model support.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-mir1wd">This function makes use of Python’s multiprocessing. Currently, multiprocessing is available only on Unix systems (see this <a href="https://github.com/kensho-technologies/pyctcdecode/issues/65" rel="nofollow">issue</a>).</p> <p data-svelte-h="svelte-4v42s1">If you are decoding multiple batches, consider creating a <code>Pool</code> and passing it to <code>batch_decode</code>. Otherwise, <code>batch_decode</code> will be very slow since it will create a fresh <code>Pool</code> for each call. 
See usage example below.</p></div> <p data-svelte-h="svelte-1g4be31">Example: See <a href="#decoding-multiple-audios">Decoding multiple audios</a>.</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.Wav2Vec2ProcessorWithLM.decode"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>decode</span></h4> <a id="transformers.Wav2Vec2ProcessorWithLM.decode" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.Wav2Vec2ProcessorWithLM.decode"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py#L469" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">logits<span class="opacity-60">: ndarray</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white 
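As a rough sketch of the pooled pattern described above, assuming `model` is a CTC acoustic model (e.g. a `Wav2Vec2ForCTC`) matching the processor and `inputs` is a padded batch produced by the processor, the same `Pool` can be reused across calls; note it must use the 'fork' start method:

```python
from multiprocessing import get_context

import torch

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch_size, sequence_length, vocab_size)

# create the pool *after* the processor so the LM is visible to the worker processes
with get_context("fork").Pool(processes=4) as pool:
    outputs = processor.batch_decode(logits.cpu().numpy(), pool=pool)

print(outputs.text)  # one transcription per audio in the batch
```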
dark:hover:text-black">beam_width<span class="opacity-60">: typing.Optional[int] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">beam_prune_logp<span class="opacity-60">: typing.Optional[float] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_min_logp<span class="opacity-60">: typing.Optional[float] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">hotwords<span class="opacity-60">: typing.Optional[typing.Iterable[str]] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">hotword_weight<span class="opacity-60">: typing.Optional[float] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">alpha<span class="opacity-60">: typing.Optional[float] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">beta<span class="opacity-60">: typing.Optional[float] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">unk_score_offset<span class="opacity-60">: typing.Optional[float] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">lm_score_boundary<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_word_offsets<span class="opacity-60">: bool = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">n_best<span class="opacity-60">: int = 1</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 12 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ProcessorWithLM.decode.logits" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ProcessorWithLM.decode.logits"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 
256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>logits</strong> (<code>np.ndarray</code>) — The logits output vector of the model representing the log probabilities for each token.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ProcessorWithLM.decode.beam_width" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ProcessorWithLM.decode.beam_width"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>beam_width</strong> (<code>int</code>, <em>optional</em>) — Maximum number of beams at each step in decoding. Defaults to pyctcdecode’s DEFAULT_BEAM_WIDTH.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ProcessorWithLM.decode.beam_prune_logp" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ProcessorWithLM.decode.beam_prune_logp"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>beam_prune_logp</strong> (<code>int</code>, <em>optional</em>) — A threshold to prune beams with log-probs less than best_beam_logp + beam_prune_logp. The value should be &lt;= 0. 
Defaults to pyctcdecode’s DEFAULT_PRUNE_LOGP.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ProcessorWithLM.decode.token_min_logp" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ProcessorWithLM.decode.token_min_logp"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_min_logp</strong> (<code>int</code>, <em>optional</em>) — Tokens with log-probs below token_min_logp are skipped unless they are have the maximum log-prob for an utterance. Defaults to pyctcdecode’s DEFAULT_MIN_TOKEN_LOGP.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ProcessorWithLM.decode.hotwords" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ProcessorWithLM.decode.hotwords"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>hotwords</strong> (<code>List[str]</code>, <em>optional</em>) — List of words with extra importance which can be missing from the LM’s vocabulary, e.g. 
[“huggingface”]</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ProcessorWithLM.decode.hotword_weight" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ProcessorWithLM.decode.hotword_weight"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>hotword_weight</strong> (<code>int</code>, <em>optional</em>) — Weight multiplier that boosts hotword scores. Defaults to pyctcdecode’s DEFAULT_HOTWORD_WEIGHT.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ProcessorWithLM.decode.alpha" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ProcessorWithLM.decode.alpha"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>alpha</strong> (<code>float</code>, <em>optional</em>) — Weight for language model during shallow fusion</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ProcessorWithLM.decode.beta" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ProcessorWithLM.decode.beta"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 
79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>beta</strong> (<code>float</code>, <em>optional</em>) — Weight for length score adjustment of during scoring</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ProcessorWithLM.decode.unk_score_offset" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ProcessorWithLM.decode.unk_score_offset"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>unk_score_offset</strong> (<code>float</code>, <em>optional</em>) — Amount of log score offset for unknown tokens</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ProcessorWithLM.decode.lm_score_boundary" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ProcessorWithLM.decode.lm_score_boundary"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>lm_score_boundary</strong> (<code>bool</code>, <em>optional</em>) — Whether to have kenlm respect boundaries when scoring</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ProcessorWithLM.decode.output_word_offsets" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ProcessorWithLM.decode.output_word_offsets"><span><svg class="text-smd" 
xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_word_offsets</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether or not to output word offsets. Word offsets can be used in combination with the sampling rate and model downsampling rate to compute the time-stamps of transcribed words.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ProcessorWithLM.decode.n_best" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ProcessorWithLM.decode.n_best"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>n_best</strong> (<code>int</code>, <em>optional</em>, defaults to <code>1</code>) — Number of best hypotheses to return. If <code>n_best</code> is greater than 1, the returned <code>text</code> will be a list of strings, <code>logit_score</code> will be a list of floats, and <code>lm_score</code> will be a list of floats, where the length of these lists will correspond to the number of returned hypotheses. 
The value should be &gt;= 1.<p></p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"> <p>Please take a look at the example below to better understand how to make use of <code>output_word_offsets</code>.</p> </div></span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1e03u9y">Decode output logits to audio transcription with language model support.</p> <div class="relative group rounded-md"><a id="transformers.Wav2Vec2ProcessorWithLM.decode.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ProcessorWithLM.decode.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># Let's see how to retrieve time steps for a model</span> <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, AutoProcessor, AutoModelForCTC <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> datasets <span class="hljs-keyword">import</span> load_dataset <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> datasets <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span 
class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># import model, feature extractor, tokenizer</span> <span class="hljs-meta">&gt;&gt;&gt; </span>model = AutoModelForCTC.from_pretrained(<span class="hljs-string">"patrickvonplaten/wav2vec2-base-100h-with-lm"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>processor = AutoProcessor.from_pretrained(<span class="hljs-string">"patrickvonplaten/wav2vec2-base-100h-with-lm"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># load first sample of English common_voice</span> <span class="hljs-meta">&gt;&gt;&gt; </span>dataset = load_dataset(<span class="hljs-string">"common_voice"</span>, <span class="hljs-string">"en"</span>, split=<span class="hljs-string">"train"</span>, streaming=<span class="hljs-literal">True</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>dataset = dataset.cast_column(<span class="hljs-string">"audio"</span>, datasets.Audio(sampling_rate=<span class="hljs-number">16_000</span>)) <span class="hljs-meta">&gt;&gt;&gt; </span>dataset_iter = <span class="hljs-built_in">iter</span>(dataset) <span class="hljs-meta">&gt;&gt;&gt; </span>sample = <span class="hljs-built_in">next</span>(dataset_iter) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># forward sample through model to get greedily predicted transcription ids</span> <span class="hljs-meta">&gt;&gt;&gt; </span>input_values = processor(sample[<span class="hljs-string">"audio"</span>][<span class="hljs-string">"array"</span>], return_tensors=<span class="hljs-string">"pt"</span>).input_values <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">with</span> torch.no_grad(): <span class="hljs-meta">... </span> logits = model(input_values).logits[<span class="hljs-number">0</span>].cpu().numpy() <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># retrieve word stamps (analogous commands for `output_char_offsets`)</span> <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = processor.decode(logits, output_word_offsets=<span class="hljs-literal">True</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># compute `time_offset` in seconds as product of downsampling ratio and sampling_rate</span> <span class="hljs-meta">&gt;&gt;&gt; </span>time_offset = model.config.inputs_to_logits_ratio / processor.feature_extractor.sampling_rate <span class="hljs-meta">&gt;&gt;&gt; </span>word_offsets = [ <span class="hljs-meta">... </span> { <span class="hljs-meta">... </span> <span class="hljs-string">"word"</span>: d[<span class="hljs-string">"word"</span>], <span class="hljs-meta">... </span> <span class="hljs-string">"start_time"</span>: <span class="hljs-built_in">round</span>(d[<span class="hljs-string">"start_offset"</span>] * time_offset, <span class="hljs-number">2</span>), <span class="hljs-meta">... </span> <span class="hljs-string">"end_time"</span>: <span class="hljs-built_in">round</span>(d[<span class="hljs-string">"end_offset"</span>] * time_offset, <span class="hljs-number">2</span>), <span class="hljs-meta">... </span> } <span class="hljs-meta">... </span> <span class="hljs-keyword">for</span> d <span class="hljs-keyword">in</span> outputs.word_offsets <span class="hljs-meta">... 
</span>] <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># compare word offsets with audio `common_voice_en_100038.mp3` online on the dataset viewer:</span> <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># https://huggingface.co/datasets/common_voice/viewer/en/train</span> <span class="hljs-meta">&gt;&gt;&gt; </span>word_offsets[:<span class="hljs-number">4</span>] [{<span class="hljs-string">'word'</span>: <span class="hljs-string">'WHY'</span>, <span class="hljs-string">'start_time'</span>: <span class="hljs-number">1.42</span>, <span class="hljs-string">'end_time'</span>: <span class="hljs-number">1.54</span>}, {<span class="hljs-string">'word'</span>: <span class="hljs-string">'DOES'</span>, <span class="hljs-string">'start_time'</span>: <span class="hljs-number">1.66</span>, <span class="hljs-string">'end_time'</span>: <span class="hljs-number">1.9</span>}, {<span class="hljs-string">'word'</span>: <span class="hljs-string">'MILISANDRA'</span>, <span class="hljs-string">'start_time'</span>: <span class="hljs-number">2.26</span>, <span class="hljs-string">'end_time'</span>: <span class="hljs-number">2.9</span>}, {<span class="hljs-string">'word'</span>: <span class="hljs-string">'LOOK'</span>, <span class="hljs-string">'start_time'</span>: <span class="hljs-number">3.0</span>, <span class="hljs-string">'end_time'</span>: <span class="hljs-number">3.16</span>}]</pre></div></div></div></div> <h3 class="relative group"><a id="decoding-multiple-audios" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#decoding-multiple-audios"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1uikt7r">Decoding multiple audios</span></h3> <p data-svelte-h="svelte-dowsih">If you are planning to decode multiple batches of audios, you should consider using <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2ProcessorWithLM.batch_decode">batch_decode()</a> and passing an instantiated <code>multiprocessing.Pool</code>. Otherwise, <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2ProcessorWithLM.batch_decode">batch_decode()</a> performance will be slower than calling <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2ProcessorWithLM.decode">decode()</a> for each audio individually, as it internally instantiates a new <code>Pool</code> for every call. 
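The `n_best` argument follows the same pattern. As a minimal sketch (reusing the `logits` array computed above, and assuming the beam search keeps at least three beams), asking for several hypotheses turns the scalar output fields into lists:

```python
>>> # hedged sketch: return the 3 best beams instead of only the top one
>>> outputs = processor.decode(logits, n_best=3)
>>> len(outputs.text)      # 3 candidate transcriptions, best first
>>> len(outputs.lm_score)  # 3 fused LM scores, aligned with `outputs.text`
```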
### Decoding multiple audios

If you are planning to decode multiple batches of audios, you should consider using [batch_decode()](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2ProcessorWithLM.batch_decode) and passing an instantiated `multiprocessing.Pool`. Otherwise, [batch_decode()](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2ProcessorWithLM.batch_decode) performance will be slower than calling [decode()](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2ProcessorWithLM.decode) for each audio individually, as it internally instantiates a new `Pool` for every call. See the example below:

```python
>>> # Let's see how to use a user-managed pool for batch decoding multiple audios
>>> from multiprocessing import get_context
>>> from transformers import AutoTokenizer, AutoProcessor, AutoModelForCTC
>>> from datasets import load_dataset
>>> import datasets
>>> import torch

>>> # import model, feature extractor, tokenizer
>>> model = AutoModelForCTC.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm").to("cuda")
>>> processor = AutoProcessor.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm")

>>> # load example dataset
>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> dataset = dataset.cast_column("audio", datasets.Audio(sampling_rate=16_000))


>>> def map_to_array(batch):
...     batch["speech"] = batch["audio"]["array"]
...     return batch


>>> # prepare speech data for batch inference
>>> dataset = dataset.map(map_to_array, remove_columns=["audio"])


>>> def map_to_pred(batch, pool):
...     inputs = processor(batch["speech"], sampling_rate=16_000, padding=True, return_tensors="pt")
...     inputs = {k: v.to("cuda") for k, v in inputs.items()}
...
...     with torch.no_grad():
...         logits = model(**inputs).logits
...
...     transcription = processor.batch_decode(logits.cpu().numpy(), pool).text
...     batch["transcription"] = transcription
...     return batch


>>> # note: pool should be instantiated *after* `Wav2Vec2ProcessorWithLM`.
>>> # otherwise, the LM won't be available to the pool's sub-processes
>>> # select number of processes and batch_size based on number of CPU cores available and on dataset size
>>> with get_context("fork").Pool(processes=2) as pool:
...     result = dataset.map(
...         map_to_pred, batched=True, batch_size=2, fn_kwargs={"pool": pool}, remove_columns=["speech"]
...     )

>>> result["transcription"][:2]
['MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL', "NOR IS MISTER COULTER'S MANNER LESS INTERESTING THAN HIS MATTER"]
```
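The pool does not have to be threaded through `datasets.map`. As a rough sketch (with `first_logits` and `second_logits` standing in for two batches of logits you have already computed, so they are not part of the example above), the point is simply that one pool is created once and reused for every `batch_decode()` call:

```python
>>> # hedged sketch: reuse a single pool across several batch_decode calls,
>>> # which is what amortises the worker start-up cost
>>> with get_context("fork").Pool(processes=2) as pool:
...     first = processor.batch_decode(first_logits, pool).text
...     second = processor.batch_decode(second_logits, pool).text
```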
role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py#L45" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">text<span class="opacity-60">: typing.Union[typing.List[typing.List[str]], typing.List[str], str]</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">logit_score<span class="opacity-60">: typing.Union[typing.List[typing.List[float]], typing.List[float], float] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">lm_score<span class="opacity-60">: typing.Union[typing.List[typing.List[float]], typing.List[float], float] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">word_offsets<span class="opacity-60">: typing.Union[typing.List[typing.List[typing.List[typing.Dict[str, typing.Union[int, str]]]]], typing.List[typing.List[typing.Dict[str, typing.Union[int, str]]]], typing.List[typing.Dict[str, typing.Union[int, str]]]] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.models.wav2vec2_with_lm.processing_wav2vec2_with_lm.Wav2Vec2DecoderWithLMOutput.text" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.models.wav2vec2_with_lm.processing_wav2vec2_with_lm.Wav2Vec2DecoderWithLMOutput.text"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 
28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>text</strong> (list of <code>str</code> or <code>str</code>) — Decoded logits in text from. Usually the speech transcription.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.models.wav2vec2_with_lm.processing_wav2vec2_with_lm.Wav2Vec2DecoderWithLMOutput.logit_score" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.models.wav2vec2_with_lm.processing_wav2vec2_with_lm.Wav2Vec2DecoderWithLMOutput.logit_score"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>logit_score</strong> (list of <code>float</code> or <code>float</code>) — Total logit score of the beams associated with produced text.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.models.wav2vec2_with_lm.processing_wav2vec2_with_lm.Wav2Vec2DecoderWithLMOutput.lm_score" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.models.wav2vec2_with_lm.processing_wav2vec2_with_lm.Wav2Vec2DecoderWithLMOutput.lm_score"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>lm_score</strong> (list of <code>float</code>) — Fused lm_score of the beams associated with produced text.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a 
id="transformers.models.wav2vec2_with_lm.processing_wav2vec2_with_lm.Wav2Vec2DecoderWithLMOutput.word_offsets" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.models.wav2vec2_with_lm.processing_wav2vec2_with_lm.Wav2Vec2DecoderWithLMOutput.word_offsets"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>word_offsets</strong> (list of <code>List[Dict[str, Union[int, str]]]</code> or <code>List[Dict[str, Union[int, str]]]</code>) — Offsets of the decoded words. In combination with sampling rate and model downsampling rate word offsets can be used to compute time stamps for each word.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-amzs60">Output type of <code>Wav2Vec2DecoderWithLM</code>, with transcription.</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.modeling_outputs.Wav2Vec2BaseModelOutput"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.modeling_outputs.</span><span class="font-semibold">Wav2Vec2BaseModelOutput</span></span></h3> <a id="transformers.modeling_outputs.Wav2Vec2BaseModelOutput" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.modeling_outputs.Wav2Vec2BaseModelOutput"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 
### class transformers.modeling_outputs.Wav2Vec2BaseModelOutput

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/modeling_outputs.py#L1286)

( last_hidden_state: FloatTensor = None, extract_features: FloatTensor = None, hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None, attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None )

Parameters:

- **last_hidden_state** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`) — Sequence of hidden-states at the output of the last layer of the model.
- **extract_features** (`torch.FloatTensor` of shape `(batch_size, sequence_length, conv_dim[-1])`) — Sequence of extracted feature vectors of the last convolutional layer of the model.
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Base class for models that have been trained with the Wav2Vec2 loss objective.
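A minimal sketch of where this output shows up, assuming the base checkpoint `facebook/wav2vec2-base-960h` and one second of dummy audio at 16 kHz (both are illustrative choices, not part of the API reference):

```python
>>> from transformers import AutoProcessor, Wav2Vec2Model
>>> import torch

>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")
>>> model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base-960h")

>>> # one second of silence stands in for real audio
>>> inputs = processor([0.0] * 16_000, sampling_rate=16_000, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)  # a Wav2Vec2BaseModelOutput

>>> outputs.last_hidden_state.shape  # (batch_size, sequence_length, hidden_size)
>>> outputs.extract_features.shape   # (batch_size, sequence_length, conv_dim[-1])
```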
href="#transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForPreTrainingOutput"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L100" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">loss<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">projected_states<span class="opacity-60">: FloatTensor = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">projected_quantized_states<span class="opacity-60">: FloatTensor = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">codevector_perplexity<span class="opacity-60">: FloatTensor = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">hidden_states<span class="opacity-60">: typing.Optional[typing.Tuple[torch.FloatTensor]] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attentions<span class="opacity-60">: typing.Optional[typing.Tuple[torch.FloatTensor]] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">contrastive_loss<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">diversity_loss<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 
hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 7 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForPreTrainingOutput.loss" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForPreTrainingOutput.loss"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>loss</strong> (<em>optional</em>, returned when <code>sample_negative_indices</code> are passed, <code>torch.FloatTensor</code> of shape <code>(1,)</code>) — Total loss as the sum of the contrastive loss (L_m) and the diversity loss (L_d) as stated in the <a href="https://arxiv.org/pdf/2006.11477.pdf" rel="nofollow">official paper</a> . 
- **projected_states** (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.proj_codevector_dim)`) — Hidden-states of the model projected to *config.proj_codevector_dim* that can be used to predict the masked projected quantized states.
- **projected_quantized_states** (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.proj_codevector_dim)`) — Quantized extracted feature vectors projected to *config.proj_codevector_dim*, representing the positive target vectors for the contrastive loss.
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
- **contrastive_loss** (*optional*, returned when `sample_negative_indices` are passed, `torch.FloatTensor` of shape `(1,)`) — The contrastive loss (L_m) as stated in the [official paper](https://arxiv.org/pdf/2006.11477.pdf).
- **diversity_loss** (*optional*, returned when `sample_negative_indices` are passed, `torch.FloatTensor` of shape `(1,)`) — The diversity loss (L_d) as stated in the [official paper](https://arxiv.org/pdf/2006.11477.pdf).

Output type of [Wav2Vec2ForPreTraining](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2ForPreTraining), with potential hidden states and attentions.
### class transformers.models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2BaseModelOutput

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_flax_wav2vec2.py#L45)

( last_hidden_state: Array = None extract_features: Array = None hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None attentions: typing.Optional[typing.Tuple[jax.Array]] = None )
Parameters

- **last_hidden_state** (`jnp.ndarray` of shape `(batch_size, sequence_length, hidden_size)`) — Sequence of hidden-states at the output of the last layer of the model.
- **extract_features** (`jnp.ndarray` of shape `(batch_size, sequence_length, last_conv_dim)`) — Sequence of extracted feature vectors of the last convolutional layer of the model, with `last_conv_dim` being the dimension of the last convolutional layer.
- **hidden_states** (`tuple(jnp.ndarray)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `jnp.ndarray` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- **attentions** (`tuple(jnp.ndarray)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `jnp.ndarray` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
data-svelte-h="svelte-11w6tdr">Output type of <code>FlaxWav2Vec2BaseModelOutput</code>, with potential hidden states and attentions.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2BaseModelOutput.replace"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>replace</span></h4> <a id="transformers.models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2BaseModelOutput.replace" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2BaseModelOutput.replace"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/flax/struct.py#L111" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**updates<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div 
class="!mb-10 relative docstring-details "> </div></div> <p data-svelte-h="svelte-5ihtpa">“Returns a new object replacing the specified fields with new values.</p></div></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2ForPreTrainingOutput"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.models.wav2vec2.modeling_flax_wav2vec2.</span><span class="font-semibold">FlaxWav2Vec2ForPreTrainingOutput</span></span></h3> <a id="transformers.models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2ForPreTrainingOutput" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2ForPreTrainingOutput"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_flax_wav2vec2.py#L75" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">projected_states<span class="opacity-60">: Array = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white 
dark:hover:text-black">projected_quantized_states<span class="opacity-60">: Array = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">codevector_perplexity<span class="opacity-60">: Array = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">hidden_states<span class="opacity-60">: typing.Optional[typing.Tuple[jax.Array]] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attentions<span class="opacity-60">: typing.Optional[typing.Tuple[jax.Array]] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 5 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2ForPreTrainingOutput.loss" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2ForPreTrainingOutput.loss"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>loss</strong> (<em>optional</em>, returned when model is in train mode, <code>jnp.ndarray</code> of shape <code>(1,)</code>) — Total loss as the sum of the contrastive loss (L_m) and the diversity loss (L_d) as stated in the <a href="https://arxiv.org/pdf/2006.11477.pdf" rel="nofollow">official paper</a> . 
(classification) loss.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2ForPreTrainingOutput.projected_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2ForPreTrainingOutput.projected_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>projected_states</strong> (<code>jnp.ndarray</code> of shape <code>(batch_size, sequence_length, config.proj_codevector_dim)</code>) — Hidden-states of the model projected to <em>config.proj_codevector_dim</em> that can be used to predict the masked projected quantized states.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2ForPreTrainingOutput.projected_quantized_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2ForPreTrainingOutput.projected_quantized_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>projected_quantized_states</strong> (<code>jnp.ndarray</code> of shape <code>(batch_size, sequence_length, config.proj_codevector_dim)</code>) — Quantized extracted feature vectors projected to <em>config.proj_codevector_dim</em> representing the positive target vectors for contrastive loss.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2ForPreTrainingOutput.hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 
with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2ForPreTrainingOutput.hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>hidden_states</strong> (<code>tuple(jnp.ndarray)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>jnp.ndarray</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.<p></p> <p>Hidden-states of the model at the output of each layer plus the initial embedding outputs.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2ForPreTrainingOutput.attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2ForPreTrainingOutput.attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attentions</strong> (<code>tuple(jnp.ndarray)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>jnp.ndarray</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.<p></p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p></span></span> </li></ul> </div></div> <p data-svelte-h="svelte-ldrfqp">Output type of <code>FlaxWav2Vec2ForPreTrainingOutput</code>, with potential hidden states and attentions.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 
pt-3 px-2.5" id="transformers.models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2ForPreTrainingOutput.replace"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>replace</span></h4> <a id="transformers.models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2ForPreTrainingOutput.replace" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2ForPreTrainingOutput.replace"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/flax/struct.py#L111" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**updates<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> </div></div> <p data-svelte-h="svelte-5ihtpa">“Returns a new object replacing the specified fields with new values.</p></div></div> <h2 class="relative group"><a id="transformers.Wav2Vec2Model" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 
## Wav2Vec2Model

### class transformers.Wav2Vec2Model

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1456)
hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60">: Wav2Vec2Config</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Model.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Model.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Config">Wav2Vec2Config</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-te9mu1">The bare Wav2Vec2 Model transformer outputting raw hidden-states without any specific head on top. Wav2Vec2 was proposed in <a href="https://arxiv.org/abs/2006.11477" rel="nofollow">wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations</a> by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.</p> <p data-svelte-h="svelte-1e6yl4y">This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel">PreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving etc.).</p> <p data-svelte-h="svelte-68lg8f">This model is a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> sub-class. 
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.Wav2Vec2Model.forward"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>forward</span></h4> <a id="transformers.Wav2Vec2Model.forward" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.Wav2Vec2Model.forward"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1542" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_values<span class="opacity-60">: typing.Optional[torch.Tensor]</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span 
class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">mask_time_indices<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.modeling_outputs.Wav2Vec2BaseModelOutput">transformers.modeling_outputs.Wav2Vec2BaseModelOutput</a> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 5 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2Model.forward.input_values" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2Model.forward.input_values"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_values</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>) — Float values of input raw speech waveform. 
Values can be obtained by loading a `.flac` or `.wav` audio file into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip install soundfile`). To prepare the array into `input_values`, the [AutoProcessor](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoProcessor) should be used for padding and conversion into a tensor of type `torch.FloatTensor`. See [Wav2Vec2Processor.__call__()](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor.__call__) for details.
- **attention_mask** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.

  [What are attention masks?](../glossary#attention-mask)

  > `attention_mask` should only be passed if the corresponding processor has `config.return_attention_mask == True`. For all models whose processor has `config.return_attention_mask == False`, such as [wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base-960h), `attention_mask` should **not** be passed to avoid degraded performance when doing batched inference. For such models, `input_values` should simply be padded with 0 and passed without `attention_mask`. Be aware that these models also yield slightly different results depending on whether `input_values` is padded or not.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attention tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.

**Returns:** [transformers.modeling_outputs.Wav2Vec2BaseModelOutput](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.modeling_outputs.Wav2Vec2BaseModelOutput) or `tuple(torch.FloatTensor)`

A [transformers.modeling_outputs.Wav2Vec2BaseModelOutput](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.modeling_outputs.Wav2Vec2BaseModelOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([Wav2Vec2Config](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Config)) and inputs.

- **last_hidden_state** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`) — Sequence of hidden-states at the output of the last layer of the model.
- **extract_features** (`torch.FloatTensor` of shape `(batch_size, sequence_length, conv_dim[-1])`) — Sequence of extracted feature vectors of the last convolutional layer of the model.
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The [Wav2Vec2Model](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Model) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

```python
>>> from transformers import AutoProcessor, Wav2Vec2Model
>>> import torch
>>> from datasets import load_dataset

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")
>>> model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base-960h")

>>> # audio file is decoded on the fly
>>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> last_hidden_states = outputs.last_hidden_state
>>> list(last_hidden_states.shape)
[1, 292, 768]
```
dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">Wav2Vec2ForCTC</span></span></h3> <a id="transformers.Wav2Vec2ForCTC" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.Wav2Vec2ForCTC"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1875" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60"></span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">target_lang<span class="opacity-60">: typing.Optional[str] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ForCTC.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ForCTC.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" 
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Config">Wav2Vec2Config</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1p0g25y">Wav2Vec2 Model with a <code>language modeling</code> head on top for Connectionist Temporal Classification (CTC). Wav2Vec2 was proposed in <a href="https://arxiv.org/abs/2006.11477" rel="nofollow">wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations</a> by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.</p> <p data-svelte-h="svelte-1e6yl4y">This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel">PreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving etc.).</p> <p data-svelte-h="svelte-68lg8f">This model is a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> sub-class. 
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.Wav2Vec2ForCTC.forward"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>forward</span></h4> <a id="transformers.Wav2Vec2ForCTC.forward" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.Wav2Vec2ForCTC.forward"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1947" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_values<span class="opacity-60">: typing.Optional[torch.Tensor]</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span 
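As the `config` parameter note above describes, instantiating the class directly from a configuration only builds the architecture with randomly initialized weights, whereas `from_pretrained()` also loads the checkpoint weights. A minimal sketch of the two paths (checkpoint name as used in the examples below):

```python
>>> from transformers import Wav2Vec2Config, Wav2Vec2ForCTC

>>> # configuration only: the model is randomly initialized
>>> config = Wav2Vec2Config()
>>> model = Wav2Vec2ForCTC(config)

>>> # pretrained weights are downloaded and loaded into the model
>>> model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
```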
class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">labels<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.CausalLMOutput">transformers.modeling_outputs.CausalLMOutput</a> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 6 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ForCTC.forward.input_values" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ForCTC.forward.input_values"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_values</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>) — Float values of input raw speech waveform. Values can be obtained by loading a <code>.flac</code> or <code>.wav</code> audio file into an array of type <code>List[float]</code> or a <code>numpy.ndarray</code>, <em>e.g.</em> via the soundfile library (<code>pip install soundfile</code>). 
To prepare the array into <code>input_values</code>, the <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoProcessor">AutoProcessor</a> should be used for padding and conversion into a tensor of type <code>torch.FloatTensor</code>. See <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor.__call__">Wav2Vec2Processor.<strong>call</strong>()</a> for details.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ForCTC.forward.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ForCTC.forward.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a></p> <div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400"> <p><code>attention_mask</code> should only be passed if the corresponding processor has <code>config.return_attention_mask == True</code>. For all models whose processor has <code>config.return_attention_mask == False</code>, such as <a href="https://huggingface.co/facebook/wav2vec2-base-960h" rel="nofollow">wav2vec2-base</a>, <code>attention_mask</code> should <strong>not</strong> be passed to avoid degraded performance when doing batched inference. For such models <code>input_values</code> should simply be padded with 0 and passed without <code>attention_mask</code>. 
Be aware that these models also yield slightly different results depending on whether <code>input_values</code> is padded or not.</p> </div></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ForCTC.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ForCTC.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. See <code>attentions</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ForCTC.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ForCTC.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. 
See <code>hidden_states</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ForCTC.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ForCTC.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ForCTC.forward.labels" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ForCTC.forward.labels"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>labels</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, target_length)</code>, <em>optional</em>) — Labels for connectionist temporal classification. Note that <code>target_length</code> has to be smaller or equal to the sequence length of the output logits. Indices are selected in <code>[-100, 0, ..., config.vocab_size - 1]</code>. 
All labels set to <code>-100</code> are ignored (masked), the loss is only computed for labels in <code>[0, ..., config.vocab_size - 1]</code>.</span></span> </li></ul> <div id="transformers.Wav2Vec2ForCTC.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.CausalLMOutput">transformers.modeling_outputs.CausalLMOutput</a> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.CausalLMOutput">transformers.modeling_outputs.CausalLMOutput</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Config">Wav2Vec2Config</a>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<code>torch.FloatTensor</code> of shape <code>(1,)</code>, <em>optional</em>, returned when <code>labels</code> is provided) — Language modeling loss (for next-token prediction).</p> </li> <li> <p><strong>logits</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, config.vocab_size)</code>) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-1ji4sqj">The <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2ForCTC">Wav2Vec2ForCTC</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.Wav2Vec2ForCTC.forward.example" class="header-link block pr-0.5 
Example:

```python
>>> from transformers import AutoProcessor, Wav2Vec2ForCTC
>>> from datasets import load_dataset
>>> import torch

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")
>>> model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

>>> # audio file is decoded on the fly
>>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_ids = torch.argmax(logits, dim=-1)

>>> # transcribe speech
>>> transcription = processor.batch_decode(predicted_ids)
>>> transcription[0]
'MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL'

>>> inputs["labels"] = processor(text=dataset[0]["text"], return_tensors="pt").input_ids

>>> # compute loss
>>> loss = model(**inputs).loss
>>> round(loss.item(), 2)
53.48
```
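The `labels` passed above are a single, unpadded sequence. When batching several transcriptions for CTC training, sequences of different lengths have to be padded, and the padded positions set to `-100` so the loss ignores them, as the `labels` description above notes. A minimal sketch of that preparation (the transcriptions are illustrative):

```python
>>> import torch
>>> from transformers import AutoProcessor

>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")
>>> texts = ["MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES", "A SHORTER SENTENCE"]

>>> # tokenize each transcription into CTC label ids (lists of ints)
>>> label_ids = [processor(text=t).input_ids for t in texts]

>>> # pad to the longest sequence; padded positions are set to -100 so the CTC loss ignores them
>>> max_len = max(len(ids) for ids in label_ids)
>>> labels = torch.full((len(label_ids), max_len), -100, dtype=torch.long)
>>> for i, ids in enumerate(label_ids):
...     labels[i, : len(ids)] = torch.tensor(ids)
```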
d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1209" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">target_lang<span class="opacity-60">: str</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">force_load<span class="opacity-60"> = True</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 10 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ForCTC.load_adapter.target_lang" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ForCTC.load_adapter.target_lang"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>target_lang</strong> (<code>str</code>) — Has to be 
a language id of an existing adapter weight. Adapter weights are stored in the format adapter.<lang>.safetensors or adapter.<lang>.bin</lang></lang></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ForCTC.load_adapter.force_load" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ForCTC.load_adapter.force_load"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>force_load</strong> (<code>bool</code>, defaults to <code>True</code>) — Whether the weights shall be loaded even if <code>target_lang</code> matches <code>self.target_lang</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ForCTC.load_adapter.cache_dir" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ForCTC.load_adapter.cache_dir"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>cache_dir</strong> (<code>Union[str, os.PathLike]</code>, <em>optional</em>) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ForCTC.load_adapter.force_download" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ForCTC.load_adapter.force_download"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 
256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>force_download</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ForCTC.load_adapter.resume_download" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ForCTC.load_adapter.resume_download"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>resume_download</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether or not to delete incompletely received files. 
Will attempt to resume the download if such a file exists.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ForCTC.load_adapter.proxies" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ForCTC.load_adapter.proxies"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>proxies</strong> (<code>Dict[str, str]</code>, <em>optional</em>) — A dictionary of proxy servers to use by protocol or endpoint, e.g., <code>{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}</code>. The proxies are used on each request.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ForCTC.load_adapter.local_files_only(bool," class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ForCTC.load_adapter.local_files_only(bool,"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>local_files_only(<code>bool</code>,</strong> <em>optional</em>, defaults to <code>False</code>) — Whether or not to only look at local files (i.e., do not try to download the model).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ForCTC.load_adapter.token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ForCTC.load_adapter.token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 
11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token</strong> (<code>str</code> or <code>bool</code>, <em>optional</em>) — The token to use as HTTP bearer authorization for remote files. If <code>True</code>, or not specified, will use the token generated when running <code>huggingface-cli login</code> (stored in <code>~/.huggingface</code>).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ForCTC.load_adapter.revision" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ForCTC.load_adapter.revision"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>revision</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"main"</code>) — The specific model version to use. 
It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so <code>revision</code> can be any identifier allowed by git.<p></p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"> <p>To test a pull request you made on the Hub, you can pass `revision=“refs/pr/<pr_number>“.</pr_number></p> </div></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ForCTC.load_adapter.mirror" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ForCTC.load_adapter.mirror"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>mirror</strong> (<code>str</code>, <em>optional</em>) — Mirror source to accelerate downloads in China. If you are from China and have an accessibility problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety. 
Please refer to the mirror site for more information.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1uzynjt">Load a language adapter model from a pre-trained adapter model.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-13hahdn">Activate the special <a href="https://huggingface.co/transformers/installation.html#offline-mode" rel="nofollow">“offline-mode”</a> to use this method in a firewalled environment.</p></div> <div class="relative group rounded-md"><a id="transformers.Wav2Vec2ForCTC.load_adapter.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ForCTC.load_adapter.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-kvfsh7">Examples:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> Wav2Vec2ForCTC, AutoProcessor <span class="hljs-meta">&gt;&gt;&gt; </span>ckpt = <span class="hljs-string">"facebook/mms-1b-all"</span> <span class="hljs-meta">&gt;&gt;&gt; </span>processor = AutoProcessor.from_pretrained(ckpt) <span class="hljs-meta">&gt;&gt;&gt; </span>model = Wav2Vec2ForCTC.from_pretrained(ckpt, target_lang=<span class="hljs-string">"eng"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span 
class="hljs-comment"># set specific language</span> <span class="hljs-meta">&gt;&gt;&gt; </span>processor.tokenizer.set_target_lang(<span class="hljs-string">"spa"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model.load_adapter(<span class="hljs-string">"spa"</span>)</pre></div></div></div></div> <h2 class="relative group"><a id="transformers.Wav2Vec2ForSequenceClassification" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ForSequenceClassification"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1c9jcz1">Wav2Vec2ForSequenceClassification</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.Wav2Vec2ForSequenceClassification"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">Wav2Vec2ForSequenceClassification</span></span></h3> <a id="transformers.Wav2Vec2ForSequenceClassification" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.Wav2Vec2ForSequenceClassification"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 
## Wav2Vec2ForSequenceClassification

### class transformers.Wav2Vec2ForSequenceClassification

`( config )`

Parameters

- **config** ([Wav2Vec2Config](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Config)) -- Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

Wav2Vec2 Model with a sequence classification head on top (a linear layer over the pooled output) for tasks like SUPERB Keyword Spotting.

Wav2Vec2 was proposed in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).

This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
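As the `config` parameter note says, constructing the model directly from a configuration gives the architecture without trained weights. A minimal sketch of the two initialization paths (the checkpoint name is the keyword-spotting one used in the example further below):

```python
>>> from transformers import Wav2Vec2Config, Wav2Vec2ForSequenceClassification

>>> # from a config: architecture only, randomly initialized weights
>>> config = Wav2Vec2Config()
>>> model = Wav2Vec2ForSequenceClassification(config)

>>> # from a pretrained checkpoint: configuration and trained weights are loaded together
>>> model = Wav2Vec2ForSequenceClassification.from_pretrained("superb/wav2vec2-base-superb-ks")
```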
#### forward

`( input_values: typing.Optional[torch.Tensor], attention_mask: typing.Optional[torch.Tensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None, labels: typing.Optional[torch.Tensor] = None )` → [transformers.modeling_outputs.SequenceClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput) or `tuple(torch.FloatTensor)`

Parameters

- **input_values** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`) -- Float values of input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip install soundfile`). To prepare the array into `input_values`, the [AutoProcessor](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoProcessor) should be used for padding and conversion into a tensor of type `torch.FloatTensor`. See [`Wav2Vec2Processor.__call__()`](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor.__call__) for details.
- **attention_mask** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) -- Mask to avoid performing convolution and attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.

  [What are attention masks?](../glossary#attention-mask)

  `attention_mask` should only be passed if the corresponding processor has `config.return_attention_mask == True`. For all models whose processor has `config.return_attention_mask == False`, such as [wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base-960h), `attention_mask` should **not** be passed to avoid degraded performance when doing batched inference. For such models `input_values` should simply be padded with 0 and passed without `attention_mask`. Be aware that these models also yield slightly different results depending on whether `input_values` is padded or not.
- **output_attentions** (`bool`, *optional*) -- Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) -- Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) -- Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.
- **labels** (`torch.LongTensor` of shape `(batch_size,)`, *optional*) -- Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), if `config.num_labels > 1` a classification loss is computed (Cross-Entropy).

Returns: [transformers.modeling_outputs.SequenceClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput) or `tuple(torch.FloatTensor)`

A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([Wav2Vec2Config](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Config)) and inputs.

- **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) -- Classification (or regression if `config.num_labels==1`) loss.
- **logits** (`torch.FloatTensor` of shape `(batch_size, config.num_labels)`) -- Classification (or regression if `config.num_labels==1`) scores (before SoftMax).
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) -- Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) -- Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The [Wav2Vec2ForSequenceClassification](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2ForSequenceClassification) forward method overrides the `__call__` special method.

Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example:

```python
>>> from transformers import AutoFeatureExtractor, Wav2Vec2ForSequenceClassification
>>> from datasets import load_dataset
>>> import torch

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("superb/wav2vec2-base-superb-ks")
>>> model = Wav2Vec2ForSequenceClassification.from_pretrained("superb/wav2vec2-base-superb-ks")

>>> # audio file is decoded on the fly
>>> inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_class_ids = torch.argmax(logits, dim=-1).item()
>>> predicted_label = model.config.id2label[predicted_class_ids]
>>> predicted_label
'_unknown_'

>>> # compute loss - target_label is e.g. "down"
>>> target_label = model.config.id2label[0]
>>> inputs["labels"] = torch.tensor([model.config.label2id[target_label]])
>>> loss = model(**inputs).loss
>>> round(loss.item(), 2)
6.54
```
256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L2156" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ForAudioFrameClassification.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ForAudioFrameClassification.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Config">Wav2Vec2Config</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. 
Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1jplo9i">Wav2Vec2 Model with a frame classification head on top for tasks like Speaker Diarization.</p> <p data-svelte-h="svelte-q3avhg">Wav2Vec2 was proposed in <a href="https://arxiv.org/abs/2006.11477" rel="nofollow">wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations</a> by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.</p> <p data-svelte-h="svelte-1e6yl4y">This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel">PreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving etc.).</p> <p data-svelte-h="svelte-68lg8f">This model is a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.Wav2Vec2ForAudioFrameClassification.forward"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>forward</span></h4> <a id="transformers.Wav2Vec2ForAudioFrameClassification.forward" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.Wav2Vec2ForAudioFrameClassification.forward"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 
0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L2200" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_values<span class="opacity-60">: typing.Optional[torch.Tensor]</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">labels<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.TokenClassifierOutput">transformers.modeling_outputs.TokenClassifierOutput</a> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 6 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ForAudioFrameClassification.forward.input_values" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" 
href="#transformers.Wav2Vec2ForAudioFrameClassification.forward.input_values"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_values</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>) — Float values of input raw speech waveform. Values can be obtained by loading a <code>.flac</code> or <code>.wav</code> audio file into an array of type <code>List[float]</code> or a <code>numpy.ndarray</code>, <em>e.g.</em> via the soundfile library (<code>pip install soundfile</code>). To prepare the array into <code>input_values</code>, the <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoProcessor">AutoProcessor</a> should be used for padding and conversion into a tensor of type <code>torch.FloatTensor</code>. See <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor.__call__">Wav2Vec2Processor.<strong>call</strong>()</a> for details.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ForAudioFrameClassification.forward.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ForAudioFrameClassification.forward.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing convolution and attention on padding token indices. 
Mask values selected in `[0, 1]`:

  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.

  [What are attention masks?](../glossary#attention-mask)

  `attention_mask` should only be passed if the corresponding processor has `config.return_attention_mask == True`. For all models whose processor has `config.return_attention_mask == False`, such as [wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base-960h), `attention_mask` should **not** be passed to avoid degraded performance when doing batched inference. For such models `input_values` should simply be padded with 0 and passed without `attention_mask` (see the short sketch after the return description below). Be aware that these models also yield slightly different results depending on whether `input_values` is padded or not.

- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.
- **labels** (`torch.LongTensor` of shape `(batch_size,)`, *optional*) — Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss); if `config.num_labels > 1` a classification loss is computed (Cross-Entropy).

**Returns**

[transformers.modeling_outputs.TokenClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.TokenClassifierOutput) or `tuple(torch.FloatTensor)`

A [transformers.modeling_outputs.TokenClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.TokenClassifierOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([Wav2Vec2Config](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Config)) and inputs.

- **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) — Classification loss.
- **logits** (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.num_labels)`) — Classification scores (before SoftMax).
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
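The note on `attention_mask` above can be checked directly on a checkpoint before batching. Below is a minimal sketch (the checkpoint and the dummy waveforms are purely illustrative): when the feature extractor's `return_attention_mask` flag is `False`, batched inputs are simply zero-padded and no `attention_mask` is produced or passed.

```python
>>> import numpy as np
>>> from transformers import AutoFeatureExtractor

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")
>>> feature_extractor.return_attention_mask
False

>>> # two dummy waveforms of different lengths; the shorter one is zero-padded
>>> waveforms = [np.zeros(16000, dtype=np.float32), np.zeros(8000, dtype=np.float32)]
>>> inputs = feature_extractor(waveforms, sampling_rate=16000, padding=True, return_tensors="pt")
>>> list(inputs.keys())
['input_values']
```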
data-svelte-h="svelte-1lio74l">The <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2ForAudioFrameClassification">Wav2Vec2ForAudioFrameClassification</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.Wav2Vec2ForAudioFrameClassification.forward.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ForAudioFrameClassification.forward.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoFeatureExtractor, Wav2Vec2ForAudioFrameClassification <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> datasets <span class="hljs-keyword">import</span> load_dataset <span class="hljs-meta">&gt;&gt;&gt; </span><span 
class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span>dataset = load_dataset(<span class="hljs-string">"hf-internal-testing/librispeech_asr_demo"</span>, <span class="hljs-string">"clean"</span>, split=<span class="hljs-string">"validation"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>dataset = dataset.sort(<span class="hljs-string">"id"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>sampling_rate = dataset.features[<span class="hljs-string">"audio"</span>].sampling_rate <span class="hljs-meta">&gt;&gt;&gt; </span>feature_extractor = AutoFeatureExtractor.from_pretrained(<span class="hljs-string">"anton-l/wav2vec2-base-superb-sd"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = Wav2Vec2ForAudioFrameClassification.from_pretrained(<span class="hljs-string">"anton-l/wav2vec2-base-superb-sd"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># audio file is decoded on the fly</span> <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = feature_extractor(dataset[<span class="hljs-number">0</span>][<span class="hljs-string">"audio"</span>][<span class="hljs-string">"array"</span>], return_tensors=<span class="hljs-string">"pt"</span>, sampling_rate=sampling_rate) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">with</span> torch.no_grad(): <span class="hljs-meta">... </span> logits = model(**inputs).logits <span class="hljs-meta">&gt;&gt;&gt; </span>probabilities = torch.sigmoid(logits[<span class="hljs-number">0</span>]) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># labels is a one-hot array of shape (num_frames, num_speakers)</span> <span class="hljs-meta">&gt;&gt;&gt; </span>labels = (probabilities &gt; <span class="hljs-number">0.5</span>).long() <span class="hljs-meta">&gt;&gt;&gt; </span>labels[<span class="hljs-number">0</span>].tolist() [<span class="hljs-number">0</span>, <span class="hljs-number">0</span>]</pre></div></div></div></div> <h2 class="relative group"><a id="transformers.Wav2Vec2ForXVector" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ForXVector"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-7dzf9v">Wav2Vec2ForXVector</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.Wav2Vec2ForXVector"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 
## Wav2Vec2ForXVector

### class transformers.Wav2Vec2ForXVector

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L2317)

( config )

**Parameters**

- **config** ([Wav2Vec2Config](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Config)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

Wav2Vec2 Model with an XVector feature extraction head on top for tasks like Speaker Verification.

Wav2Vec2 was proposed in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).

This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
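To make the `config` parameter above concrete: instantiating the class from a configuration gives a model with randomly initialized weights, whereas `from_pretrained()` also loads trained weights. A minimal sketch, using the same checkpoint as the example further below:

```python
>>> from transformers import Wav2Vec2Config, Wav2Vec2ForXVector

>>> # architecture only, randomly initialized weights
>>> model = Wav2Vec2ForXVector(Wav2Vec2Config())

>>> # architecture plus trained weights loaded from the Hub
>>> model = Wav2Vec2ForXVector.from_pretrained("anton-l/wav2vec2-base-superb-sv")
```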
dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">labels<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.XVectorOutput">transformers.modeling_outputs.XVectorOutput</a> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 6 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ForXVector.forward.input_values" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ForXVector.forward.input_values"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_values</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>) — Float values of input raw speech waveform. 
Values can be obtained by loading a <code>.flac</code> or <code>.wav</code> audio file into an array of type <code>List[float]</code> or a <code>numpy.ndarray</code>, <em>e.g.</em> via the soundfile library (<code>pip install soundfile</code>). To prepare the array into <code>input_values</code>, the <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoProcessor">AutoProcessor</a> should be used for padding and conversion into a tensor of type <code>torch.FloatTensor</code>. See <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor.__call__">Wav2Vec2Processor.<strong>call</strong>()</a> for details.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ForXVector.forward.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ForXVector.forward.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a></p> <div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400"> <p><code>attention_mask</code> should only be passed if the corresponding processor has <code>config.return_attention_mask == True</code>. For all models whose processor has <code>config.return_attention_mask == False</code>, such as <a href="https://huggingface.co/facebook/wav2vec2-base-960h" rel="nofollow">wav2vec2-base</a>, <code>attention_mask</code> should <strong>not</strong> be passed to avoid degraded performance when doing batched inference. For such models <code>input_values</code> should simply be padded with 0 and passed without <code>attention_mask</code>. 
Be aware that these models also yield slightly different results depending on whether <code>input_values</code> is padded or not.</p> </div></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ForXVector.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ForXVector.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. See <code>attentions</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ForXVector.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ForXVector.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. 
See <code>hidden_states</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ForXVector.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ForXVector.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ForXVector.forward.labels" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ForXVector.forward.labels"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>labels</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size,)</code>, <em>optional</em>) — Labels for computing the sequence classification/regression loss. Indices should be in <code>[0, ..., config.num_labels - 1]</code>. 
If <code>config.num_labels == 1</code> a regression loss is computed (Mean-Square loss), If <code>config.num_labels &gt; 1</code> a classification loss is computed (Cross-Entropy).</span></span> </li></ul> <div id="transformers.Wav2Vec2ForXVector.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.XVectorOutput">transformers.modeling_outputs.XVectorOutput</a> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.XVectorOutput">transformers.modeling_outputs.XVectorOutput</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Config">Wav2Vec2Config</a>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<code>torch.FloatTensor</code> of shape <code>(1,)</code>, <em>optional</em>, returned when <code>labels</code> is provided) — Classification loss.</p> </li> <li> <p><strong>logits</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, config.xvector_output_dim)</code>) — Classification hidden states before AMSoftmax.</p> </li> <li> <p><strong>embeddings</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, config.xvector_output_dim)</code>) — Utterance embeddings used for vector similarity-based retrieval.</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-19c4575">The <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2ForXVector">Wav2Vec2ForXVector</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div 
class="relative group rounded-md"><a id="transformers.Wav2Vec2ForXVector.forward.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ForXVector.forward.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoFeatureExtractor, Wav2Vec2ForXVector <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> datasets <span class="hljs-keyword">import</span> load_dataset <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span>dataset = load_dataset(<span class="hljs-string">"hf-internal-testing/librispeech_asr_demo"</span>, <span class="hljs-string">"clean"</span>, split=<span class="hljs-string">"validation"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>dataset = dataset.sort(<span class="hljs-string">"id"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>sampling_rate = dataset.features[<span class="hljs-string">"audio"</span>].sampling_rate <span class="hljs-meta">&gt;&gt;&gt; </span>feature_extractor = AutoFeatureExtractor.from_pretrained(<span class="hljs-string">"anton-l/wav2vec2-base-superb-sv"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = Wav2Vec2ForXVector.from_pretrained(<span class="hljs-string">"anton-l/wav2vec2-base-superb-sv"</span>) <span 
class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># audio file is decoded on the fly</span> <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = feature_extractor( <span class="hljs-meta">... </span> [d[<span class="hljs-string">"array"</span>] <span class="hljs-keyword">for</span> d <span class="hljs-keyword">in</span> dataset[:<span class="hljs-number">2</span>][<span class="hljs-string">"audio"</span>]], sampling_rate=sampling_rate, return_tensors=<span class="hljs-string">"pt"</span>, padding=<span class="hljs-literal">True</span> <span class="hljs-meta">... </span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">with</span> torch.no_grad(): <span class="hljs-meta">... </span> embeddings = model(**inputs).embeddings <span class="hljs-meta">&gt;&gt;&gt; </span>embeddings = torch.nn.functional.normalize(embeddings, dim=-<span class="hljs-number">1</span>).cpu() <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># the resulting embeddings can be used for cosine similarity-based retrieval</span> <span class="hljs-meta">&gt;&gt;&gt; </span>cosine_sim = torch.nn.CosineSimilarity(dim=-<span class="hljs-number">1</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>similarity = cosine_sim(embeddings[<span class="hljs-number">0</span>], embeddings[<span class="hljs-number">1</span>]) <span class="hljs-meta">&gt;&gt;&gt; </span>threshold = <span class="hljs-number">0.7</span> <span class="hljs-comment"># the optimal threshold is dataset-dependent</span> <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">if</span> similarity &lt; threshold: <span class="hljs-meta">... </span> <span class="hljs-built_in">print</span>(<span class="hljs-string">"Speakers are not the same!"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-built_in">round</span>(similarity.item(), <span class="hljs-number">2</span>) <span class="hljs-number">0.98</span></pre></div></div></div></div> <h2 class="relative group"><a id="transformers.Wav2Vec2ForPreTraining" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ForPreTraining"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1rd46un">Wav2Vec2ForPreTraining</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.Wav2Vec2ForPreTraining"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 
## Wav2Vec2ForPreTraining

### class transformers.Wav2Vec2ForPreTraining

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1604)

( config: Wav2Vec2Config )

**Parameters**

- **config** ([Wav2Vec2Config](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Config)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

Wav2Vec2 Model with a quantizer and `VQ` head on top. Wav2Vec2 was proposed in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).

This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
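As a rough sketch of how this head is typically driven during contrastive pretraining, the masked time steps and negative samples can be prepared before calling `forward()`. The snippet below assumes the internal helpers `_compute_mask_indices` and `_sample_negative_indices` from `transformers.models.wav2vec2.modeling_wav2vec2` and uses a dummy one-second waveform with an illustrative checkpoint; see the parameter descriptions that follow for the exact meaning of `mask_time_indices` and `sampled_negative_indices`.

```python
>>> import torch
>>> from transformers import AutoFeatureExtractor, Wav2Vec2ForPreTraining
>>> from transformers.models.wav2vec2.modeling_wav2vec2 import _compute_mask_indices, _sample_negative_indices

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")
>>> model = Wav2Vec2ForPreTraining.from_pretrained("facebook/wav2vec2-base")

>>> # one second of dummy audio at 16 kHz
>>> inputs = feature_extractor(torch.randn(16000).numpy(), sampling_rate=16000, return_tensors="pt")

>>> # length of the feature sequence produced by the convolutional feature encoder
>>> batch_size, raw_len = inputs["input_values"].shape
>>> seq_len = model._get_feat_extract_output_lengths(raw_len).item()

>>> # sample which time steps to mask and which negatives to contrast against
>>> mask_time_indices = _compute_mask_indices(shape=(batch_size, seq_len), mask_prob=0.2, mask_length=2)
>>> sampled_negative_indices = _sample_negative_indices(
...     features_shape=(batch_size, seq_len),
...     num_negatives=model.config.num_negatives,
...     mask_time_indices=mask_time_indices,
... )
>>> mask_time_indices = torch.tensor(mask_time_indices, dtype=torch.bool)
>>> sampled_negative_indices = torch.tensor(sampled_negative_indices, dtype=torch.long)

>>> # inference-only sketch; drop no_grad() for actual pretraining
>>> with torch.no_grad():
...     outputs = model(**inputs, mask_time_indices=mask_time_indices, sampled_negative_indices=sampled_negative_indices)

>>> # contrastive + diversity loss is returned once negatives are provided
>>> outputs.loss is not None
True
```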
dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">mask_time_indices<span class="opacity-60">: typing.Optional[torch.BoolTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">sampled_negative_indices<span class="opacity-60">: typing.Optional[torch.BoolTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForPreTrainingOutput">transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForPreTrainingOutput</a> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 7 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ForPreTraining.forward.input_values" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ForPreTraining.forward.input_values"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_values</strong> 
(<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>) — Float values of input raw speech waveform. Values can be obtained by loading a <code>.flac</code> or <code>.wav</code> audio file into an array of type <code>List[float]</code> or a <code>numpy.ndarray</code>, <em>e.g.</em> via the soundfile library (<code>pip install soundfile</code>). To prepare the array into <code>input_values</code>, the <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoProcessor">AutoProcessor</a> should be used for padding and conversion into a tensor of type <code>torch.FloatTensor</code>. See <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor.__call__">Wav2Vec2Processor.<strong>call</strong>()</a> for details.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ForPreTraining.forward.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ForPreTraining.forward.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a></p> <div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400"> <p><code>attention_mask</code> should only be passed if the corresponding processor has <code>config.return_attention_mask == True</code>. For all models whose processor has <code>config.return_attention_mask == False</code>, such as <a href="https://huggingface.co/facebook/wav2vec2-base-960h" rel="nofollow">wav2vec2-base</a>, <code>attention_mask</code> should <strong>not</strong> be passed to avoid degraded performance when doing batched inference. For such models <code>input_values</code> should simply be padded with 0 and passed without <code>attention_mask</code>. 
Be aware that these models also yield slightly different results depending on whether <code>input_values</code> is padded or not.</p> </div></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ForPreTraining.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ForPreTraining.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. See <code>attentions</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ForPreTraining.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ForPreTraining.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. 
See <code>hidden_states</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ForPreTraining.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ForPreTraining.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ForPreTraining.forward.mask_time_indices" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ForPreTraining.forward.mask_time_indices"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>mask_time_indices</strong> (<code>torch.BoolTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Indices to mask extracted features for contrastive loss. 
When in training mode, model learns to predict masked extracted features in <em>config.proj_codevector_dim</em> space.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ForPreTraining.forward.sampled_negative_indices" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ForPreTraining.forward.sampled_negative_indices"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>sampled_negative_indices</strong> (<code>torch.BoolTensor</code> of shape <code>(batch_size, sequence_length, num_negatives)</code>, <em>optional</em>) — Indices indicating which quantized target vectors are used as negative sampled vectors in contrastive loss. Required input for pre-training.</span></span> </li></ul> <div id="transformers.Wav2Vec2ForPreTraining.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForPreTrainingOutput">transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForPreTrainingOutput</a> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForPreTrainingOutput">transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForPreTrainingOutput</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Config">Wav2Vec2Config</a>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<em>optional</em>, returned when <code>sample_negative_indices</code> are passed, <code>torch.FloatTensor</code> of shape <code>(1,)</code>) — Total loss as the sum of the contrastive loss (L_m) and the diversity loss (L_d) as stated in the <a href="https://arxiv.org/pdf/2006.11477.pdf" rel="nofollow">official paper</a> . 
(classification) loss.</p> </li> <li> <p><strong>projected_states</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, config.proj_codevector_dim)</code>) — Hidden-states of the model projected to <em>config.proj_codevector_dim</em> that can be used to predict the masked projected quantized states.</p> </li> <li> <p><strong>projected_quantized_states</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, config.proj_codevector_dim)</code>) — Quantized extracted feature vectors projected to <em>config.proj_codevector_dim</em> representing the positive target vectors for contrastive loss.</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> <li> <p><strong>contrastive_loss</strong> (<em>optional</em>, returned when <code>sample_negative_indices</code> are passed, <code>torch.FloatTensor</code> of shape <code>(1,)</code>) — The contrastive loss (L_m) as stated in the <a href="https://arxiv.org/pdf/2006.11477.pdf" rel="nofollow">official paper</a> .</p> </li> <li> <p><strong>diversity_loss</strong> (<em>optional</em>, returned when <code>sample_negative_indices</code> are passed, <code>torch.FloatTensor</code> of shape <code>(1,)</code>) — The diversity loss (L_d) as stated in the <a href="https://arxiv.org/pdf/2006.11477.pdf" rel="nofollow">official paper</a> .</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-1bdeu9p">The <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2ForPreTraining">Wav2Vec2ForPreTraining</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.Wav2Vec2ForPreTraining.forward.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ForPreTraining.forward.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" 
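For reference, the `loss`, `contrastive_loss` and `diversity_loss` entries above are related as in the wav2vec 2.0 paper, which combines the contrastive and diversity terms into a weighted sum (α is the diversity-loss weighting hyperparameter; this summarizes the paper's formulation, not the exact code path):

```latex
\mathcal{L} = \mathcal{L}_m + \alpha \, \mathcal{L}_d
```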
role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoFeatureExtractor, Wav2Vec2ForPreTraining <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers.models.wav2vec2.modeling_wav2vec2 <span class="hljs-keyword">import</span> _compute_mask_indices, _sample_negative_indices <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> datasets <span class="hljs-keyword">import</span> load_dataset <span class="hljs-meta">&gt;&gt;&gt; </span>feature_extractor = AutoFeatureExtractor.from_pretrained(<span class="hljs-string">"facebook/wav2vec2-base"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = Wav2Vec2ForPreTraining.from_pretrained(<span class="hljs-string">"facebook/wav2vec2-base"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>ds = load_dataset(<span class="hljs-string">"hf-internal-testing/librispeech_asr_dummy"</span>, <span class="hljs-string">"clean"</span>, split=<span class="hljs-string">"validation"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>input_values = feature_extractor(ds[<span class="hljs-number">0</span>][<span class="hljs-string">"audio"</span>][<span class="hljs-string">"array"</span>], return_tensors=<span class="hljs-string">"pt"</span>).input_values <span class="hljs-comment"># Batch size 1</span> <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># compute masked indices</span> <span class="hljs-meta">&gt;&gt;&gt; </span>batch_size, 
raw_sequence_length = input_values.shape <span class="hljs-meta">&gt;&gt;&gt; </span>sequence_length = model._get_feat_extract_output_lengths(raw_sequence_length).item() <span class="hljs-meta">&gt;&gt;&gt; </span>mask_time_indices = _compute_mask_indices( <span class="hljs-meta">... </span> shape=(batch_size, sequence_length), mask_prob=<span class="hljs-number">0.2</span>, mask_length=<span class="hljs-number">2</span> <span class="hljs-meta">... </span>) <span class="hljs-meta">&gt;&gt;&gt; </span>sampled_negative_indices = _sample_negative_indices( <span class="hljs-meta">... </span> features_shape=(batch_size, sequence_length), <span class="hljs-meta">... </span> num_negatives=model.config.num_negatives, <span class="hljs-meta">... </span> mask_time_indices=mask_time_indices, <span class="hljs-meta">... </span>) <span class="hljs-meta">&gt;&gt;&gt; </span>mask_time_indices = torch.tensor(data=mask_time_indices, device=input_values.device, dtype=torch.long) <span class="hljs-meta">&gt;&gt;&gt; </span>sampled_negative_indices = torch.tensor( <span class="hljs-meta">... </span> data=sampled_negative_indices, device=input_values.device, dtype=torch.long <span class="hljs-meta">... </span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">with</span> torch.no_grad(): <span class="hljs-meta">... </span> outputs = model(input_values, mask_time_indices=mask_time_indices) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># compute cosine similarity between predicted (=projected_states) and target (=projected_quantized_states)</span> <span class="hljs-meta">&gt;&gt;&gt; </span>cosine_sim = torch.cosine_similarity(outputs.projected_states, outputs.projected_quantized_states, dim=-<span class="hljs-number">1</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># show that cosine similarity is much higher than random</span> <span class="hljs-meta">&gt;&gt;&gt; </span>cosine_sim[mask_time_indices.to(torch.<span class="hljs-built_in">bool</span>)].mean() &gt; <span class="hljs-number">0.5</span> tensor(<span class="hljs-literal">True</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># for contrastive loss training model should be put into train mode</span> <span class="hljs-meta">&gt;&gt;&gt; </span>model = model.train() <span class="hljs-meta">&gt;&gt;&gt; </span>loss = model( <span class="hljs-meta">... </span> input_values, mask_time_indices=mask_time_indices, sampled_negative_indices=sampled_negative_indices <span class="hljs-meta">... 
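As the `attention_mask` note above explains, checkpoints whose processor has `config.return_attention_mask == False` (such as `facebook/wav2vec2-base`) expect batched inputs to simply be zero-padded, with no `attention_mask` passed. A minimal sketch of that preparation, using the base `Wav2Vec2Model` for brevity (the same applies to `Wav2Vec2ForPreTraining`):

```python
>>> from transformers import AutoFeatureExtractor, Wav2Vec2Model
>>> from datasets import load_dataset

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")
>>> model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")

>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")

>>> # padding=True zero-pads the shorter waveform up to the longest one in the batch;
>>> # this checkpoint's feature extractor does not return an attention_mask
>>> inputs = feature_extractor(
...     [ds[0]["audio"]["array"], ds[1]["audio"]["array"]],
...     sampling_rate=16_000,
...     padding=True,
...     return_tensors="pt",
... )

>>> # note: no attention_mask is passed
>>> outputs = model(inputs.input_values)
```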
## TFWav2Vec2Model

### class transformers.TFWav2Vec2Model

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py#L1353)

( *args **kwargs )

Parameters

- **config** ([Wav2Vec2Config](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Config)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

The bare TFWav2Vec2 Model transformer outputting raw hidden-states without any specific head on top.

This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

TensorFlow models and layers in `transformers` accept two formats as input:

- having all inputs as keyword arguments (like PyTorch models), or
- having all inputs as a list, tuple or dict in the first positional argument.

The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first positional argument (a minimal sketch of these call conventions is shown after this note):

- a single Tensor with `input_values` only and nothing else: `model(input_values)`
- a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_values, attention_mask])` or `model([input_values, attention_mask, token_type_ids])`
- a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_values": input_values, "token_type_ids": token_type_ids})`

Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/), you don't need to worry about any of this, as you can just pass inputs like you would to any other Python function!
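For illustration, a minimal sketch of the call conventions described in the note above (the model and inputs are prepared as in the full example further below; this snippet only shows the different ways of passing them):

```python
>>> from transformers import AutoProcessor, TFWav2Vec2Model
>>> from datasets import load_dataset

>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")
>>> model = TFWav2Vec2Model.from_pretrained("facebook/wav2vec2-base-960h")

>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> input_values = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="tf").input_values

>>> # keyword argument, like PyTorch models
>>> outputs = model(input_values=input_values)
>>> # list in the first positional argument, in docstring order
>>> outputs = model([input_values])
>>> # dict keyed by input name
>>> outputs = model({"input_values": input_values})
```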
#### call

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py#L1359)

( input_values: tf.Tensor, attention_mask: tf.Tensor | None = None, token_type_ids: tf.Tensor | None = None, position_ids: tf.Tensor | None = None, head_mask: tf.Tensor | None = None, inputs_embeds: tf.Tensor | None = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, training: bool = False ) → [transformers.modeling_tf_outputs.TFBaseModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFBaseModelOutput) or `tuple(tf.Tensor)`

Parameters

- **input_values** (`np.ndarray`, `tf.Tensor`, `List[tf.Tensor]`, `Dict[str, tf.Tensor]` or `Dict[str, np.ndarray]`, each example of shape `({0})`) — Float values of the input raw speech waveform. To prepare the array into `input_values`, the [AutoProcessor](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoProcessor) should be used for padding and conversion into tensors. See `Wav2Vec2Processor.__call__()` and the example below for details.
- **attention_mask** (`np.ndarray` or `tf.Tensor` of shape `({0})`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask)
- **token_type_ids** (`np.ndarray` or `tf.Tensor` of shape `({0})`, *optional*) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: 0 corresponds to a *sentence A* token, 1 corresponds to a *sentence B* token. [What are token type IDs?](../glossary#token-type-ids)
- **position_ids** (`np.ndarray` or `tf.Tensor` of shape `({0})`, *optional*) — Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids)
- **head_mask** (`np.ndarray` or `tf.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: 1 indicates the head is **not masked**, 0 indicates the head is **masked**.
- **inputs_embeds** (`np.ndarray` or `tf.Tensor` of shape `({0}, hidden_size)`, *optional*) — Optionally, instead of passing `input_values` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_values` indices into associated vectors than the model's internal embedding lookup matrix.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the config will be used instead.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the config will be used instead.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple. This argument can be used in eager mode; in graph mode the value will always be set to `True`.
- **training** (`bool`, *optional*, defaults to `False`) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).

Returns: [transformers.modeling_tf_outputs.TFBaseModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFBaseModelOutput) or `tuple(tf.Tensor)`

A [TFBaseModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFBaseModelOutput) or a tuple of `tf.Tensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([Wav2Vec2Config](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Config)) and inputs:

- **last_hidden_state** (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`) — Sequence of hidden-states at the output of the last layer of the model.
- **hidden_states** (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden states of the model at the output of each layer plus the initial embedding outputs.
- **attentions** (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The [TFWav2Vec2Model](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.TFWav2Vec2Model) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

```python
>>> from transformers import AutoProcessor, TFWav2Vec2Model
>>> from datasets import load_dataset
>>> import soundfile as sf

>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")
>>> model = TFWav2Vec2Model.from_pretrained("facebook/wav2vec2-base-960h")


>>> def map_to_array(batch):
...     speech, _ = sf.read(batch["file"])
...     batch["speech"] = speech
...     return batch


>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> ds = ds.map(map_to_array)

>>> input_values = processor(ds["speech"][0], return_tensors="tf").input_values  # Batch size 1
>>> hidden_states = model(input_values).last_hidden_state
```
class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">TFWav2Vec2ForSequenceClassification</span></span></h3> <a id="transformers.TFWav2Vec2ForSequenceClassification" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.TFWav2Vec2ForSequenceClassification"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py#L1576" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">*args<span class="opacity-60"></span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> </div></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.TFWav2Vec2ForSequenceClassification.call"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" 
fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>call</span></h4> <a id="transformers.TFWav2Vec2ForSequenceClassification.call" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.TFWav2Vec2ForSequenceClassification.call"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py#L1617" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_values<span class="opacity-60">: tf.Tensor</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: tf.Tensor | None = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: bool | None = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: bool | None = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: bool | None = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">labels<span class="opacity-60">: tf.Tensor | None = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black 
## TFWav2Vec2ForCTC

**class transformers.TFWav2Vec2ForCTC** ([source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py#L1427))

`( *args, **kwargs )`

**Parameters**

- **config** ([Wav2Vec2Config](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Config)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

TFWav2Vec2 Model with a `language modeling` head on top for Connectionist Temporal Classification (CTC).

This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)

This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

TensorFlow models and layers in `transformers` accept two formats as input:

- having all inputs as keyword arguments (like PyTorch models), or
- having all inputs as a list, tuple or dict in the first positional argument.

The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just pass your inputs and labels in any format that `model.fit()` supports. If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first positional argument (see the sketch below):

- a single Tensor with `input_values` only and nothing else: `model(input_values)`
- a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_values, attention_mask])` or `model([input_values, attention_mask, token_type_ids])`
- a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_values": input_values, "token_type_ids": token_type_ids})`

Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) you don't need to worry about any of this, as you can pass inputs like you would to any other Python function.
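The three call styles above, as a minimal sketch (the checkpoint is the one used in the examples on this page, and the waveform is a dummy tensor):

```python
>>> import tensorflow as tf
>>> from transformers import TFWav2Vec2ForCTC

>>> model = TFWav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
>>> input_values = tf.random.normal((1, 16000))  # dummy raw waveform, batch size 1
>>> attention_mask = tf.ones((1, 16000), dtype=tf.int32)

>>> # equivalent ways of gathering the inputs in the first positional argument
>>> out_tensor = model(input_values)  # a single tensor with input_values only
>>> out_list = model([input_values, attention_mask])  # a list, in docstring order
>>> out_dict = model({"input_values": input_values, "attention_mask": attention_mask})  # a dict keyed by input names
```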
fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>call</span></h4> <a id="transformers.TFWav2Vec2ForCTC.call" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.TFWav2Vec2ForCTC.call"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py#L1454" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_values<span class="opacity-60">: tf.Tensor</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_type_ids<span class="opacity-60">: tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">position_ids<span class="opacity-60">: tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_mask<span class="opacity-60">: tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">inputs_embeds<span class="opacity-60">: tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white 
dark:hover:text-black">output_attentions<span class="opacity-60">: Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">labels<span class="opacity-60">: tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">training<span class="opacity-60">: Optional[bool] = False</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFCausalLMOutput">transformers.modeling_tf_outputs.TFCausalLMOutput</a> or <code>tuple(tf.Tensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 11 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFWav2Vec2ForCTC.call.input_values" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFWav2Vec2ForCTC.call.input_values"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_values</strong> (<code>np.ndarray</code>, <code>tf.Tensor</code>, <code>List[tf.Tensor]</code> <code>Dict[str, tf.Tensor]</code> or <code>Dict[str, np.ndarray]</code> and each example must have the shape <code>({0})</code>) — Indices of input sequence tokens in the vocabulary.<p></p> <p>Indices can be obtained using <a 
href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. See <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> and <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> for details.</p> <p><a href="../glossary#input-ids">What are input IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFWav2Vec2ForCTC.call.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFWav2Vec2ForCTC.call.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>np.ndarray</code> or <code>tf.Tensor</code> of shape <code>({0})</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFWav2Vec2ForCTC.call.token_type_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFWav2Vec2ForCTC.call.token_type_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_type_ids</strong> (<code>np.ndarray</code> or <code>tf.Tensor</code> of shape <code>({0})</code>, <em>optional</em>) — Segment token indices to indicate first and second portions of the inputs. 
Indices are selected in <code>[0, 1]</code>:<p></p> <ul> <li>0 corresponds to a <em>sentence A</em> token,</li> <li>1 corresponds to a <em>sentence B</em> token.</li> </ul> <p><a href="../glossary#token-type-ids">What are token type IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFWav2Vec2ForCTC.call.position_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFWav2Vec2ForCTC.call.position_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>position_ids</strong> (<code>np.ndarray</code> or <code>tf.Tensor</code> of shape <code>({0})</code>, <em>optional</em>) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range <code>[0, config.max_position_embeddings - 1]</code>.<p></p> <p><a href="../glossary#position-ids">What are position IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFWav2Vec2ForCTC.call.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFWav2Vec2ForCTC.call.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>head_mask</strong> (<code>np.ndarray</code> or <code>tf.Tensor</code> of shape <code>(num_heads,)</code> or <code>(num_layers, num_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the self-attention modules. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFWav2Vec2ForCTC.call.inputs_embeds" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFWav2Vec2ForCTC.call.inputs_embeds"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>inputs_embeds</strong> (<code>np.ndarray</code> or <code>tf.Tensor</code> of shape <code>({0}, hidden_size)</code>, <em>optional</em>) — Optionally, instead of passing <code>input_values</code> you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert <code>input_values</code> indices into associated vectors than the model’s internal embedding lookup matrix.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFWav2Vec2ForCTC.call.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFWav2Vec2ForCTC.call.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. See <code>attentions</code> under returned tensors for more detail. 
This argument can be used only in eager mode, in graph mode the value in the config will be used instead.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFWav2Vec2ForCTC.call.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFWav2Vec2ForCTC.call.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. See <code>hidden_states</code> under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFWav2Vec2ForCTC.call.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFWav2Vec2ForCTC.call.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple. 
This argument can be used in eager mode, in graph mode the value will always be set to True.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFWav2Vec2ForCTC.call.training" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFWav2Vec2ForCTC.call.training"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>training</strong> (<code>bool</code>, <em>optional</em>, defaults to `False“) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFWav2Vec2ForCTC.call.labels" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFWav2Vec2ForCTC.call.labels"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>labels</strong> (<code>tf.Tensor</code> or <code>np.ndarray</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Labels for computing the masked language modeling loss. 
Indices should be in <code>[-100, 0, ..., config.vocab_size]</code> (see <code>input_values</code> docstring) Tokens with indices set to <code>-100</code> are ignored (masked), the loss is only computed for the tokens with labels in <code>[0, ..., config.vocab_size]</code></span></span> </li></ul> <div id="transformers.TFWav2Vec2ForCTC.call.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFCausalLMOutput">transformers.modeling_tf_outputs.TFCausalLMOutput</a> or <code>tuple(tf.Tensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFCausalLMOutput">transformers.modeling_tf_outputs.TFCausalLMOutput</a> or a tuple of <code>tf.Tensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Config">Wav2Vec2Config</a>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<code>tf.Tensor</code> of shape <code>(n,)</code>, <em>optional</em>, where n is the number of non-masked labels, returned when <code>labels</code> is provided) — Language modeling loss (for next-token prediction).</p> </li> <li> <p><strong>logits</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, sequence_length, config.vocab_size)</code>) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>tf.Tensor</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>tf.Tensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-71nslr">The <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.TFWav2Vec2ForCTC">TFWav2Vec2ForCTC</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group 
rounded-md"><a id="transformers.TFWav2Vec2ForCTC.call.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFWav2Vec2ForCTC.call.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> tensorflow <span class="hljs-keyword">as</span> tf <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoProcessor, TFWav2Vec2ForCTC <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> datasets <span class="hljs-keyword">import</span> load_dataset <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> soundfile <span class="hljs-keyword">as</span> sf <span class="hljs-meta">&gt;&gt;&gt; </span>processor = AutoProcessor.from_pretrained(<span class="hljs-string">"facebook/wav2vec2-base-960h"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = TFWav2Vec2ForCTC.from_pretrained(<span class="hljs-string">"facebook/wav2vec2-base-960h"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">def</span> <span class="hljs-title function_">map_to_array</span>(<span class="hljs-params">batch</span>): <span class="hljs-meta">... </span> speech, _ = sf.read(batch[<span class="hljs-string">"file"</span>]) <span class="hljs-meta">... 
</span> batch[<span class="hljs-string">"speech"</span>] = speech <span class="hljs-meta">... </span> <span class="hljs-keyword">return</span> batch <span class="hljs-meta">&gt;&gt;&gt; </span>ds = load_dataset(<span class="hljs-string">"hf-internal-testing/librispeech_asr_dummy"</span>, <span class="hljs-string">"clean"</span>, split=<span class="hljs-string">"validation"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>ds = ds.<span class="hljs-built_in">map</span>(map_to_array) <span class="hljs-meta">&gt;&gt;&gt; </span>input_values = processor(ds[<span class="hljs-string">"speech"</span>][<span class="hljs-number">0</span>], return_tensors=<span class="hljs-string">"tf"</span>).input_values <span class="hljs-comment"># Batch size 1</span> <span class="hljs-meta">&gt;&gt;&gt; </span>logits = model(input_values).logits <span class="hljs-meta">&gt;&gt;&gt; </span>predicted_ids = tf.argmax(logits, axis=-<span class="hljs-number">1</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>transcription = processor.decode(predicted_ids[<span class="hljs-number">0</span>]) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># compute loss</span> <span class="hljs-meta">&gt;&gt;&gt; </span>target_transcription = <span class="hljs-string">"A MAN SAID TO THE UNIVERSE SIR I EXIST"</span> <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># Pass transcription as `text` to encode labels</span> <span class="hljs-meta">&gt;&gt;&gt; </span>labels = processor(text=transcription, return_tensors=<span class="hljs-string">"tf"</span>).input_ids <span class="hljs-meta">&gt;&gt;&gt; </span>loss = model(input_values, labels=labels).loss</pre></div></div></div></div> <h2 class="relative group"><a id="transformers.FlaxWav2Vec2Model" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWav2Vec2Model"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1g9xyub">FlaxWav2Vec2Model</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.FlaxWav2Vec2Model"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path 
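The eager-mode-only flags `output_hidden_states` and `output_attentions` documented above expose the intermediate activations on the returned output object; a short sketch with a dummy waveform and the same checkpoint:

```python
>>> import tensorflow as tf
>>> from transformers import TFWav2Vec2ForCTC

>>> model = TFWav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
>>> input_values = tf.random.normal((1, 16000))  # dummy raw waveform, batch size 1

>>> outputs = model(input_values, output_hidden_states=True, output_attentions=True)
>>> logits = outputs.logits                   # (batch_size, sequence_length, config.vocab_size)
>>> num_states = len(outputs.hidden_states)   # embeddings output + one entry per layer
>>> attn_shape = outputs.attentions[0].shape  # (batch_size, num_heads, sequence_length, sequence_length)
```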
class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">FlaxWav2Vec2Model</span></span></h3> <a id="transformers.FlaxWav2Vec2Model" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.FlaxWav2Vec2Model"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_flax_wav2vec2.py#L1055" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60">: Wav2Vec2Config</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_shape<span class="opacity-60">: typing.Tuple = (1, 1024)</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">seed<span class="opacity-60">: int = 0</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">dtype<span class="opacity-60">: dtype = &lt;class 'jax.numpy.float32'&gt;</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">_do_init<span class="opacity-60">: bool = True</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span 
class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxWav2Vec2Model.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWav2Vec2Model.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Config">Wav2Vec2Config</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxWav2Vec2Model.dtype" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWav2Vec2Model.dtype"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>dtype</strong> (<code>jax.numpy.dtype</code>, <em>optional</em>, defaults to <code>jax.numpy.float32</code>) — The data type of the computation. Can be one of <code>jax.numpy.float32</code>, <code>jax.numpy.float16</code> (on GPUs) and <code>jax.numpy.bfloat16</code> (on TPUs).<p></p> <p>This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. 
If specified all the computation will be performed with the given <code>dtype</code>.</p> <p><strong>Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.</strong></p> <p>If you wish to change the dtype of the model parameters, see <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_fp16">to_fp16()</a> and <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_bf16">to_bf16()</a>.</p></span></span> </li></ul> </div></div> <p data-svelte-h="svelte-te9mu1">The bare Wav2Vec2 Model transformer outputting raw hidden-states without any specific head on top. Wav2Vec2 was proposed in <a href="https://arxiv.org/abs/2006.11477" rel="nofollow">wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations</a> by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.</p> <p data-svelte-h="svelte-1b68hcc">This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel">FlaxPreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)</p> <p data-svelte-h="svelte-idybz1">This model is also a Flax Linen <a href="https://flax.readthedocs.io/en/latest/_autosummary/flax.nn.module.html" rel="nofollow">flax.nn.Module</a> subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matter related to general usage and behavior.</p> <p data-svelte-h="svelte-1pplc4a">Finally, this model supports inherent JAX features such as:</p> <ul data-svelte-h="svelte-1w7z84m"><li><a href="https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit" rel="nofollow">Just-In-Time (JIT) compilation</a></li> <li><a href="https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation" rel="nofollow">Automatic Differentiation</a></li> <li><a href="https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap" rel="nofollow">Vectorization</a></li> <li><a href="https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap" rel="nofollow">Parallelization</a></li></ul> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.FlaxWav2Vec2Model.__call__"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 
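As a brief, non-authoritative sketch of the `dtype` behaviour described above (the checkpoint name is only illustrative, and casting the parameters is optional), half-precision inference can be set up roughly as follows:

```python
>>> import jax.numpy as jnp
>>> from transformers import FlaxWav2Vec2Model

>>> # run the computation in float16; the parameters themselves stay in float32
>>> model = FlaxWav2Vec2Model.from_pretrained("facebook/wav2vec2-large-lv60", dtype=jnp.float16)

>>> # optionally also cast the parameters to float16 (see to_fp16() / to_bf16() above)
>>> model.params = model.to_fp16(model.params)
```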
7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>__call__</span></h4> <a id="transformers.FlaxWav2Vec2Model.__call__" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.FlaxWav2Vec2Model.__call__"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_flax_wav2vec2.py#L888" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_values<span class="opacity-60"></span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">mask_time_indices<span class="opacity-60"> = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">params<span class="opacity-60">: dict = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">dropout_rng<span class="opacity-60">: PRNGKey = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">train<span class="opacity-60">: bool = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white 
dark:hover:text-black">freeze_feature_encoder<span class="opacity-60">: bool = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2BaseModelOutput">transformers.models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2BaseModelOutput</a> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 6 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxWav2Vec2Model.__call__.input_values" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWav2Vec2Model.__call__.input_values"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_values</strong> (<code>jnp.ndarray</code> of shape <code>(batch_size, sequence_length)</code>) — Float values of input raw speech waveform. Values can be obtained by loading a <code>.flac</code> or <code>.wav</code> audio file into an array of type <code>List[float]</code> or a <code>numpy.ndarray</code>, <em>e.g.</em> via the soundfile library (<code>pip install soundfile</code>). To prepare the array into <code>input_values</code>, the <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoProcessor">AutoProcessor</a> should be used for padding and conversion into a tensor of type <code>jnp.ndarray</code>. 
See <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor.__call__">Wav2Vec2Processor.<strong>call</strong>()</a> for details.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxWav2Vec2Model.__call__.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWav2Vec2Model.__call__.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>jnp.ndarray</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a> .. warning:: <code>attention_mask</code> should only be passed if the corresponding processor has <code>config.return_attention_mask == True</code>. For all models whose processor has <code>config.return_attention_mask == False</code>, such as <a href="https://huggingface.co/facebook/wav2vec2-base-960h" rel="nofollow">wav2vec2-base</a>, <code>attention_mask</code> should <strong>not</strong> be passed to avoid degraded performance when doing batched inference. For such models <code>input_values</code> should simply be padded with 0 and passed without <code>attention_mask</code>. 
Be aware that these models also yield slightly different results depending on whether <code>input_values</code> is padded or not.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxWav2Vec2Model.__call__.mask_time_indices" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWav2Vec2Model.__call__.mask_time_indices"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>mask_time_indices</strong> (<code>jnp.ndarray</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Indices to mask extracted features for contrastive loss. When in training mode, model learns to predict masked extracted features in <em>config.proj_codevector_dim</em> space.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxWav2Vec2Model.__call__.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWav2Vec2Model.__call__.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. 
See <code>attentions</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxWav2Vec2Model.__call__.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWav2Vec2Model.__call__.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. See <code>hidden_states</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxWav2Vec2Model.__call__.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWav2Vec2Model.__call__.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li></ul> <div id="transformers.FlaxWav2Vec2Model.__call__.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2BaseModelOutput">transformers.models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2BaseModelOutput</a> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a 
href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2BaseModelOutput">transformers.models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2BaseModelOutput</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<code>&lt;class 'transformers.models.wav2vec2.configuration_wav2vec2.Wav2Vec2Config'&gt;</code>) and inputs.</p> <ul> <li> <p><strong>last_hidden_state</strong> (<code>jnp.ndarray</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>) — Sequence of hidden-states at the output of the last layer of the model.</p> </li> <li> <p><strong>extract_features</strong> (<code>jnp.ndarray</code> of shape <code>(batch_size, sequence_length, last_conv_dim)</code>) — Sequence of extracted feature vectors of the last convolutional layer of the model with <code>last_conv_dim</code> being the dimension of the last convolutional layer.</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(jnp.ndarray)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>jnp.ndarray</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(jnp.ndarray)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>jnp.ndarray</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-dpjknz">The <code>FlaxWav2Vec2PreTrainedModel</code> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.FlaxWav2Vec2Model.__call__.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWav2Vec2Model.__call__.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 
Example:

```python
>>> from transformers import AutoProcessor, FlaxWav2Vec2Model
>>> from datasets import load_dataset
>>> import soundfile as sf

>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-large-lv60")
>>> model = FlaxWav2Vec2Model.from_pretrained("facebook/wav2vec2-large-lv60")


>>> def map_to_array(batch):
...     speech, _ = sf.read(batch["file"])
...     batch["speech"] = speech
...     return batch


>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> ds = ds.map(map_to_array)

>>> input_values = processor(
...     ds["speech"][0], sampling_rate=16_000, return_tensors="np"
... ).input_values  # Batch size 1
>>> hidden_states = model(input_values).last_hidden_state
```
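As a hedged follow-up to the example above (reusing `model` and `input_values`, with illustrative variable names), the optional fields documented in the returns section can be requested in the same call:

```python
>>> outputs = model(input_values, output_hidden_states=True)
>>> last_hidden_state = outputs.last_hidden_state  # (batch_size, sequence_length, hidden_size)
>>> extract_features = outputs.extract_features  # output of the last convolutional layer
>>> hidden_states = outputs.hidden_states  # tuple: embedding output + one entry per layer
```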
## FlaxWav2Vec2ForCTC

### class transformers.FlaxWav2Vec2ForCTC

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_flax_wav2vec2.py#L1173)

( config: Wav2Vec2Config, input_shape: typing.Tuple = (1, 1024), seed: int = 0, dtype: dtype = <class 'jax.numpy.float32'>, _do_init: bool = True, **kwargs )

Parameters:

- **config** ([Wav2Vec2Config](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Config)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained) method to load the model weights.
- **dtype** (`jax.numpy.dtype`, *optional*, defaults to `jax.numpy.float32`) — The data type of the computation. Can be one of `jax.numpy.float32`, `jax.numpy.float16` (on GPUs) and `jax.numpy.bfloat16` (on TPUs). This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified, all the computation will be performed with the given `dtype`. **Note that this only specifies the dtype of the computation and does not influence the dtype of the model parameters.** If you wish to change the dtype of the model parameters, see [to_fp16()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_fp16) and [to_bf16()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_bf16).

Wav2Vec2 Model with a `language modeling` head on top for Connectionist Temporal Classification (CTC). Wav2Vec2 was proposed in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.

This model inherits from [FlaxPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.).

This model is also a Flax Linen [flax.nn.Module](https://flax.readthedocs.io/en/latest/_autosummary/flax.nn.module.html) subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.

Finally, this model supports inherent JAX features such as:

- [Just-In-Time (JIT) compilation](https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit)
- [Automatic Differentiation](https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation)
- [Vectorization](https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap)
- [Parallelization](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap)
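Because the forward pass is a pure function of the parameters and inputs, it can be compiled with `jax.jit`, one of the JAX features listed above. The snippet below is only a minimal sketch, not part of the official API reference: the wrapper `run_forward` and the dummy input are assumptions, and the `params` keyword follows the `__call__` signature documented below.

```python
>>> import jax
>>> import jax.numpy as jnp
>>> from transformers import FlaxWav2Vec2ForCTC

>>> model = FlaxWav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h-lv60")


>>> @jax.jit
... def run_forward(params, input_values):
...     # pass the parameters explicitly so they are traced rather than baked in as constants
...     return model(input_values, params=params).logits


>>> dummy_waveform = jnp.zeros((1, 16_000), dtype=jnp.float32)  # ~1 second of audio at 16 kHz
>>> logits = run_forward(model.params, dummy_waveform)
```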
data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_values<span class="opacity-60"></span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">mask_time_indices<span class="opacity-60"> = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">params<span class="opacity-60">: dict = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">dropout_rng<span class="opacity-60">: PRNGKey = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">train<span class="opacity-60">: bool = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">freeze_feature_encoder<span class="opacity-60">: bool = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxMaskedLMOutput">transformers.modeling_flax_outputs.FlaxMaskedLMOutput</a> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 6 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a 
id="transformers.FlaxWav2Vec2ForCTC.__call__.input_values" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWav2Vec2ForCTC.__call__.input_values"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_values</strong> (<code>jnp.ndarray</code> of shape <code>(batch_size, sequence_length)</code>) — Float values of input raw speech waveform. Values can be obtained by loading a <code>.flac</code> or <code>.wav</code> audio file into an array of type <code>List[float]</code> or a <code>numpy.ndarray</code>, <em>e.g.</em> via the soundfile library (<code>pip install soundfile</code>). To prepare the array into <code>input_values</code>, the <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoProcessor">AutoProcessor</a> should be used for padding and conversion into a tensor of type <code>jnp.ndarray</code>. See <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor.__call__">Wav2Vec2Processor.<strong>call</strong>()</a> for details.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxWav2Vec2ForCTC.__call__.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWav2Vec2ForCTC.__call__.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>jnp.ndarray</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a> .. 
warning:: <code>attention_mask</code> should only be passed if the corresponding processor has <code>config.return_attention_mask == True</code>. For all models whose processor has <code>config.return_attention_mask == False</code>, such as <a href="https://huggingface.co/facebook/wav2vec2-base-960h" rel="nofollow">wav2vec2-base</a>, <code>attention_mask</code> should <strong>not</strong> be passed to avoid degraded performance when doing batched inference. For such models <code>input_values</code> should simply be padded with 0 and passed without <code>attention_mask</code>. Be aware that these models also yield slightly different results depending on whether <code>input_values</code> is padded or not.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxWav2Vec2ForCTC.__call__.mask_time_indices" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWav2Vec2ForCTC.__call__.mask_time_indices"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>mask_time_indices</strong> (<code>jnp.ndarray</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Indices to mask extracted features for contrastive loss. 
When in training mode, model learns to predict masked extracted features in <em>config.proj_codevector_dim</em> space.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxWav2Vec2ForCTC.__call__.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWav2Vec2ForCTC.__call__.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. See <code>attentions</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxWav2Vec2ForCTC.__call__.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWav2Vec2ForCTC.__call__.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. 
See <code>hidden_states</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxWav2Vec2ForCTC.__call__.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWav2Vec2ForCTC.__call__.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li></ul> <div id="transformers.FlaxWav2Vec2ForCTC.__call__.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxMaskedLMOutput">transformers.modeling_flax_outputs.FlaxMaskedLMOutput</a> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxMaskedLMOutput">transformers.modeling_flax_outputs.FlaxMaskedLMOutput</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<code>&lt;class 'transformers.models.wav2vec2.configuration_wav2vec2.Wav2Vec2Config'&gt;</code>) and inputs.</p> <ul> <li> <p><strong>logits</strong> (<code>jnp.ndarray</code> of shape <code>(batch_size, sequence_length, config.vocab_size)</code>) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(jnp.ndarray)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>jnp.ndarray</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(jnp.ndarray)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>jnp.ndarray</code> (one for each layer) of shape <code>(batch_size, num_heads, 
The `FlaxWav2Vec2PreTrainedModel` forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

```python
>>> import jax.numpy as jnp
>>> from transformers import AutoProcessor, FlaxWav2Vec2ForCTC
>>> from datasets import load_dataset
>>> import soundfile as sf

>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-large-960h-lv60")
>>> model = FlaxWav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h-lv60")


>>> def map_to_array(batch):
...     speech, _ = sf.read(batch["file"])
...     batch["speech"] = speech
...     return batch


>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> ds = ds.map(map_to_array)

>>> input_values = processor(
...     ds["speech"][0], sampling_rate=16_000, return_tensors="np"
... ).input_values  # Batch size 1
>>> logits = model(input_values).logits
>>> predicted_ids = jnp.argmax(logits, axis=-1)

>>> transcription = processor.decode(predicted_ids[0])
>>> # should give:  "A MAN SAID TO THE UNIVERSE SIR I EXIST"
```
### class transformers.FlaxWav2Vec2ForPreTraining

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2/modeling_flax_wav2vec2.py#L1319)

( config: Wav2Vec2Config, input\_shape: typing.Tuple = (1, 1024), seed: int = 0, dtype: dtype = <class 'jax.numpy.float32'>, \_do\_init: bool = True, \*\*kwargs )

Parameters:

- **config** ([Wav2Vec2Config](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Config)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained) method to load the model weights.
- **dtype** (`jax.numpy.dtype`, *optional*, defaults to `jax.numpy.float32`) — The data type of the computation. Can be one of `jax.numpy.float32`, `jax.numpy.float16` (on GPUs) and `jax.numpy.bfloat16` (on TPUs). This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified, all the computation will be performed with the given `dtype`. **Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.** If you wish to change the dtype of the model parameters, see [to_fp16()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_fp16) and [to_bf16()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_bf16).

Wav2Vec2 Model with a quantizer and `VQ` head on top. Wav2Vec2 was proposed in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.

This model inherits from [FlaxPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.).

This model is also a Flax Linen [flax.nn.Module](https://flax.readthedocs.io/en/latest/_autosummary/flax.nn.module.html) subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.

Finally, this model supports inherent JAX features such as:

- [Just-In-Time (JIT) compilation](https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit)
- [Automatic Differentiation](https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation)
- [Vectorization](https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap)
- [Parallelization](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap)
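As a quick illustration of the `dtype` parameter above, the following minimal sketch (an illustrative addition rather than part of the original reference, reusing the `facebook/wav2vec2-large-lv60` checkpoint from the example further down) loads the model for half-precision computation and then optionally casts the parameters as well:

```python
>>> import jax.numpy as jnp
>>> from transformers import FlaxWav2Vec2ForPreTraining

>>> # run the computation in bfloat16; the parameters themselves stay in float32
>>> model = FlaxWav2Vec2ForPreTraining.from_pretrained(
...     "facebook/wav2vec2-large-lv60", dtype=jnp.bfloat16
... )

>>> # to also cast the parameters to bfloat16, use the dedicated helper
>>> model.params = model.to_bf16(model.params)
```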
target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_values<span class="opacity-60"></span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">mask_time_indices<span class="opacity-60"> = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">gumbel_temperature<span class="opacity-60">: int = 1</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">params<span class="opacity-60">: dict = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">dropout_rng<span class="opacity-60">: PRNGKey = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">gumbel_rng<span class="opacity-60">: PRNGKey = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">train<span class="opacity-60">: bool = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">freeze_feature_encoder<span class="opacity-60">: bool = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2ForPreTrainingOutput">transformers.models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2ForPreTrainingOutput</a> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black 
text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 6 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxWav2Vec2ForPreTraining.__call__.input_values" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWav2Vec2ForPreTraining.__call__.input_values"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_values</strong> (<code>jnp.ndarray</code> of shape <code>(batch_size, sequence_length)</code>) — Float values of input raw speech waveform. Values can be obtained by loading a <code>.flac</code> or <code>.wav</code> audio file into an array of type <code>List[float]</code> or a <code>numpy.ndarray</code>, <em>e.g.</em> via the soundfile library (<code>pip install soundfile</code>). To prepare the array into <code>input_values</code>, the <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoProcessor">AutoProcessor</a> should be used for padding and conversion into a tensor of type <code>jnp.ndarray</code>. 
See <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor.__call__">Wav2Vec2Processor.<strong>call</strong>()</a> for details.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxWav2Vec2ForPreTraining.__call__.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWav2Vec2ForPreTraining.__call__.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>jnp.ndarray</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a> .. warning:: <code>attention_mask</code> should only be passed if the corresponding processor has <code>config.return_attention_mask == True</code>. For all models whose processor has <code>config.return_attention_mask == False</code>, such as <a href="https://huggingface.co/facebook/wav2vec2-base-960h" rel="nofollow">wav2vec2-base</a>, <code>attention_mask</code> should <strong>not</strong> be passed to avoid degraded performance when doing batched inference. For such models <code>input_values</code> should simply be padded with 0 and passed without <code>attention_mask</code>. 
Be aware that these models also yield slightly different results depending on whether <code>input_values</code> is padded or not.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxWav2Vec2ForPreTraining.__call__.mask_time_indices" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWav2Vec2ForPreTraining.__call__.mask_time_indices"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>mask_time_indices</strong> (<code>jnp.ndarray</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Indices to mask extracted features for contrastive loss. When in training mode, model learns to predict masked extracted features in <em>config.proj_codevector_dim</em> space.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxWav2Vec2ForPreTraining.__call__.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWav2Vec2ForPreTraining.__call__.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. 
See <code>attentions</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxWav2Vec2ForPreTraining.__call__.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWav2Vec2ForPreTraining.__call__.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. See <code>hidden_states</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxWav2Vec2ForPreTraining.__call__.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWav2Vec2ForPreTraining.__call__.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li></ul> <div id="transformers.FlaxWav2Vec2ForPreTraining.__call__.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2ForPreTrainingOutput">transformers.models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2ForPreTrainingOutput</a> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a 
href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2ForPreTrainingOutput">transformers.models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2ForPreTrainingOutput</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<code>&lt;class 'transformers.models.wav2vec2.configuration_wav2vec2.Wav2Vec2Config'&gt;</code>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<em>optional</em>, returned when model is in train mode, <code>jnp.ndarray</code> of shape <code>(1,)</code>) — Total loss as the sum of the contrastive loss (L_m) and the diversity loss (L_d) as stated in the <a href="https://arxiv.org/pdf/2006.11477.pdf" rel="nofollow">official paper</a> . (classification) loss.</p> </li> <li> <p><strong>projected_states</strong> (<code>jnp.ndarray</code> of shape <code>(batch_size, sequence_length, config.proj_codevector_dim)</code>) — Hidden-states of the model projected to <em>config.proj_codevector_dim</em> that can be used to predict the masked projected quantized states.</p> </li> <li> <p><strong>projected_quantized_states</strong> (<code>jnp.ndarray</code> of shape <code>(batch_size, sequence_length, config.proj_codevector_dim)</code>) — Quantized extracted feature vectors projected to <em>config.proj_codevector_dim</em> representing the positive target vectors for contrastive loss.</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(jnp.ndarray)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>jnp.ndarray</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(jnp.ndarray)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>jnp.ndarray</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-7tflcr">The <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.FlaxWav2Vec2ForPreTraining">FlaxWav2Vec2ForPreTraining</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.FlaxWav2Vec2ForPreTraining.__call__.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 
with-hover:right-full" href="#transformers.FlaxWav2Vec2ForPreTraining.__call__.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> optax <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> jax.numpy <span class="hljs-keyword">as</span> jnp <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoFeatureExtractor, FlaxWav2Vec2ForPreTraining <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers.models.wav2vec2.modeling_flax_wav2vec2 <span class="hljs-keyword">import</span> _compute_mask_indices <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> datasets <span class="hljs-keyword">import</span> load_dataset <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> soundfile <span class="hljs-keyword">as</span> sf <span class="hljs-meta">&gt;&gt;&gt; </span>feature_extractor = AutoFeatureExtractor.from_pretrained(<span class="hljs-string">"facebook/wav2vec2-large-lv60"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = FlaxWav2Vec2ForPreTraining.from_pretrained(<span class="hljs-string">"facebook/wav2vec2-large-lv60"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">def</span> <span class="hljs-title 
function_">map_to_array</span>(<span class="hljs-params">batch</span>): <span class="hljs-meta">... </span> speech, _ = sf.read(batch[<span class="hljs-string">"file"</span>]) <span class="hljs-meta">... </span> batch[<span class="hljs-string">"speech"</span>] = speech <span class="hljs-meta">... </span> <span class="hljs-keyword">return</span> batch <span class="hljs-meta">&gt;&gt;&gt; </span>ds = load_dataset(<span class="hljs-string">"hf-internal-testing/librispeech_asr_dummy"</span>, <span class="hljs-string">"clean"</span>, split=<span class="hljs-string">"validation"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>ds = ds.<span class="hljs-built_in">map</span>(map_to_array) <span class="hljs-meta">&gt;&gt;&gt; </span>input_values = feature_extractor(ds[<span class="hljs-string">"speech"</span>][<span class="hljs-number">0</span>], return_tensors=<span class="hljs-string">"np"</span>).input_values <span class="hljs-comment"># Batch size 1</span> <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># compute masked indices</span> <span class="hljs-meta">&gt;&gt;&gt; </span>batch_size, raw_sequence_length = input_values.shape <span class="hljs-meta">&gt;&gt;&gt; </span>sequence_length = model._get_feat_extract_output_lengths(raw_sequence_length) <span class="hljs-meta">&gt;&gt;&gt; </span>mask_time_indices = _compute_mask_indices((batch_size, sequence_length), mask_prob=<span class="hljs-number">0.2</span>, mask_length=<span class="hljs-number">2</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model(input_values, mask_time_indices=mask_time_indices) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># compute cosine similarity between predicted (=projected_states) and target (=projected_quantized_states)</span> <span class="hljs-meta">&gt;&gt;&gt; </span>cosine_sim = optax.cosine_similarity(outputs.projected_states, outputs.projected_quantized_states) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># show that cosine similarity is much higher than random</span> <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">assert</span> np.asarray(cosine_sim)[mask_time_indices].mean() &gt; <span class="hljs-number">0.5</span></pre></div></div></div></div> <p></p> <div id="svelte-announcer" aria-live="assertive" aria-atomic="true" style="position: absolute; left: 0px; top: 0px; clip: rect(0px, 0px, 0px, 0px); clip-path: inset(50%); overflow: hidden; white-space: nowrap; width: 1px; height: 1px;"></div></div> <div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/vits" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>VITS</a> <a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">Wav2Vec2-Conformer<span class="ml-2 translate-y-px">→</span></a></div></div></div> <div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" 
data-props="{&quot;chapter&quot;:{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;wav2vec2&quot;,&quot;url&quot;:&quot;#wav2vec2&quot;,&quot;sections&quot;:[{&quot;title&quot;:&quot;Overview&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;overview&quot;,&quot;url&quot;:&quot;#overview&quot;},{&quot;title&quot;:&quot;Resources&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;resources&quot;,&quot;url&quot;:&quot;#resources&quot;},{&quot;title&quot;:&quot;Wav2Vec2Config&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.Wav2Vec2Config&quot;,&quot;url&quot;:&quot;#transformers.Wav2Vec2Config&quot;},{&quot;title&quot;:&quot;Wav2Vec2CTCTokenizer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.Wav2Vec2CTCTokenizer&quot;,&quot;url&quot;:&quot;#transformers.Wav2Vec2CTCTokenizer&quot;},{&quot;title&quot;:&quot;Wav2Vec2FeatureExtractor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.Wav2Vec2FeatureExtractor&quot;,&quot;url&quot;:&quot;#transformers.Wav2Vec2FeatureExtractor&quot;},{&quot;title&quot;:&quot;Wav2Vec2Processor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.Wav2Vec2Processor&quot;,&quot;url&quot;:&quot;#transformers.Wav2Vec2Processor&quot;},{&quot;title&quot;:&quot;Wav2Vec2ProcessorWithLM&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.Wav2Vec2ProcessorWithLM&quot;,&quot;url&quot;:&quot;#transformers.Wav2Vec2ProcessorWithLM&quot;,&quot;sections&quot;:[{&quot;title&quot;:&quot;Decoding multiple audios&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;decoding-multiple-audios&quot;,&quot;url&quot;:&quot;#decoding-multiple-audios&quot;}]},{&quot;title&quot;:&quot;Wav2Vec2 specific outputs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.models.wav2vec2_with_lm.processing_wav2vec2_with_lm.Wav2Vec2DecoderWithLMOutput&quot;,&quot;url&quot;:&quot;#transformers.models.wav2vec2_with_lm.processing_wav2vec2_with_lm.Wav2Vec2DecoderWithLMOutput&quot;},{&quot;title&quot;:&quot;Wav2Vec2Model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.Wav2Vec2Model&quot;,&quot;url&quot;:&quot;#transformers.Wav2Vec2Model&quot;},{&quot;title&quot;:&quot;Wav2Vec2ForCTC&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.Wav2Vec2ForCTC&quot;,&quot;url&quot;:&quot;#transformers.Wav2Vec2ForCTC&quot;},{&quot;title&quot;:&quot;Wav2Vec2ForSequenceClassification&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.Wav2Vec2ForSequenceClassification&quot;,&quot;url&quot;:&quot;#transformers.Wav2Vec2ForSequenceClassification&quot;},{&quot;title&quot;:&quot;Wav2Vec2ForAudioFrameClassification&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.Wav2Vec2ForAudioFrameClassification&quot;,&quot;url&quot;:&quot;#transformers.Wav2Vec2ForAudioFrameClassification&quot;},{&quot;title&quot;:&quot;Wav2Vec2ForXVector&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.Wav2Vec2ForXVector&quot;,&quot;url&quot;:&quot;#transformers.Wav2Vec2ForXVector&quot;},{&quot;title&quot;:&quot;Wav2Vec2ForPreTraining&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.Wav2Vec2ForPreTraining&quot;,&quot;url&quot;:&quot;#transformers.Wav2Vec2ForPreTraining&quot;},{&quot;title&quot;:&quot;TFWav2Vec2Model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.TFWav2Vec2Model&quot;,&quot;url&quot;:&quot;#transformers.TFWav2Vec2Model&quot;},{&quot;title&quot;:&quot;TFWav2Vec2ForSeque
nceClassification&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.TFWav2Vec2ForSequenceClassification&quot;,&quot;url&quot;:&quot;#transformers.TFWav2Vec2ForSequenceClassification&quot;},{&quot;title&quot;:&quot;TFWav2Vec2ForCTC&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.TFWav2Vec2ForCTC&quot;,&quot;url&quot;:&quot;#transformers.TFWav2Vec2ForCTC&quot;},{&quot;title&quot;:&quot;FlaxWav2Vec2Model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.FlaxWav2Vec2Model&quot;,&quot;url&quot;:&quot;#transformers.FlaxWav2Vec2Model&quot;},{&quot;title&quot;:&quot;FlaxWav2Vec2ForCTC&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.FlaxWav2Vec2ForCTC&quot;,&quot;url&quot;:&quot;#transformers.FlaxWav2Vec2ForCTC&quot;},{&quot;title&quot;:&quot;FlaxWav2Vec2ForPreTraining&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.FlaxWav2Vec2ForPreTraining&quot;,&quot;url&quot;:&quot;#transformers.FlaxWav2Vec2ForPreTraining&quot;}]}}" data-target="SubSideMenu"><nav class="hidden h-screen w-[270px] flex-none flex-col space-y-3 overflow-y-auto break-words border-l pt-24 pl-6 pr-10 pb-16 text-sm lg:flex 2xl:w-[305px]"><a href="#wav2vec2" class=" text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-wav2vec2"><wbr>Wav2<wbr>Vec2</a> <a href="#overview" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-overview"><wbr>Overview</a> <a href="#resources" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-resources"><wbr>Resources</a> <a href="#transformers.Wav2Vec2Config" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.Wav2Vec2Config"><wbr>Wav2<wbr>Vec2<wbr>Config</a> <a href="#transformers.Wav2Vec2CTCTokenizer" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.Wav2Vec2CTCTokenizer"><wbr>Wav2<wbr>Vec2CTC<wbr>Tokenizer</a> <a href="#transformers.Wav2Vec2FeatureExtractor" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.Wav2Vec2FeatureExtractor"><wbr>Wav2<wbr>Vec2<wbr>Feature<wbr>Extractor</a> <a href="#transformers.Wav2Vec2Processor" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.Wav2Vec2Processor"><wbr>Wav2<wbr>Vec2<wbr>Processor</a> <a href="#transformers.Wav2Vec2ProcessorWithLM" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.Wav2Vec2ProcessorWithLM"><wbr>Wav2<wbr>Vec2<wbr>Processor<wbr>WithLM</a> <a href="#decoding-multiple-audios" class="pl-8 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-decoding-multiple-audios"><wbr>Decoding multiple audios</a> <a href="#transformers.models.wav2vec2_with_lm.processing_wav2vec2_with_lm.Wav2Vec2DecoderWithLMOutput" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.models.wav2vec2_with_lm.processing_wav2vec2_with_lm.Wav2Vec2DecoderWithLMOutput"><wbr>Wav2<wbr>Vec2 specific outputs</a> <a href="#transformers.Wav2Vec2Model" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" 
id="nav-transformers.Wav2Vec2Model"><wbr>Wav2<wbr>Vec2<wbr>Model</a> <a href="#transformers.Wav2Vec2ForCTC" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.Wav2Vec2ForCTC"><wbr>Wav2<wbr>Vec2<wbr>ForCTC</a> <a href="#transformers.Wav2Vec2ForSequenceClassification" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.Wav2Vec2ForSequenceClassification"><wbr>Wav2<wbr>Vec2<wbr>For<wbr>Sequence<wbr>Classification</a> <a href="#transformers.Wav2Vec2ForAudioFrameClassification" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.Wav2Vec2ForAudioFrameClassification"><wbr>Wav2<wbr>Vec2<wbr>For<wbr>Audio<wbr>Frame<wbr>Classification</a> <a href="#transformers.Wav2Vec2ForXVector" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.Wav2Vec2ForXVector"><wbr>Wav2<wbr>Vec2<wbr>ForX<wbr>Vector</a> <a href="#transformers.Wav2Vec2ForPreTraining" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.Wav2Vec2ForPreTraining"><wbr>Wav2<wbr>Vec2<wbr>For<wbr>Pre<wbr>Training</a> <a href="#transformers.TFWav2Vec2Model" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.TFWav2Vec2Model">TF<wbr>Wav2<wbr>Vec2<wbr>Model</a> <a href="#transformers.TFWav2Vec2ForSequenceClassification" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.TFWav2Vec2ForSequenceClassification">TF<wbr>Wav2<wbr>Vec2<wbr>For<wbr>Sequence<wbr>Classification</a> <a href="#transformers.TFWav2Vec2ForCTC" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.TFWav2Vec2ForCTC">TF<wbr>Wav2<wbr>Vec2<wbr>ForCTC</a> <a href="#transformers.FlaxWav2Vec2Model" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.FlaxWav2Vec2Model"><wbr>Flax<wbr>Wav2<wbr>Vec2<wbr>Model</a> <a href="#transformers.FlaxWav2Vec2ForCTC" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.FlaxWav2Vec2ForCTC"><wbr>Flax<wbr>Wav2<wbr>Vec2<wbr>ForCTC</a> <a href="#transformers.FlaxWav2Vec2ForPreTraining" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.FlaxWav2Vec2ForPreTraining"><wbr>Flax<wbr>Wav2<wbr>Vec2<wbr>For<wbr>Pre<wbr>Training</a> </nav></div></div></div> <div id="doc-footer"></div></main> </div> <script> import("/front/build/kube-b0520c1/index.js"); window.moonSha = "kube-b0520c1/"; window.hubConfig = JSON.parse(`{"features":{"signupDisabled":false},"sshGitUrl":"git@hf.co","moonHttpUrl":"https://huggingface.co","captchaApiKey":"bd5f2066-93dc-4bdd-a64b-a24646ca3859","stripePublicKey":"pk_live_x2tdjFXBCvXo2FFmMybezpeM00J6gPCAAc","environment":"production","userAgent":"HuggingFace (production)"}`); </script> <!-- Stripe --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://js.stripe.com/v3/"; script.async = true; document.head.appendChild(script); } </script> <!-- Google analytics v4 --> <script> if 
(["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL"; script.async = true; document.head.appendChild(script); window.dataLayer = window.dataLayer || []; function gtag() { if (window.dataLayer !== undefined) { window.dataLayer.push(arguments); } } gtag("js", new Date()); gtag("config", "G-8Q63TH4CSL", { page_path: "/docs/transformers/v4.34.0/en/model_doc/wav2vec2" }); /// ^ See https://developers.google.com/analytics/devguides/collection/gtagjs/pages gtag("consent", "default", { ad_storage: "denied", analytics_storage: "denied" }); /// ^ See https://developers.google.com/tag-platform/gtagjs/reference#consent /// TODO: ask the user for their consent and update this with gtag('consent', 'update') } </script> <!-- Google Analytics v3 (deprecated) --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { (function (i, s, o, g, r, a, m) { i["GoogleAnalyticsObject"] = r; (i[r] = i[r] || function () { (i[r].q = i[r].q || []).push(arguments); }), (i[r].l = 1 * new Date()); (a = s.createElement(o)), (m = s.getElementsByTagName(o)[0]); a.async = 1; a.src = g; m.parentNode.insertBefore(a, m); })(window, document, "script", "https://www.google-analytics.com/analytics.js", "ganalytics"); ganalytics("create", "UA-83738774-2", "auto"); ganalytics("send", "pageview", "/docs/transformers/v4.34.0/en/model_doc/wav2vec2"); } </script> <iframe name="__privateStripeMetricsController1790" frameborder="0" allowtransparency="true" scrolling="no" role="presentation" allow="payment *" src="https://js.stripe.com/v3/m-outer-27c67c0d52761104439bb051c7856ab1.html#url=https%3A%2F%2Fhuggingface.co%2Fdocs%2Ftransformers%2Fv4.34.0%2Fen%2Fmodel_doc%2Fwav2vec2&amp;title=Wav2Vec2&amp;referrer=&amp;muid=b15a8ef9-7618-4d98-9abd-1d7fdb18f47df4c702&amp;sid=0da2c795-975c-45a5-a090-0475ca1e345f07aeed&amp;version=6&amp;preview=false" aria-hidden="true" tabindex="-1" style="border: none !important; margin: 0px !important; padding: 0px !important; width: 1px !important; min-width: 100% !important; overflow: hidden !important; display: block !important; visibility: hidden !important; position: fixed !important; height: 1px !important; pointer-events: none !important; user-select: none !important;"></iframe></body></html>
2023-10-05T13:33:29.150Z
Wav2Vec2Phoneme
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme
# Wav2Vec2Phoneme

## Overview

The Wav2Vec2Phoneme model was proposed in [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition (Xu et al., 2021)](https://arxiv.org/abs/2109.11680) by Qiantong Xu, Alexei Baevski, Michael Auli.

The abstract from the paper is the following:

_Recent progress in self-training, self-supervised pretraining and unsupervised learning enabled well performing speech recognition systems without any labeled data. However, in many cases there is labeled data available for related languages which is not utilized by these methods. This paper extends previous work on zero-shot cross-lingual transfer learning by fine-tuning a multilingually pretrained wav2vec 2.0 model to transcribe unseen languages. This is done by mapping phonemes of the training languages to the target language using articulatory features. Experiments show that this simple method significantly outperforms prior work which introduced task-specific architectures and used only part of a monolingually pretrained model._

Tips:

- Wav2Vec2Phoneme uses the exact same architecture as Wav2Vec2.
- Wav2Vec2Phoneme is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
- The Wav2Vec2Phoneme model was trained using connectionist temporal classification (CTC), so the model output has to be decoded using [Wav2Vec2PhonemeCTCTokenizer](/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme#transformers.Wav2Vec2PhonemeCTCTokenizer) (see the sketch below).
- Wav2Vec2Phoneme can be fine-tuned on multiple languages at once and decode unseen languages to a sequence of phonemes in a single forward pass.
- By default the model outputs a sequence of phonemes. In order to transform the phonemes into a sequence of words, one should make use of a dictionary and language model.

Relevant checkpoints can be found under [https://huggingface.co/models?other=phoneme-recognition](https://huggingface.co/models?other=phoneme-recognition).

This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The original code can be found [here](https://github.com/pytorch/fairseq/tree/master/fairseq/models/wav2vec).

Wav2Vec2Phoneme's architecture is based on the Wav2Vec2 model, so one can refer to `Wav2Vec2`'s documentation page for everything except the tokenizer.

## Wav2Vec2PhonemeCTCTokenizer

### class transformers.Wav2Vec2PhonemeCTCTokenizer

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2_phoneme/tokenization_wav2vec2_phoneme.py#L94)

( vocab\_file, bos\_token = '<s>', eos\_token = '</s>', unk\_token = '<unk>', pad\_token = '<pad>', phone\_delimiter\_token = ' ', word\_delimiter\_token = None, do\_phonemize = True, phonemizer\_lang = 'en-us', phonemizer\_backend = 'espeak', \*\*kwargs )

Constructs a Wav2Vec2PhonemeCTC tokenizer.

This tokenizer inherits from [PreTrainedTokenizer](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer) which contains some of the main methods. Users should refer to the superclass for more information regarding such methods.
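To make the decoding tip above concrete, here is a minimal transcription sketch. It is an illustrative addition rather than part of the original page and assumes the `facebook/wav2vec2-lv-60-espeak-cv-ft` checkpoint, one of the models listed under the phoneme-recognition tag:

```python
>>> import torch
>>> from datasets import load_dataset
>>> from transformers import AutoProcessor, Wav2Vec2ForCTC

>>> checkpoint = "facebook/wav2vec2-lv-60-espeak-cv-ft"  # assumed phoneme checkpoint
>>> processor = AutoProcessor.from_pretrained(checkpoint)
>>> model = Wav2Vec2ForCTC.from_pretrained(checkpoint)

>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> inputs = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_ids = torch.argmax(logits, dim=-1)
>>> phonemes = processor.batch_decode(predicted_ids)  # a sequence of phonemes, not words
```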
#### \_\_call\_\_ [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/tokenization_utils_base.py#L2732)

( text: typing.Union\[str, typing.List\[str\], typing.List\[typing.List\[str\]\]\] = None, text\_pair: typing.Union\[str, typing.List\[str\], typing.List\[typing.List\[str\]\], NoneType\] = None, text\_target: typing.Union\[str, typing.List\[str\], typing.List\[typing.List\[str\]\]\] = None, text\_pair\_target: typing.Union\[str, typing.List\[str\], typing.List\[typing.List\[str\]\], NoneType\] = None, add\_special\_tokens: bool = True, padding: typing.Union\[bool, str, transformers.utils.generic.PaddingStrategy\] = False, truncation: typing.Union\[bool, str, transformers.tokenization\_utils\_base.TruncationStrategy\] = None, max\_length: typing.Optional\[int\] = None, stride: int = 0, is\_split\_into\_words: bool = False, pad\_to\_multiple\_of: typing.Optional\[int\] = None, return\_tensors: typing.Union\[str, transformers.utils.generic.TensorType, NoneType\] = None, return\_token\_type\_ids: typing.Optional\[bool\] = None, return\_attention\_mask: typing.Optional\[bool\] = None, return\_overflowing\_tokens: bool = False, return\_special\_tokens\_mask: bool = False, return\_offsets\_mapping: bool = False, return\_length: bool = False, verbose: bool = True, \*\*kwargs ) → [BatchEncoding](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.BatchEncoding)

Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of sequences.

#### batch\_decode [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2_phoneme/tokenization_wav2vec2_phoneme.py#L523)

( sequences: typing.Union\[typing.List\[int\], typing.List\[typing.List\[int\]\], ForwardRef('np.ndarray'), ForwardRef('torch.Tensor'), ForwardRef('tf.Tensor')\], skip\_special\_tokens: bool = False, clean\_up\_tokenization\_spaces: bool = None, output\_char\_offsets: bool = False, \*\*kwargs ) → `List[str]` or `~models.wav2vec2.tokenization_wav2vec2_phoneme.Wav2Vec2PhonemeCTCTokenizerOutput`

Convert a list of lists of token ids into a list of strings by calling decode.

#### decode [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2_phoneme/tokenization_wav2vec2_phoneme.py#L467)

( token\_ids: typing.Union\[int, typing.List\[int\], ForwardRef('np.ndarray'), ForwardRef('torch.Tensor'), ForwardRef('tf.Tensor')\], skip\_special\_tokens: bool = False, clean\_up\_tokenization\_spaces: bool = None, output\_char\_offsets: bool = False, \*\*kwargs ) → `str` or `~models.wav2vec2.tokenization_wav2vec2_phoneme.Wav2Vec2PhonemeCTCTokenizerOutput`

Converts a sequence of ids into a string, using the tokenizer and vocabulary, with options to remove special tokens and clean up tokenization spaces. Similar to doing `self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids))`.

#### phonemize [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2_phoneme/tokenization_wav2vec2_phoneme.py#L268)

( text: str, phonemizer\_lang: typing.Optional\[str\] = None )
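The sketch below shows how the text-side methods above relate to one another: `__call__` phonemizes input text and maps the phonemes to label ids (for example to prepare CTC targets for fine-tuning), `phonemize()` exposes just the grapheme-to-phoneme step, and `decode()` maps ids back to a phoneme string. It again assumes the example checkpoint `facebook/wav2vec2-lv-60-espeak-cv-ft` and an installed `phonemizer` package with an espeak backend.

```python
from transformers import Wav2Vec2PhonemeCTCTokenizer

# example checkpoint; the grapheme-to-phoneme step relies on `phonemizer` + espeak
tokenizer = Wav2Vec2PhonemeCTCTokenizer.from_pretrained("facebook/wav2vec2-lv-60-espeak-cv-ft")

# __call__ phonemizes the text (do_phonemize=True by default) and converts the
# resulting phonemes into label ids
encoded = tokenizer("hello world")
print(encoded["input_ids"])

# phonemize only performs the grapheme-to-phoneme conversion, without mapping to ids
print(tokenizer.phonemize("hello world", phonemizer_lang="en-us"))

# decode (and batch_decode for lists of sequences) converts ids back into a phoneme string
print(tokenizer.decode(encoded["input_ids"]))
```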
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/do
cs/transformers/v4.34.0/en/model_doc/wavlm&quot;},{&quot;title&quot;:&quot;Whisper&quot;,&quot;id&quot;:&quot;model_doc/whisper&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/whisper&quot;},{&quot;title&quot;:&quot;XLS-R&quot;,&quot;id&quot;:&quot;model_doc/xls_r&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xls_r&quot;},{&quot;title&quot;:&quot;XLSR-Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/xlsr_wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2&quot;}]},{&quot;title&quot;:&quot;Multimodal models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALIGN&quot;,&quot;id&quot;:&quot;model_doc/align&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/align&quot;},{&quot;title&quot;:&quot;AltCLIP&quot;,&quot;id&quot;:&quot;model_doc/altclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/altclip&quot;},{&quot;title&quot;:&quot;BLIP&quot;,&quot;id&quot;:&quot;model_doc/blip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip&quot;},{&quot;title&quot;:&quot;BLIP-2&quot;,&quot;id&quot;:&quot;model_doc/blip-2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip-2&quot;},{&quot;title&quot;:&quot;BridgeTower&quot;,&quot;id&quot;:&quot;model_doc/bridgetower&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bridgetower&quot;},{&quot;title&quot;:&quot;BROS&quot;,&quot;id&quot;:&quot;model_doc/bros&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bros&quot;},{&quot;title&quot;:&quot;Chinese-CLIP&quot;,&quot;id&quot;:&quot;model_doc/chinese_clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/chinese_clip&quot;},{&quot;title&quot;:&quot;CLIP&quot;,&quot;id&quot;:&quot;model_doc/clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clip&quot;},{&quot;title&quot;:&quot;CLIPSeg&quot;,&quot;id&quot;:&quot;model_doc/clipseg&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clipseg&quot;},{&quot;title&quot;:&quot;Data2Vec&quot;,&quot;id&quot;:&quot;model_doc/data2vec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/data2vec&quot;},{&quot;title&quot;:&quot;DePlot&quot;,&quot;id&quot;:&quot;model_doc/deplot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deplot&quot;},{&quot;title&quot;:&quot;Donut&quot;,&quot;id&quot;:&quot;model_doc/donut&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/donut&quot;},{&quot;title&quot;:&quot;FLAVA&quot;,&quot;id&quot;:&quot;model_doc/flava&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flava&quot;},{&quot;title&quot;:&quot;GIT&quot;,&quot;id&quot;:&quot;model_doc/git&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/git&quot;},{&quot;title&quot;:&quot;GroupViT&quot;,&quot;id&quot;:&quot;model_doc/groupvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/groupvit&quot;},{&quot;title&quot;:&quot;IDEFICS&quot;,&quot;id&quot;:&quot;model_doc/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/idefics&quot;},{&quot;title&quot;:&quot;InstructBLIP&quot;,&quot;id&quot;:&quot;model_doc/instructblip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/instructblip&quot;},{&quot;title&quot;:&quot;LayoutLM&quot;,&quot;id&quot;:&quot;model_doc/layoutlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlm&quot;},{&quot;title&quot;:&qu
ot;LayoutLMV2&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv2&quot;},{&quot;title&quot;:&quot;LayoutLMV3&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv3&quot;},{&quot;title&quot;:&quot;LayoutXLM&quot;,&quot;id&quot;:&quot;model_doc/layoutxlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutxlm&quot;},{&quot;title&quot;:&quot;LiLT&quot;,&quot;id&quot;:&quot;model_doc/lilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lilt&quot;},{&quot;title&quot;:&quot;LXMERT&quot;,&quot;id&quot;:&quot;model_doc/lxmert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lxmert&quot;},{&quot;title&quot;:&quot;MatCha&quot;,&quot;id&quot;:&quot;model_doc/matcha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/matcha&quot;},{&quot;title&quot;:&quot;MGP-STR&quot;,&quot;id&quot;:&quot;model_doc/mgp-str&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mgp-str&quot;},{&quot;title&quot;:&quot;Nougat&quot;,&quot;id&quot;:&quot;model_doc/nougat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nougat&quot;},{&quot;title&quot;:&quot;OneFormer&quot;,&quot;id&quot;:&quot;model_doc/oneformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/oneformer&quot;},{&quot;title&quot;:&quot;OWL-ViT&quot;,&quot;id&quot;:&quot;model_doc/owlvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/owlvit&quot;},{&quot;title&quot;:&quot;Perceiver&quot;,&quot;id&quot;:&quot;model_doc/perceiver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/perceiver&quot;},{&quot;title&quot;:&quot;Pix2Struct&quot;,&quot;id&quot;:&quot;model_doc/pix2struct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pix2struct&quot;},{&quot;title&quot;:&quot;Segment Anything&quot;,&quot;id&quot;:&quot;model_doc/sam&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sam&quot;},{&quot;title&quot;:&quot;Speech Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/speech-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder&quot;},{&quot;title&quot;:&quot;TAPAS&quot;,&quot;id&quot;:&quot;model_doc/tapas&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapas&quot;},{&quot;title&quot;:&quot;TrOCR&quot;,&quot;id&quot;:&quot;model_doc/trocr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trocr&quot;},{&quot;title&quot;:&quot;TVLT&quot;,&quot;id&quot;:&quot;model_doc/tvlt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tvlt&quot;},{&quot;title&quot;:&quot;ViLT&quot;,&quot;id&quot;:&quot;model_doc/vilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vilt&quot;},{&quot;title&quot;:&quot;Vision Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/vision-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder&quot;},{&quot;title&quot;:&quot;Vision Text Dual 
Encoder&quot;,&quot;id&quot;:&quot;model_doc/vision-text-dual-encoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder&quot;},{&quot;title&quot;:&quot;VisualBERT&quot;,&quot;id&quot;:&quot;model_doc/visual_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/visual_bert&quot;},{&quot;title&quot;:&quot;X-CLIP&quot;,&quot;id&quot;:&quot;model_doc/xclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xclip&quot;}]},{&quot;title&quot;:&quot;Reinforcement learning models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Decision Transformer&quot;,&quot;id&quot;:&quot;model_doc/decision_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/decision_transformer&quot;},{&quot;title&quot;:&quot;Trajectory Transformer&quot;,&quot;id&quot;:&quot;model_doc/trajectory_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer&quot;}]},{&quot;title&quot;:&quot;Time series models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Autoformer&quot;,&quot;id&quot;:&quot;model_doc/autoformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/autoformer&quot;},{&quot;title&quot;:&quot;Informer&quot;,&quot;id&quot;:&quot;model_doc/informer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/informer&quot;},{&quot;title&quot;:&quot;Time Series Transformer&quot;,&quot;id&quot;:&quot;model_doc/time_series_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/time_series_transformer&quot;}]},{&quot;title&quot;:&quot;Graph models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Graphormer&quot;,&quot;id&quot;:&quot;model_doc/graphormer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/graphormer&quot;}]}]},{&quot;title&quot;:&quot;Internal Helpers&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Custom Layers and Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/modeling_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/modeling_utils&quot;},{&quot;title&quot;:&quot;Utilities for pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/pipelines_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/pipelines_utils&quot;},{&quot;title&quot;:&quot;Utilities for Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/tokenization_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/tokenization_utils&quot;},{&quot;title&quot;:&quot;Utilities for Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/trainer_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/trainer_utils&quot;},{&quot;title&quot;:&quot;Utilities for Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/generation_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/generation_utils&quot;},{&quot;title&quot;:&quot;Utilities for Image Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/image_processing_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/image_processing_utils&quot;},{&quot;title&quot;:&quot;Utilities for Audio 
processing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/audio_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/audio_utils&quot;},{&quot;title&quot;:&quot;General Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/file_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/file_utils&quot;},{&quot;title&quot;:&quot;Utilities for Time Series&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/time_series_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/time_series_utils&quot;}]}]}],&quot;chapterId&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;docType&quot;:&quot;docs&quot;,&quot;isLoggedIn&quot;:false,&quot;lang&quot;:&quot;en&quot;,&quot;langs&quot;:[&quot;de&quot;,&quot;en&quot;,&quot;es&quot;,&quot;fr&quot;,&quot;it&quot;,&quot;ko&quot;,&quot;pt&quot;,&quot;zh&quot;],&quot;library&quot;:&quot;transformers&quot;,&quot;theme&quot;:&quot;light&quot;,&quot;version&quot;:&quot;v4.34.0&quot;,&quot;versions&quot;:[{&quot;version&quot;:&quot;main&quot;},{&quot;version&quot;:&quot;v4.34.0&quot;},{&quot;version&quot;:&quot;v4.33.3&quot;},{&quot;version&quot;:&quot;v4.33.2&quot;},{&quot;version&quot;:&quot;v4.33.0&quot;},{&quot;version&quot;:&quot;v4.32.1&quot;},{&quot;version&quot;:&quot;v4.32.0&quot;},{&quot;version&quot;:&quot;v4.31.0&quot;},{&quot;version&quot;:&quot;v4.30.0&quot;},{&quot;version&quot;:&quot;v4.29.1&quot;},{&quot;version&quot;:&quot;v4.29.0&quot;},{&quot;version&quot;:&quot;v4.28.1&quot;},{&quot;version&quot;:&quot;v4.28.0&quot;},{&quot;version&quot;:&quot;v4.27.2&quot;},{&quot;version&quot;:&quot;v4.27.1&quot;},{&quot;version&quot;:&quot;v4.27.0&quot;},{&quot;version&quot;:&quot;v4.26.1&quot;},{&quot;version&quot;:&quot;v4.26.0&quot;},{&quot;version&quot;:&quot;v4.25.1&quot;},{&quot;version&quot;:&quot;v4.24.0&quot;},{&quot;version&quot;:&quot;v4.23.1&quot;},{&quot;version&quot;:&quot;v4.23.0&quot;},{&quot;version&quot;:&quot;v4.22.2&quot;},{&quot;version&quot;:&quot;v4.22.1&quot;},{&quot;version&quot;:&quot;v4.22.0&quot;},{&quot;version&quot;:&quot;v4.21.3&quot;},{&quot;version&quot;:&quot;v4.21.2&quot;},{&quot;version&quot;:&quot;v4.21.1&quot;},{&quot;version&quot;:&quot;v4.21.0&quot;},{&quot;version&quot;:&quot;v4.20.1&quot;},{&quot;version&quot;:&quot;v4.20.0&quot;},{&quot;version&quot;:&quot;v4.19.4&quot;},{&quot;version&quot;:&quot;v4.19.3&quot;},{&quot;version&quot;:&quot;v4.19.2&quot;},{&quot;version&quot;:&quot;v4.19.0&quot;},{&quot;version&quot;:&quot;v4.18.0&quot;},{&quot;version&quot;:&quot;v4.17.0&quot;},{&quot;version&quot;:&quot;v4.16.2&quot;},{&quot;version&quot;:&quot;v4.16.1&quot;},{&quot;version&quot;:&quot;v4.16.0&quot;},{&quot;version&quot;:&quot;v4.15.0&quot;},{&quot;version&quot;:&quot;v4.14.1&quot;},{&quot;version&quot;:&quot;v4.13.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.5&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.4&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.10.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;
:&quot;v4.10.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.10.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.0.0&quot;},{&quot;version&quot;:
&quot;doc-builder-html&quot;}],&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;}" data-target="SideMenu"> <div class="z-2 w-full flex-none lg:block lg:h-screen lg:w-[270px] 2xl:w-[300px] false"><div class="shadow-alternate flex h-16 w-full items-center rounded-b-xl border-b bg-white text-lg leading-tight lg:hidden"><div class="flex flex-1 cursor-pointer flex-col justify-center self-stretch pl-6"><p class="text-sm text-gray-400 first-letter:capitalize">Transformers documentation</p> <div class="flex items-center"><p class="font-semibold">Wav2Vec2Phoneme</p> <svg class="text-xl false" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></div></div> <button class="hover:shadow-alternate group ml-auto mr-6 inline-flex flex-none cursor-pointer rounded-xl border p-2"><svg class="text-gray-500 group-hover:text-gray-700" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg></button></div> <div class="hidden h-32 flex-col justify-between border-r border-b bg-white bg-gradient-to-r p-4 lg:flex from-orange-50 to-white dark:from-gray-900 dark:to-gray-950"><div class="relative "><button class=" " type="button"><h1 class="flex items-center text-lg font-bold leading-tight first-letter:capitalize"><div class="mr-1.5 h-1.5 w-1.5 rounded-full bg-orange-500 flex-none"></div> Transformers <span><svg class="opacity-70 " xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></span></h1> </button> </div> <button class="shadow-alternate flex w-full items-center rounded-full border bg-white px-2 py-1 text-left text-sm text-gray-400 ring-indigo-200 hover:bg-indigo-50 hover:ring-2 dark:border-gray-700 dark:ring-yellow-600 dark:hover:bg-gray-900 dark:hover:text-yellow-500"><svg class="flex-none mr-1.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg> <div>Search documentation</div> <span class="ml-auto rounded border border-gray-200 bg-gray-100 px-0.5 text-xs dark:border-gray-800 dark:bg-gray-800"><kbd class="font-sans">⌘K</kbd></span></button> <div class="flex items-center"><select class="form-input mr-1 !mt-0 !w-20 rounded !border border-gray-200 p-1 text-xs uppercase dark:!text-gray-400"><option value="0">main</option><option value="1">v4.34.0</option><option value="2">v4.33.3</option><option value="3">v4.32.1</option><option value="4">v4.31.0</option><option value="5">v4.30.0</option><option value="6">v4.29.1</option><option value="7">v4.28.1</option><option value="8">v4.27.2</option><option value="9">v4.26.1</option><option value="10">v4.25.1</option><option 
value="11">v4.24.0</option><option value="12">v4.23.1</option><option value="13">v4.22.2</option><option value="14">v4.21.3</option><option value="15">v4.20.1</option><option value="16">v4.19.4</option><option value="17">v4.18.0</option><option value="18">v4.17.0</option><option value="19">v4.16.2</option><option value="20">v4.15.0</option><option value="21">v4.14.1</option><option value="22">v4.13.0</option><option value="23">v4.12.5</option><option value="24">v4.11.3</option><option value="25">v4.10.1</option><option value="26">v4.9.2</option><option value="27">v4.8.2</option><option value="28">v4.7.0</option><option value="29">v4.6.0</option><option value="30">v4.5.1</option><option value="31">v4.4.2</option><option value="32">v4.3.3</option><option value="33">v4.2.2</option><option value="34">v4.1.1</option><option value="35">v4.0.1</option><option value="36">v3.5.1</option><option value="37">v3.4.0</option><option value="38">v3.3.1</option><option value="39">v3.2.0</option><option value="40">v3.1.0</option><option value="41">v3.0.2</option><option value="42">v2.11.0</option><option value="43">v2.10.0</option><option value="44">v2.9.1</option><option value="45">v2.8.0</option><option value="46">v2.7.0</option><option value="47">v2.6.0</option><option value="48">v2.5.1</option><option value="49">v2.4.1</option><option value="50">v2.3.0</option><option value="51">v2.2.2</option><option value="52">v2.1.1</option><option value="53">v2.0.0</option><option value="54">v1.2.0</option><option value="55">v1.1.0</option><option value="56">v1.0.0</option><option value="57">doc-builder-html</option></select> <select class="form-input mr-1 rounded border-gray-200 p-1 text-xs dark:!text-gray-400 !w-12 !mt-0 !border"><option value="de">DE</option><option value="en">EN</option><option value="es">ES</option><option value="fr">FR</option><option value="it">IT</option><option value="ko">KO</option><option value="pt">PT</option><option value="zh">ZH</option></select> <div class="relative inline-block"><button class="rounded-full border border-gray-100 py-1 pl-2 pr-0.5 flex items-center text-sm text-gray-500 bg-white hover:bg-yellow-50 hover:border-yellow-200 dark:hover:bg-gray-800 dark:hover:border-gray-950 " type="button"><svg class="mr-1.5 text-yellow-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24" fill="currentColor"><path d="M6.05 4.14l-.39-.39a.993.993 0 0 0-1.4 0l-.01.01a.984.984 0 0 0 0 1.4l.39.39c.39.39 1.01.39 1.4 0l.01-.01a.984.984 0 0 0 0-1.4zM3.01 10.5H1.99c-.55 0-.99.44-.99.99v.01c0 .55.44.99.99.99H3c.56.01 1-.43 1-.98v-.01c0-.56-.44-1-.99-1zm9-9.95H12c-.56 0-1 .44-1 .99v.96c0 .55.44.99.99.99H12c.56.01 1-.43 1-.98v-.97c0-.55-.44-.99-.99-.99zm7.74 3.21c-.39-.39-1.02-.39-1.41-.01l-.39.39a.984.984 0 0 0 0 1.4l.01.01c.39.39 1.02.39 1.4 0l.39-.39a.984.984 0 0 0 0-1.4zm-1.81 15.1l.39.39a.996.996 0 1 0 1.41-1.41l-.39-.39a.993.993 0 0 0-1.4 0c-.4.4-.4 1.02-.01 1.41zM20 11.49v.01c0 .55.44.99.99.99H22c.55 0 .99-.44.99-.99v-.01c0-.55-.44-.99-.99-.99h-1.01c-.55 0-.99.44-.99.99zM12 5.5c-3.31 0-6 2.69-6 6s2.69 6 6 6s6-2.69 6-6s-2.69-6-6-6zm-.01 16.95H12c.55 0 .99-.44.99-.99v-.96c0-.55-.44-.99-.99-.99h-.01c-.55 0-.99.44-.99.99v.96c0 .55.44.99.99.99zm-7.74-3.21c.39.39 1.02.39 1.41 0l.39-.39a.993.993 0 0 0 0-1.4l-.01-.01a.996.996 0 0 0-1.41 0l-.39.39c-.38.4-.38 1.02.01 1.41z"></path></svg> </button> </div> <a 
href="https://github.com/huggingface/transformers" class="group ml-auto text-xs text-gray-500 hover:text-gray-700 hover:underline dark:hover:text-gray-300"><svg class="inline-block text-gray-500 group-hover:text-gray-700 dark:group-hover:text-gray-300 mr-1.5 -mt-1 w-4 h-4" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1.03em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 250"><path d="M128.001 0C57.317 0 0 57.307 0 128.001c0 56.554 36.676 104.535 87.535 121.46c6.397 1.185 8.746-2.777 8.746-6.158c0-3.052-.12-13.135-.174-23.83c-35.61 7.742-43.124-15.103-43.124-15.103c-5.823-14.795-14.213-18.73-14.213-18.73c-11.613-7.944.876-7.78.876-7.78c12.853.902 19.621 13.19 19.621 13.19c11.417 19.568 29.945 13.911 37.249 10.64c1.149-8.272 4.466-13.92 8.127-17.116c-28.431-3.236-58.318-14.212-58.318-63.258c0-13.975 5-25.394 13.188-34.358c-1.329-3.224-5.71-16.242 1.24-33.874c0 0 10.749-3.44 35.21 13.121c10.21-2.836 21.16-4.258 32.038-4.307c10.878.049 21.837 1.47 32.066 4.307c24.431-16.56 35.165-13.12 35.165-13.12c6.967 17.63 2.584 30.65 1.255 33.873c8.207 8.964 13.173 20.383 13.173 34.358c0 49.163-29.944 59.988-58.447 63.157c4.591 3.972 8.682 11.762 8.682 23.704c0 17.126-.148 30.91-.148 35.126c0 3.407 2.304 7.398 8.792 6.14C219.37 232.5 256 184.537 256 128.002C256 57.307 198.691 0 128.001 0zm-80.06 182.34c-.282.636-1.283.827-2.194.39c-.929-.417-1.45-1.284-1.15-1.922c.276-.655 1.279-.838 2.205-.399c.93.418 1.46 1.293 1.139 1.931zm6.296 5.618c-.61.566-1.804.303-2.614-.591c-.837-.892-.994-2.086-.375-2.66c.63-.566 1.787-.301 2.626.591c.838.903 1 2.088.363 2.66zm4.32 7.188c-.785.545-2.067.034-2.86-1.104c-.784-1.138-.784-2.503.017-3.05c.795-.547 2.058-.055 2.861 1.075c.782 1.157.782 2.522-.019 3.08zm7.304 8.325c-.701.774-2.196.566-3.29-.49c-1.119-1.032-1.43-2.496-.726-3.27c.71-.776 2.213-.558 3.315.49c1.11 1.03 1.45 2.505.701 3.27zm9.442 2.81c-.31 1.003-1.75 1.459-3.199 1.033c-1.448-.439-2.395-1.613-2.103-2.626c.301-1.01 1.747-1.484 3.207-1.028c1.446.436 2.396 1.602 2.095 2.622zm10.744 1.193c.036 1.055-1.193 1.93-2.715 1.95c-1.53.034-2.769-.82-2.786-1.86c0-1.065 1.202-1.932 2.733-1.958c1.522-.03 2.768.818 2.768 1.868zm10.555-.405c.182 1.03-.875 2.088-2.387 2.37c-1.485.271-2.861-.365-3.05-1.386c-.184-1.056.893-2.114 2.376-2.387c1.514-.263 2.868.356 3.061 1.403z" fill="currentColor"></path></svg> 112,792</a></div></div> <nav class="top-32 hidden lg:flex absolute bottom-0 left-0 w-full flex-col overflow-y-auto border-r px-4 pt-3 pb-16 text-[0.95rem] lg:w-[270px] 2xl:w-[300px]"> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Get started</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/index">🤗 Transformers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/quicktour">Quick tour </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 
first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/installation">Installation </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Tutorials</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pipeline_tutorial">Run inference with pipelines </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/autoclass_tutorial">Write portable code with AutoClass </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/preprocessing">Preprocess data </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/training">Fine-tune a pretrained model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/run_scripts">Train with a script </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/accelerate">Set up distributed training with 🤗 Accelerate </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/peft">Load and train adapters with 🤗 PEFT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/model_sharing">Share your model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/transformers_agents">Agents </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/llm_tutorial">Generation with LLMs </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Task Guides</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 
hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Natural Language Processing</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Computer Vision</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Generation</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Prompting</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Developer guides</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/fast_tokenizers">Use fast tokenizers from 🤗 Tokenizers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/multilingual">Run inference with multilingual models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/create_a_model">Use model-specific APIs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/custom_models">Share a custom model </a><a data-sveltekit-reload="" 
class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/chat_templating">Templates for chat models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/sagemaker">Run training on Amazon SageMaker </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/serialization">Export to ONNX </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tflite">Export to TFLite </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/torchscript">Export to TorchScript </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/benchmarks">Benchmarks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/notebooks">Notebooks with examples </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/community">Community resources </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/custom_tools">Custom Tools and Prompts </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/troubleshooting">Troubleshoot </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Performance and scalability</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/performance">Overview </a><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Efficient training techniques</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 
hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_gpu_one">Methods and tools for efficient training on a single GPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_gpu_many">Multiple GPUs and parallelism </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_cpu">Efficient training on CPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_cpu_many">Distributed CPU training </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_tpu">Training on TPUs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_tpu_tf">Training on TPU with TensorFlow </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_special">Training on Specialized Hardware </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_hardware">Custom hardware for training </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/hpo_train">Hyperparameter Search using Trainer API </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Optimizing inference</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_cpu">Inference on CPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_gpu_one">Inference on one GPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_gpu_many">Inference on many GPUs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" 
href="/docs/transformers/v4.34.0/en/perf_infer_special">Inference on Specialized Hardware </a> </div><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/big_models">Instantiating a big model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/debugging">Troubleshooting </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tf_xla">XLA Integration for TensorFlow Models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/perf_torch_compile">Optimize inference using `torch.compile()` </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Contribute</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/contributing">How to contribute to transformers? </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/add_new_model">How to add a model to 🤗 Transformers? </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/add_tensorflow_model">How to convert a 🤗 Transformers model to TensorFlow? </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/add_new_pipeline">How to add a pipeline to 🤗 Transformers? 
fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Switch between documentation themes </div></div></div> <div class="flex items-center space-x-2.5"><a href="/join"><button class="rounded-lg bg-white bg-gradient-to-br from-gray-100/20 to-gray-200/60 py-1.5 px-5 font-semibold text-gray-700 shadow-sm ring-1 ring-gray-300/60 hover:to-gray-100/70 hover:ring-gray-300/30 active:shadow-inner">Sign Up</button></a> <p class="text-gray-500 dark:text-gray-300">to get started</p></div></div></div> <div class="prose-doc prose relative mx-auto max-w-4xl break-words"> <p></p> <h1 class="relative group"><a id="wav2vec2phoneme" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#wav2vec2phoneme"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-dw0x3d">Wav2Vec2Phoneme</span></h1> <h2 class="relative group"><a id="overview" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#overview"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1jsw1pg">Overview</span></h2> <p data-svelte-h="svelte-1gxcmtl">The Wav2Vec2Phoneme model was proposed in <a href="https://arxiv.org/abs/2109.11680" rel="nofollow">Simple and Effective Zero-shot Cross-lingual Phoneme Recognition (Xu et al., 2021</a> by Qiantong Xu, Alexei Baevski, Michael Auli.</p> <p data-svelte-h="svelte-vfdo9a">The abstract from the paper is the following:</p> <p data-svelte-h="svelte-148nuc2"><em>Recent progress in self-training, self-supervised pretraining and unsupervised learning enabled well performing speech recognition systems without any labeled data. However, in many cases there is labeled data available for related languages which is not utilized by these methods. 
This paper extends previous work on zero-shot cross-lingual transfer learning by fine-tuning a multilingually pretrained wav2vec 2.0 model to transcribe unseen languages. This is done by mapping phonemes of the training languages to the target language using articulatory features. Experiments show that this simple method significantly outperforms prior work which introduced task-specific architectures and used only part of a monolingually pretrained model.</em></p> <p data-svelte-h="svelte-axv494">Tips:</p> <ul data-svelte-h="svelte-1o1weh8"><li>Wav2Vec2Phoneme uses the exact same architecture as Wav2Vec2</li> <li>Wav2Vec2Phoneme is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.</li> <li>Wav2Vec2Phoneme model was trained using connectionist temporal classification (CTC) so the model output has to be decoded using <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme#transformers.Wav2Vec2PhonemeCTCTokenizer">Wav2Vec2PhonemeCTCTokenizer</a>.</li> <li>Wav2Vec2Phoneme can be fine-tuned on multiple language at once and decode unseen languages in a single forward pass to a sequence of phonemes</li> <li>By default the model outputs a sequence of phonemes. In order to transform the phonemes to a sequence of words one should make use of a dictionary and language model.</li></ul> <p data-svelte-h="svelte-5tdu8e">Relevant checkpoints can be found under <a href="https://huggingface.co/models?other=phoneme-recognition" rel="nofollow">https://huggingface.co/models?other=phoneme-recognition</a>.</p> <p data-svelte-h="svelte-13jbx2b">This model was contributed by <a href="https://huggingface.co/patrickvonplaten" rel="nofollow">patrickvonplaten</a></p> <p data-svelte-h="svelte-12gzw10">The original code can be found <a href="https://github.com/pytorch/fairseq/tree/master/fairseq/models/wav2vec" rel="nofollow">here</a>.</p> <p data-svelte-h="svelte-1o8zv3q">Wav2Vec2Phoneme’s architecture is based on the Wav2Vec2 model, so one can refer to <code>Wav2Vec2</code>’s documentation page except for the tokenizer.</p> <h2 class="relative group"><a id="transformers.Wav2Vec2PhonemeCTCTokenizer" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2PhonemeCTCTokenizer"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-n06dd4">Wav2Vec2PhonemeCTCTokenizer</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.Wav2Vec2PhonemeCTCTokenizer"><h3 class="!m-0"><span class="flex-1 break-all 
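The snippet below is a minimal sketch of transcribing speech to phonemes. The checkpoint `facebook/wav2vec2-lv-60-espeak-cv-ft` and the small LibriSpeech dummy dataset are used purely for illustration; any checkpoint from the phoneme-recognition link above could be substituted.

```python
import torch
from datasets import load_dataset
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Illustrative checkpoint: a wav2vec 2.0 model fine-tuned for phoneme recognition
checkpoint = "facebook/wav2vec2-lv-60-espeak-cv-ft"
processor = Wav2Vec2Processor.from_pretrained(checkpoint)
model = Wav2Vec2ForCTC.from_pretrained(checkpoint)

# Load a short 16 kHz speech sample (illustrative dataset)
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt")

# Forward pass and greedy CTC decoding to a sequence of phonemes
with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
phonemes = processor.batch_decode(predicted_ids)
print(phonemes)
```

Since the model is trained with CTC, `batch_decode` collapses repeated predictions and strips the padding (blank) token before returning the space-delimited phoneme string.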
## Wav2Vec2PhonemeCTCTokenizer

`class transformers.Wav2Vec2PhonemeCTCTokenizer(vocab_file, bos_token='<s>', eos_token='</s>', unk_token='<unk>', pad_token='<pad>', phone_delimiter_token=' ', word_delimiter_token=None, do_phonemize=True, phonemizer_lang='en-us', phonemizer_backend='espeak', **kwargs)` — [\< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2_phoneme/tokenization_wav2vec2_phoneme.py#L94)

Parameters:

- **vocab_file** (`str`) — File containing the vocabulary.
- **bos_token** (`str`, *optional*, defaults to `"<s>"`) — The beginning of sentence token.
- **eos_token** (`str`, *optional*, defaults to `"</s>"`) — The end of sentence token.
- **unk_token** (`str`, *optional*, defaults to `"<unk>"`) — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
- **pad_token** (`str`, *optional*, defaults to `"<pad>"`) — The token used for padding, for example when batching sequences of different lengths.
- **do_phonemize** (`bool`, *optional*, defaults to `True`) — Whether or not the tokenizer should phonemize the input. `do_phonemize` should be set to `False` only when a sequence of phonemes is passed to the tokenizer.
- **phonemizer_lang** (`str`, *optional*, defaults to `"en-us"`) — The language of the phoneme set to which the tokenizer should phonemize the input text.
- **phonemizer_backend** (`str`, *optional*, defaults to `"espeak"`) — The backend phonetization library to be used by the phonemizer library. Defaults to `espeak-ng`. See the [phonemizer package](https://github.com/bootphon/phonemizer#readme) for more information.
- `**kwargs` — Additional keyword arguments passed along to [PreTrainedTokenizer](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer).

Constructs a Wav2Vec2PhonemeCTC tokenizer.

This tokenizer inherits from [PreTrainedTokenizer](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer), which contains some of the main methods. Users should refer to the superclass for more information regarding such methods.
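As a rough usage sketch (again with an illustrative checkpoint name), the tokenizer can be loaded with `from_pretrained` and used either on raw text, in which case phonemization requires the `phonemizer` package with an espeak backend installed, or on text that has already been phonemized:

```python
from transformers import Wav2Vec2PhonemeCTCTokenizer

# Illustrative phoneme-recognition checkpoint
tokenizer = Wav2Vec2PhonemeCTCTokenizer.from_pretrained("facebook/wav2vec2-lv-60-espeak-cv-ft")

# With do_phonemize=True (the default), raw text is first converted to phonemes
# by the phonemizer backend before being mapped to vocabulary ids
encoding = tokenizer("Hello how are you")
print(encoding.input_ids)

# If the input is already phonemized, disable phonemization at load time
phoneme_tokenizer = Wav2Vec2PhonemeCTCTokenizer.from_pretrained(
    "facebook/wav2vec2-lv-60-espeak-cv-ft", do_phonemize=False
)
# Phoneme symbols below are illustrative; they are split on the phone delimiter token (a space)
encoding = phoneme_tokenizer("h ə l oʊ h aʊ ɑːɹ j uː")
print(encoding.input_ids)
```

Note that the phonemizer backend is only needed when `do_phonemize=True`; with `do_phonemize=False`, the input is expected to already consist of phonemes separated by the phone delimiter token.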
data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">text<span class="opacity-60">: typing.Union[str, typing.List[str], typing.List[typing.List[str]]] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">text_pair<span class="opacity-60">: typing.Union[str, typing.List[str], typing.List[typing.List[str]], NoneType] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">text_target<span class="opacity-60">: typing.Union[str, typing.List[str], typing.List[typing.List[str]]] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">text_pair_target<span class="opacity-60">: typing.Union[str, typing.List[str], typing.List[typing.List[str]], NoneType] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">add_special_tokens<span class="opacity-60">: bool = True</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">padding<span class="opacity-60">: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">truncation<span class="opacity-60">: typing.Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">max_length<span class="opacity-60">: typing.Optional[int] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">stride<span class="opacity-60">: int = 0</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">is_split_into_words<span class="opacity-60">: bool = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">pad_to_multiple_of<span class="opacity-60">: typing.Optional[int] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_tensors<span class="opacity-60">: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_token_type_ids<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_attention_mask<span class="opacity-60">: 
typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_overflowing_tokens<span class="opacity-60">: bool = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_special_tokens_mask<span class="opacity-60">: bool = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_offsets_mapping<span class="opacity-60">: bool = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_length<span class="opacity-60">: bool = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">verbose<span class="opacity-60">: bool = True</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.BatchEncoding">BatchEncoding</a></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 19 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2PhonemeCTCTokenizer.__call__.text" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2PhonemeCTCTokenizer.__call__.text"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>text</strong> (<code>str</code>, <code>List[str]</code>, <code>List[List[str]]</code>, <em>optional</em>) — The sequence or batch of sequences to be encoded. 
Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set <code>is_split_into_words=True</code> (to lift the ambiguity with a batch of sequences).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2PhonemeCTCTokenizer.__call__.text_pair" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2PhonemeCTCTokenizer.__call__.text_pair"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>text_pair</strong> (<code>str</code>, <code>List[str]</code>, <code>List[List[str]]</code>, <em>optional</em>) — The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set <code>is_split_into_words=True</code> (to lift the ambiguity with a batch of sequences).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2PhonemeCTCTokenizer.__call__.text_target" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2PhonemeCTCTokenizer.__call__.text_target"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>text_target</strong> (<code>str</code>, <code>List[str]</code>, <code>List[List[str]]</code>, <em>optional</em>) — The sequence or batch of sequences to be encoded as target texts. Each sequence can be a string or a list of strings (pretokenized string). 
If the sequences are provided as list of strings (pretokenized), you must set <code>is_split_into_words=True</code> (to lift the ambiguity with a batch of sequences).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2PhonemeCTCTokenizer.__call__.text_pair_target" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2PhonemeCTCTokenizer.__call__.text_pair_target"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>text_pair_target</strong> (<code>str</code>, <code>List[str]</code>, <code>List[List[str]]</code>, <em>optional</em>) — The sequence or batch of sequences to be encoded as target texts. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set <code>is_split_into_words=True</code> (to lift the ambiguity with a batch of sequences).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2PhonemeCTCTokenizer.__call__.add_special_tokens" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2PhonemeCTCTokenizer.__call__.add_special_tokens"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>add_special_tokens</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether or not to add special tokens when encoding the sequences. This will use the underlying <code>PretrainedTokenizerBase.build_inputs_with_special_tokens</code> function, which defines which tokens are automatically added to the input ids. 
This is usefull if you want to add <code>bos</code> or <code>eos</code> tokens automatically.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2PhonemeCTCTokenizer.__call__.padding" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2PhonemeCTCTokenizer.__call__.padding"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>padding</strong> (<code>bool</code>, <code>str</code> or <a href="/docs/transformers/v4.34.0/en/internal/file_utils#transformers.utils.PaddingStrategy">PaddingStrategy</a>, <em>optional</em>, defaults to <code>False</code>) — Activates and controls padding. Accepts the following values:<p></p> <ul> <li><code>True</code> or <code>'longest'</code>: Pad to the longest sequence in the batch (or no padding if only a single sequence if provided).</li> <li><code>'max_length'</code>: Pad to a maximum length specified with the argument <code>max_length</code> or to the maximum acceptable input length for the model if that argument is not provided.</li> <li><code>False</code> or <code>'do_not_pad'</code> (default): No padding (i.e., can output a batch with sequences of different lengths).</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2PhonemeCTCTokenizer.__call__.truncation" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2PhonemeCTCTokenizer.__call__.truncation"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>truncation</strong> (<code>bool</code>, <code>str</code> or <a href="/docs/transformers/v4.34.0/en/internal/tokenization_utils#transformers.tokenization_utils_base.TruncationStrategy">TruncationStrategy</a>, <em>optional</em>, defaults to 
<code>False</code>) — Activates and controls truncation. Accepts the following values:<p></p> <ul> <li><code>True</code> or <code>'longest_first'</code>: Truncate to a maximum length specified with the argument <code>max_length</code> or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.</li> <li><code>'only_first'</code>: Truncate to a maximum length specified with the argument <code>max_length</code> or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.</li> <li><code>'only_second'</code>: Truncate to a maximum length specified with the argument <code>max_length</code> or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.</li> <li><code>False</code> or <code>'do_not_truncate'</code> (default): No truncation (i.e., can output batch with sequence lengths greater than the model maximum admissible input size).</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2PhonemeCTCTokenizer.__call__.max_length" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2PhonemeCTCTokenizer.__call__.max_length"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>max_length</strong> (<code>int</code>, <em>optional</em>) — Controls the maximum length to use by one of the truncation/padding parameters.<p></p> <p>If left unset or set to <code>None</code>, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. 
If the model has no specific maximum input length (like XLNet) truncation/padding to a maximum length will be deactivated.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2PhonemeCTCTokenizer.__call__.stride" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2PhonemeCTCTokenizer.__call__.stride"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>stride</strong> (<code>int</code>, <em>optional</em>, defaults to 0) — If set to a number along with <code>max_length</code>, the overflowing tokens returned when <code>return_overflowing_tokens=True</code> will contain some tokens from the end of the truncated sequence returned to provide some overlap between truncated and overflowing sequences. The value of this argument defines the number of overlapping tokens.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2PhonemeCTCTokenizer.__call__.is_split_into_words" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2PhonemeCTCTokenizer.__call__.is_split_into_words"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>is_split_into_words</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether or not the input is already pre-tokenized (e.g., split into words). If set to <code>True</code>, the tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace) which it will tokenize. 
This is useful for NER or token classification.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2PhonemeCTCTokenizer.__call__.pad_to_multiple_of" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2PhonemeCTCTokenizer.__call__.pad_to_multiple_of"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>pad_to_multiple_of</strong> (<code>int</code>, <em>optional</em>) — If set will pad the sequence to a multiple of the provided value. Requires <code>padding</code> to be activated. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability <code>&gt;= 7.5</code> (Volta).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2PhonemeCTCTokenizer.__call__.return_tensors" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2PhonemeCTCTokenizer.__call__.return_tensors"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_tensors</strong> (<code>str</code> or <a href="/docs/transformers/v4.34.0/en/internal/file_utils#transformers.TensorType">TensorType</a>, <em>optional</em>) — If set, will return tensors instead of list of python integers. 
Acceptable values are:<p></p> <ul> <li><code>'tf'</code>: Return TensorFlow <code>tf.constant</code> objects.</li> <li><code>'pt'</code>: Return PyTorch <code>torch.Tensor</code> objects.</li> <li><code>'np'</code>: Return Numpy <code>np.ndarray</code> objects.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2PhonemeCTCTokenizer.__call__.return_token_type_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2PhonemeCTCTokenizer.__call__.return_token_type_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_token_type_ids</strong> (<code>bool</code>, <em>optional</em>) — Whether to return token type IDs. If left to the default, will return the token type IDs according to the specific tokenizer’s default, defined by the <code>return_outputs</code> attribute.<p></p> <p><a href="../glossary#token-type-ids">What are token type IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2PhonemeCTCTokenizer.__call__.return_attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2PhonemeCTCTokenizer.__call__.return_attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_attention_mask</strong> (<code>bool</code>, <em>optional</em>) — Whether to return the attention mask. 
If left to the default, will return the attention mask according to the specific tokenizer’s default, defined by the <code>return_outputs</code> attribute.<p></p> <p><a href="../glossary#attention-mask">What are attention masks?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2PhonemeCTCTokenizer.__call__.return_overflowing_tokens" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2PhonemeCTCTokenizer.__call__.return_overflowing_tokens"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_overflowing_tokens</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether or not to return overflowing token sequences. If a pair of sequences of input ids (or a batch of pairs) is provided with <code>truncation_strategy = longest_first</code> or <code>True</code>, an error is raised instead of returning overflowing tokens.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2PhonemeCTCTokenizer.__call__.return_special_tokens_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2PhonemeCTCTokenizer.__call__.return_special_tokens_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_special_tokens_mask</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether or not to return special tokens mask information.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2PhonemeCTCTokenizer.__call__.return_offsets_mapping" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 
with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2PhonemeCTCTokenizer.__call__.return_offsets_mapping"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_offsets_mapping</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether or not to return <code>(char_start, char_end)</code> for each token.<p></p> <p>This is only available on fast tokenizers inheriting from <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast">PreTrainedTokenizerFast</a>, if using Python’s tokenizer, this method will raise <code>NotImplementedError</code>.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2PhonemeCTCTokenizer.__call__.return_length" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2PhonemeCTCTokenizer.__call__.return_length"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_length</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether or not to return the lengths of the encoded inputs.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2PhonemeCTCTokenizer.__call__.verbose" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2PhonemeCTCTokenizer.__call__.verbose"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 
0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>verbose</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether or not to print more information and warnings. **kwargs — passed to the <code>self.tokenize()</code> method</span></span> </li></ul> <div id="transformers.Wav2Vec2PhonemeCTCTokenizer.__call__.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.BatchEncoding">BatchEncoding</a></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.BatchEncoding">BatchEncoding</a> with the following fields:</p> <ul> <li> <p><strong>input_ids</strong> — List of token ids to be fed to a model.</p> <p><a href="../glossary#input-ids">What are input IDs?</a></p> </li> <li> <p><strong>token_type_ids</strong> — List of token type ids to be fed to a model (when <code>return_token_type_ids=True</code> or if <em>“token_type_ids”</em> is in <code>self.model_input_names</code>).</p> <p><a href="../glossary#token-type-ids">What are token type IDs?</a></p> </li> <li> <p><strong>attention_mask</strong> — List of indices specifying which tokens should be attended to by the model (when <code>return_attention_mask=True</code> or if <em>“attention_mask”</em> is in <code>self.model_input_names</code>).</p> <p><a href="../glossary#attention-mask">What are attention masks?</a></p> </li> <li> <p><strong>overflowing_tokens</strong> — List of overflowing tokens sequences (when a <code>max_length</code> is specified and <code>return_overflowing_tokens=True</code>).</p> </li> <li> <p><strong>num_truncated_tokens</strong> — Number of tokens truncated (when a <code>max_length</code> is specified and <code>return_overflowing_tokens=True</code>).</p> </li> <li> <p><strong>special_tokens_mask</strong> — List of 0s and 1s, with 1 specifying added special tokens and 0 specifying regular sequence tokens (when <code>add_special_tokens=True</code> and <code>return_special_tokens_mask=True</code>).</p> </li> <li> <p><strong>length</strong> — The length of the inputs (when <code>return_length=True</code>)</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-kpxj0c">Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of sequences.</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.Wav2Vec2PhonemeCTCTokenizer.batch_decode"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 
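The snippet below is a minimal usage sketch of this method. The checkpoint name is an assumption picked for illustration, and the `phonemizer` package (with an espeak backend) is assumed to be installed so the tokenizer can phonemize raw text:

```python
from transformers import Wav2Vec2PhonemeCTCTokenizer

# Assumed checkpoint for illustration; any Wav2Vec2Phoneme tokenizer checkpoint
# should behave the same way. Requires the `phonemizer` package (espeak backend).
tokenizer = Wav2Vec2PhonemeCTCTokenizer.from_pretrained("facebook/wav2vec2-lv-60-espeak-cv-ft")

# Encode a single sentence: the text is phonemized first, then mapped to ids.
single = tokenizer("Hello world")
print(single["input_ids"])

# Encode a batch, padded to the longest sequence and returned as PyTorch tensors.
batch = tokenizer(
    ["Hello world", "A somewhat longer sentence"],
    padding=True,
    return_tensors="pt",
)
print(batch["input_ids"].shape)
```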
#### batch_decode

( sequences: typing.Union[typing.List[int], typing.List[typing.List[int]], np.ndarray, torch.Tensor, tf.Tensor] skip_special_tokens: bool = False clean_up_tokenization_spaces: bool = None output_char_offsets: bool = False **kwargs ) → `List[str]` or `~models.wav2vec2.tokenization_wav2vec2_phoneme.Wav2Vec2PhonemeCTCTokenizerOutput`

**Parameters**

- **sequences** (`Union[List[int], List[List[int]], np.ndarray, torch.Tensor, tf.Tensor]`): List of tokenized input ids. Can be obtained using the `__call__` method.
- **skip_special_tokens** (`bool`, *optional*, defaults to `False`): Whether or not to remove special tokens in the decoding.
- **clean_up_tokenization_spaces** (`bool`, *optional*): Whether or not to clean up the tokenization spaces.
- **output_char_offsets** (`bool`, *optional*, defaults to `False`): Whether or not to output character offsets. Character offsets can be used in combination with the sampling rate and model downsampling rate to compute the time-stamps of transcribed characters. Please take a look at the example of `~models.wav2vec2.tokenization_wav2vec2.decode` to better understand how to make use of `output_word_offsets`; `~model.wav2vec2_phoneme.tokenization_wav2vec2_phoneme.batch_decode` works analogously with phonemes and batched output.
- `**kwargs` (additional keyword arguments, *optional*): Will be passed to the underlying model specific decode method.

**Returns**: `List[str]` or `~models.wav2vec2.tokenization_wav2vec2_phoneme.Wav2Vec2PhonemeCTCTokenizerOutput`

The decoded sentence. Will be a `Wav2Vec2PhonemeCTCTokenizerOutput` when `output_char_offsets == True`.

Convert a list of lists of token ids into a list of strings by calling decode.
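As a hedged round-trip sketch (reusing the `tokenizer` instance from the `__call__` example above), the ids produced by the tokenizer can be mapped back to phoneme strings; in practice the ids would usually come from argmax-ing the logits of a CTC model:

```python
# Encode a small batch, then decode the ids back to phoneme strings.
encoded = tokenizer(["hello world", "how are you"], padding=True)
print(tokenizer.batch_decode(encoded["input_ids"]))

# With output_char_offsets=True the return value is a Wav2Vec2PhonemeCTCTokenizerOutput;
# its `text` and `char_offsets` fields hold the strings and per-phoneme offsets.
outputs = tokenizer.batch_decode(encoded["input_ids"], output_char_offsets=True)
print(outputs.text)
```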
class="opacity-60">: typing.Union[int, typing.List[int], ForwardRef('np.ndarray'), ForwardRef('torch.Tensor'), ForwardRef('tf.Tensor')]</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">skip_special_tokens<span class="opacity-60">: bool = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">clean_up_tokenization_spaces<span class="opacity-60">: bool = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_char_offsets<span class="opacity-60">: bool = False</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><code>str</code> or <code>~models.wav2vec2.tokenization_wav2vec2_phoneme.Wav2Vec2PhonemeCTCTokenizerOutput</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 5 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2PhonemeCTCTokenizer.decode.token_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2PhonemeCTCTokenizer.decode.token_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_ids</strong> (<code>Union[int, List[int], np.ndarray, torch.Tensor, tf.Tensor]</code>) — List of tokenized input ids. 
Can be obtained using the <code>__call__</code> method.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2PhonemeCTCTokenizer.decode.skip_special_tokens" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2PhonemeCTCTokenizer.decode.skip_special_tokens"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>skip_special_tokens</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether or not to remove special tokens in the decoding.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2PhonemeCTCTokenizer.decode.clean_up_tokenization_spaces" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2PhonemeCTCTokenizer.decode.clean_up_tokenization_spaces"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>clean_up_tokenization_spaces</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to clean up the tokenization spaces.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2PhonemeCTCTokenizer.decode.output_char_offsets" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2PhonemeCTCTokenizer.decode.output_char_offsets"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 
1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_char_offsets</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether or not to output character offsets. Character offsets can be used in combination with the sampling rate and model downsampling rate to compute the time-stamps of transcribed characters.<p></p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"> <p>Please take a look at the Example of <code>~models.wav2vec2.tokenization_wav2vec2.decode</code> to better understand how to make use of <code>output_word_offsets</code>. <code>~model.wav2vec2_phoneme.tokenization_wav2vec2_phoneme.batch_decode</code> works the same way with phonemes.</p> </div></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2PhonemeCTCTokenizer.decode.kwargs" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2PhonemeCTCTokenizer.decode.kwargs"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>kwargs</strong> (additional keyword arguments, <em>optional</em>) — Will be passed to the underlying model specific decode method.</span></span> </li></ul> <div id="transformers.Wav2Vec2PhonemeCTCTokenizer.decode.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><code>str</code> or <code>~models.wav2vec2.tokenization_wav2vec2_phoneme.Wav2Vec2PhonemeCTCTokenizerOutput</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>The decoded sentence. 
Will be a <code>~models.wav2vec2.tokenization_wav2vec2_phoneme.Wav2Vec2PhonemeCTCTokenizerOutput</code> when <code>output_char_offsets == True</code>.</p> </p> </div></div> <p data-svelte-h="svelte-vbfkpu">Converts a sequence of ids in a string, using the tokenizer and vocabulary with options to remove special tokens and clean up tokenization spaces.</p> <p data-svelte-h="svelte-125uxon">Similar to doing <code>self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids))</code>.</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.Wav2Vec2PhonemeCTCTokenizer.phonemize"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>phonemize</span></h4> <a id="transformers.Wav2Vec2PhonemeCTCTokenizer.phonemize" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.Wav2Vec2PhonemeCTCTokenizer.phonemize"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2_phoneme/tokenization_wav2vec2_phoneme.py#L268" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm 
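A matching single-sequence sketch for `decode` (again reusing the `tokenizer` from the `__call__` example above):

```python
# Decode a single list of ids back to a phoneme string.
ids = tokenizer("hello world")["input_ids"]
print(tokenizer.decode(ids))

# With character offsets: each entry pairs a phoneme with start/end positions, which
# can be turned into timestamps when the ids come from a CTC model's frame predictions.
with_offsets = tokenizer.decode(ids, output_char_offsets=True)
print(with_offsets.text)
print(with_offsets.char_offsets[:3])
```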
!leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">text<span class="opacity-60">: str</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">phonemizer_lang<span class="opacity-60">: typing.Optional[str] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> </div></div></div></div> <p></p> <div id="svelte-announcer" aria-live="assertive" aria-atomic="true" style="position: absolute; left: 0px; top: 0px; clip: rect(0px, 0px, 0px, 0px); clip-path: inset(50%); overflow: hidden; white-space: nowrap; width: 1px; height: 1px;"></div></div> <div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>Wav2Vec2-Conformer</a> <a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/wavlm" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">WavLM<span class="ml-2 translate-y-px">→</span></a></div></div></div> <div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" data-props="{&quot;chapter&quot;:{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;wav2vec2phoneme&quot;,&quot;url&quot;:&quot;#wav2vec2phoneme&quot;,&quot;sections&quot;:[{&quot;title&quot;:&quot;Overview&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;overview&quot;,&quot;url&quot;:&quot;#overview&quot;},{&quot;title&quot;:&quot;Wav2Vec2PhonemeCTCTokenizer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.Wav2Vec2PhonemeCTCTokenizer&quot;,&quot;url&quot;:&quot;#transformers.Wav2Vec2PhonemeCTCTokenizer&quot;}]}}" data-target="SubSideMenu"><nav class="hidden h-screen w-[270px] flex-none flex-col space-y-3 overflow-y-auto break-words border-l pt-24 pl-6 pr-10 pb-16 text-sm lg:flex 2xl:w-[305px]"><a href="#wav2vec2phoneme" class=" text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-wav2vec2phoneme"><wbr>Wav2<wbr>Vec2<wbr>Phoneme</a> <a href="#overview" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-overview"><wbr>Overview</a> <a href="#transformers.Wav2Vec2PhonemeCTCTokenizer" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.Wav2Vec2PhonemeCTCTokenizer"><wbr>Wav2<wbr>Vec2<wbr>PhonemeCTC<wbr>Tokenizer</a> </nav></div></div></div> <div id="doc-footer"></div></main> </div> <script> import("/front/build/kube-b0520c1/index.js"); window.moonSha = "kube-b0520c1/"; window.hubConfig = JSON.parse(`{"features":{"signupDisabled":false},"sshGitUrl":"git@hf.co","moonHttpUrl":"https://huggingface.co","captchaApiKey":"bd5f2066-93dc-4bdd-a64b-a24646ca3859","stripePublicKey":"pk_live_x2tdjFXBCvXo2FFmMybezpeM00J6gPCAAc","environment":"production","userAgent":"HuggingFace (production)"}`); </script> <!-- Stripe --> <script> if (["hf.co", 
"huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://js.stripe.com/v3/"; script.async = true; document.head.appendChild(script); } </script> <!-- Google analytics v4 --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL"; script.async = true; document.head.appendChild(script); window.dataLayer = window.dataLayer || []; function gtag() { if (window.dataLayer !== undefined) { window.dataLayer.push(arguments); } } gtag("js", new Date()); gtag("config", "G-8Q63TH4CSL", { page_path: "/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme" }); /// ^ See https://developers.google.com/analytics/devguides/collection/gtagjs/pages gtag("consent", "default", { ad_storage: "denied", analytics_storage: "denied" }); /// ^ See https://developers.google.com/tag-platform/gtagjs/reference#consent /// TODO: ask the user for their consent and update this with gtag('consent', 'update') } </script> <!-- Google Analytics v3 (deprecated) --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { (function (i, s, o, g, r, a, m) { i["GoogleAnalyticsObject"] = r; (i[r] = i[r] || function () { (i[r].q = i[r].q || []).push(arguments); }), (i[r].l = 1 * new Date()); (a = s.createElement(o)), (m = s.getElementsByTagName(o)[0]); a.async = 1; a.src = g; m.parentNode.insertBefore(a, m); })(window, document, "script", "https://www.google-analytics.com/analytics.js", "ganalytics"); ganalytics("create", "UA-83738774-2", "auto"); ganalytics("send", "pageview", "/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme"); } </script> <iframe name="__privateStripeMetricsController8020" frameborder="0" allowtransparency="true" scrolling="no" role="presentation" allow="payment *" src="https://js.stripe.com/v3/m-outer-27c67c0d52761104439bb051c7856ab1.html#url=https%3A%2F%2Fhuggingface.co%2Fdocs%2Ftransformers%2Fv4.34.0%2Fen%2Fmodel_doc%2Fwav2vec2_phoneme&amp;title=Wav2Vec2Phoneme&amp;referrer=&amp;muid=b15a8ef9-7618-4d98-9abd-1d7fdb18f47df4c702&amp;sid=0da2c795-975c-45a5-a090-0475ca1e345f07aeed&amp;version=6&amp;preview=false" aria-hidden="true" tabindex="-1" style="border: none !important; margin: 0px !important; padding: 0px !important; width: 1px !important; min-width: 100% !important; overflow: hidden !important; display: block !important; visibility: hidden !important; position: fixed !important; height: 1px !important; pointer-events: none !important; user-select: none !important;"></iframe></body></html>
2023-10-05T13:33:29.310Z
Wav2Vec2-Conformer
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer
# Wav2Vec2-Conformer

## Overview

The Wav2Vec2-Conformer was added to an updated version of [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino.

The official results of the model can be found in Table 3 and Table 4 of the paper.

The Wav2Vec2-Conformer weights were released by the Meta AI team within the [Fairseq library](https://github.com/pytorch/fairseq/blob/main/examples/wav2vec/README.md#pre-trained-models).

Tips:

- Wav2Vec2-Conformer follows the same architecture as Wav2Vec2, but replaces the _Attention_-block with a _Conformer_-block as introduced in [Conformer: Convolution-augmented Transformer for Speech Recognition](https://arxiv.org/abs/2005.08100).
- For the same number of layers, Wav2Vec2-Conformer requires more parameters than Wav2Vec2, but also yields an improved word error rate.
- Wav2Vec2-Conformer uses the same tokenizer and feature extractor as Wav2Vec2.
- Wav2Vec2-Conformer can use either no relative position embeddings, Transformer-XL-like position embeddings, or rotary position embeddings by setting the correct `config.position_embeddings_type` (see the sketch below).
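The position embedding variant from the last tip is chosen when the configuration is created. The following is a minimal sketch, not taken from the original reference: it assumes that `position_embeddings_type` accepts `"rotary"` for rotary embeddings, with `"relative"` being the documented default shown in the configuration further below.

```
from transformers import Wav2Vec2ConformerConfig, Wav2Vec2ConformerModel

# Sketch only: build a randomly initialised model that uses rotary position embeddings.
# The value "rotary" is an assumption based on the `rotary_embedding_base` parameter and
# the naming of the released "rope" checkpoints; "relative" is the documented default,
# and None would disable relative/rotary position embeddings altogether.
config = Wav2Vec2ConformerConfig(position_embeddings_type="rotary")
model = Wav2Vec2ConformerModel(config)

print(model.config.position_embeddings_type)  # "rotary"
```

Pretrained checkpoints already store the appropriate value in their configuration, so this choice only matters when instantiating a model from scratch.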
This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The original code can be found [here](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec).

## Documentation resources

- [Audio classification task guide](../tasks/audio_classification)
- [Automatic speech recognition task guide](../tasks/asr)

## Wav2Vec2ConformerConfig

### class transformers.Wav2Vec2ConformerConfig

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2_conformer/configuration_wav2vec2_conformer.py#L33)

( vocab\_size = None, hidden\_size = 768, num\_hidden\_layers = 12, num\_attention\_heads = 12, intermediate\_size = 3072, hidden\_act = 'gelu', hidden\_dropout = 0.1, activation\_dropout = 0.1, attention\_dropout = 0.1, feat\_proj\_dropout = 0.0, feat\_quantizer\_dropout = 0.0, final\_dropout = 0.1, layerdrop = 0.1, initializer\_range = 0.02, layer\_norm\_eps = 1e-05, feat\_extract\_norm = 'group', feat\_extract\_activation = 'gelu', conv\_dim = (512, 512, 512, 512, 512, 512, 512), conv\_stride = (5, 2, 2, 2, 2, 2, 2), conv\_kernel = (10, 3, 3, 3, 3, 2, 2), conv\_bias = False, num\_conv\_pos\_embeddings = 128, num\_conv\_pos\_embedding\_groups = 16, apply\_spec\_augment = True, mask\_time\_prob = 0.05, mask\_time\_length = 10, mask\_time\_min\_masks = 2, mask\_feature\_prob = 0.0, mask\_feature\_length = 10, mask\_feature\_min\_masks = 0, num\_codevectors\_per\_group = 320, num\_codevector\_groups = 2, contrastive\_logits\_temperature = 0.1, num\_negatives = 100, codevector\_dim = 256, proj\_codevector\_dim = 256, diversity\_loss\_weight = 0.1, ctc\_loss\_reduction = 'sum', ctc\_zero\_infinity = False, use\_weighted\_layer\_sum = False, classifier\_proj\_size = 256, tdnn\_dim = (512, 512, 512, 512, 1500), tdnn\_kernel = (5, 3, 3, 1, 1), tdnn\_dilation = (1, 2, 3, 1, 1), xvector\_output\_dim = 512, pad\_token\_id = 0, bos\_token\_id = 1, eos\_token\_id = 2, add\_adapter = False, adapter\_kernel\_size = 3, adapter\_stride = 2, num\_adapter\_layers = 3, output\_hidden\_size = None, position\_embeddings\_type = 'relative', rotary\_embedding\_base = 10000, max\_source\_positions = 5000, conv\_depthwise\_kernel\_size = 31, conformer\_conv\_dropout = 0.1, \*\*kwargs )

This is the configuration class to store the configuration of a [Wav2Vec2ConformerModel](/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer#transformers.Wav2Vec2ConformerModel). It is used to instantiate a Wav2Vec2Conformer model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a configuration similar to that of the Wav2Vec2Conformer [facebook/wav2vec2-conformer-rel-pos-large](https://huggingface.co/facebook/wav2vec2-conformer-rel-pos-large) architecture.

Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information.

Example:

```
>>> from transformers import Wav2Vec2ConformerConfig, Wav2Vec2ConformerModel

>>> # Initializing a Wav2Vec2Conformer facebook/wav2vec2-conformer-rel-pos-large style configuration
>>> configuration = Wav2Vec2ConformerConfig()

>>> # Initializing a model (with random weights) from the configuration
>>> model = Wav2Vec2ConformerModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```

## Wav2Vec2Conformer specific outputs

### class transformers.models.wav2vec2\_conformer.modeling\_wav2vec2\_conformer.Wav2Vec2ConformerForPreTrainingOutput

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py#L74)

( loss: typing.Optional\[torch.FloatTensor\] = None, projected\_states: FloatTensor = None, projected\_quantized\_states: FloatTensor = None, codevector\_perplexity: FloatTensor = None, hidden\_states: typing.Optional\[typing.Tuple\[torch.FloatTensor\]\] = None, attentions: typing.Optional\[typing.Tuple\[torch.FloatTensor\]\] = None, contrastive\_loss: typing.Optional\[torch.FloatTensor\] = None, diversity\_loss: typing.Optional\[torch.FloatTensor\] = None )

Output type of [Wav2Vec2ConformerForPreTraining](/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer#transformers.Wav2Vec2ConformerForPreTraining), with potential hidden states and attentions.

## Wav2Vec2ConformerModel

### class transformers.Wav2Vec2ConformerModel

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py#L1247)

( config: Wav2Vec2ConformerConfig )

Parameters

- **config** ([Wav2Vec2ConformerConfig](/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer#transformers.Wav2Vec2ConformerConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

The bare Wav2Vec2Conformer Model transformer outputting raw hidden-states without any specific head on top.

Wav2Vec2Conformer was proposed in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, etc.).

This model is a PyTorch [nn.Module](https://pytorch.org/docs/stable/nn.html#nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
#### forward

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py#L1320)

( input\_values: typing.Optional\[torch.Tensor\], attention\_mask: typing.Optional\[torch.Tensor\] = None, mask\_time\_indices: typing.Optional\[torch.FloatTensor\] = None, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.Wav2Vec2BaseModelOutput](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.modeling_outputs.Wav2Vec2BaseModelOutput) or `tuple(torch.FloatTensor)`

The [Wav2Vec2ConformerModel](/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer#transformers.Wav2Vec2ConformerModel) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

```
>>> from transformers import AutoProcessor, Wav2Vec2ConformerModel
>>> import torch
>>> from datasets import load_dataset

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")
>>> model = Wav2Vec2ConformerModel.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")

>>> # audio file is decoded on the fly
>>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> last_hidden_states = outputs.last_hidden_state
>>> list(last_hidden_states.shape)
[1, 292, 1024]
```

## Wav2Vec2ConformerForCTC

### class transformers.Wav2Vec2ConformerForCTC

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py#L1606)

( config, target\_lang: typing.Optional\[str\] = None )

Parameters

- **config** ([Wav2Vec2ConformerConfig](/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer#transformers.Wav2Vec2ConformerConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

Wav2Vec2Conformer Model with a `language modeling` head on top for Connectionist Temporal Classification (CTC).

Wav2Vec2Conformer was proposed in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, etc.).

This model is a PyTorch [nn.Module](https://pytorch.org/docs/stable/nn.html#nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
#### forward

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py#L1639)

( input\_values: typing.Optional\[torch.Tensor\], attention\_mask: typing.Optional\[torch.Tensor\] = None, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None, labels: typing.Optional\[torch.Tensor\] = None ) → [transformers.modeling\_outputs.CausalLMOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.CausalLMOutput) or `tuple(torch.FloatTensor)`

The [Wav2Vec2ConformerForCTC](/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer#transformers.Wav2Vec2ConformerForCTC) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

```
>>> from transformers import AutoProcessor, Wav2Vec2ConformerForCTC
>>> from datasets import load_dataset
>>> import torch

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")
>>> model = Wav2Vec2ConformerForCTC.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")

>>> # audio file is decoded on the fly
>>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_ids = torch.argmax(logits, dim=-1)

>>> # transcribe speech
>>> transcription = processor.batch_decode(predicted_ids)
>>> transcription[0]
'MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL'

>>> inputs["labels"] = processor(text=dataset[0]["text"], return_tensors="pt").input_ids

>>> # compute loss
>>> loss = model(**inputs).loss
>>> round(loss.item(), 2)
64.21
```
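For quick transcription without handling the processor and CTC decoding manually, the same checkpoint can usually also be run through the `automatic-speech-recognition` pipeline. The snippet below is a hedged sketch rather than part of the original reference; it assumes the CTC checkpoint used above is supported by the pipeline.

```
from datasets import load_dataset
from transformers import pipeline

# Sketch: the pipeline bundles feature extraction, the CTC model and decoding in one call.
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
transcriber = pipeline(
    "automatic-speech-recognition", model="facebook/wav2vec2-conformer-rope-large-960h-ft"
)

# The demo audio is already sampled at 16 kHz, which is what the model expects.
prediction = transcriber(dataset[0]["audio"]["array"])
print(prediction["text"])
```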
## Wav2Vec2ConformerForSequenceClassification

### class transformers.Wav2Vec2ConformerForSequenceClassification

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py#L1727)

( config )

Parameters

- **config** ([Wav2Vec2ConformerConfig](/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer#transformers.Wav2Vec2ConformerConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

Wav2Vec2Conformer Model with a sequence classification head on top (a linear layer over the pooled output) for tasks like SUPERB Keyword Spotting.

Wav2Vec2Conformer was proposed in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, etc.).

This model is a PyTorch [nn.Module](https://pytorch.org/docs/stable/nn.html#nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

#### forward

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py#L1762)

( input\_values: typing.Optional\[torch.Tensor\], attention\_mask: typing.Optional\[torch.Tensor\] = None, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None, labels: typing.Optional\[torch.Tensor\] = None ) → [transformers.modeling\_outputs.SequenceClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput) or `tuple(torch.FloatTensor)`

The [Wav2Vec2ConformerForSequenceClassification](/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer#transformers.Wav2Vec2ConformerForSequenceClassification) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

```
>>> from transformers import AutoFeatureExtractor, Wav2Vec2ConformerForSequenceClassification
>>> from datasets import load_dataset
>>> import torch

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")
>>> model = Wav2Vec2ConformerForSequenceClassification.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")

>>> # audio file is decoded on the fly
>>> inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_class_ids = torch.argmax(logits, dim=-1).item()
>>> predicted_label = model.config.id2label[predicted_class_ids]

>>> # compute loss - target_label is e.g. the first entry of the label vocabulary
>>> target_label = model.config.id2label[0]
>>> inputs["labels"] = torch.tensor([model.config.label2id[target_label]])
>>> loss = model(**inputs).loss
```

## Wav2Vec2ConformerForAudioFrameClassification

### class transformers.Wav2Vec2ConformerForAudioFrameClassification

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py#L1838)

( config )

Parameters

- **config** ([Wav2Vec2ConformerConfig](/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer#transformers.Wav2Vec2ConformerConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

Wav2Vec2Conformer Model with a frame classification head on top for tasks like Speaker Diarization.

Wav2Vec2Conformer was proposed in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, etc.).

This model is a PyTorch [nn.Module](https://pytorch.org/docs/stable/nn.html#nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

#### forward

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py#L1873)

( input\_values: typing.Optional\[torch.Tensor\], attention\_mask: typing.Optional\[torch.Tensor\] = None, labels: typing.Optional\[torch.Tensor\] = None, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.TokenClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.TokenClassifierOutput) or `tuple(torch.FloatTensor)`

The [Wav2Vec2ConformerForAudioFrameClassification](/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer#transformers.Wav2Vec2ConformerForAudioFrameClassification) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

```
>>> from transformers import AutoFeatureExtractor, Wav2Vec2ConformerForAudioFrameClassification
>>> from datasets import load_dataset
>>> import torch

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")
>>> model = Wav2Vec2ConformerForAudioFrameClassification.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")

>>> # audio file is decoded on the fly
>>> inputs = feature_extractor(dataset[0]["audio"]["array"], return_tensors="pt", sampling_rate=sampling_rate)
>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> probabilities = torch.sigmoid(logits[0])

>>> # labels is a one-hot array of shape (num_frames, num_speakers)
>>> labels = (probabilities > 0.5).long()
```

## Wav2Vec2ConformerForXVector

### class transformers.Wav2Vec2ConformerForXVector

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py#L1992)

( config )

Parameters

- **config** ([Wav2Vec2ConformerConfig](/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer#transformers.Wav2Vec2ConformerConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

Wav2Vec2Conformer Model with an XVector feature extraction head on top for tasks like Speaker Verification.

Wav2Vec2Conformer was proposed in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, etc.).

This model is a PyTorch [nn.Module](https://pytorch.org/docs/stable/nn.html#nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

#### forward

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py#L2045)

( input\_values: typing.Optional\[torch.Tensor\], attention\_mask: typing.Optional\[torch.Tensor\] = None, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None, labels: typing.Optional\[torch.Tensor\] = None ) → [transformers.modeling\_outputs.XVectorOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.XVectorOutput) or `tuple(torch.FloatTensor)`

The [Wav2Vec2ConformerForXVector](/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer#transformers.Wav2Vec2ConformerForXVector) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

```
>>> from transformers import AutoFeatureExtractor, Wav2Vec2ConformerForXVector
>>> from datasets import load_dataset
>>> import torch

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")
>>> model = Wav2Vec2ConformerForXVector.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")

>>> # audio file is decoded on the fly
>>> inputs = feature_extractor(
...     [d["array"] for d in dataset[:2]["audio"]], sampling_rate=sampling_rate, return_tensors="pt", padding=True
... )
>>> with torch.no_grad():
...     embeddings = model(**inputs).embeddings

>>> embeddings = torch.nn.functional.normalize(embeddings, dim=-1).cpu()

>>> # the resulting embeddings can be compared with cosine similarity
>>> cosine_sim = torch.nn.CosineSimilarity(dim=-1)
>>> similarity = cosine_sim(embeddings[0], embeddings[1])
>>> threshold = 0.7
>>> if similarity < threshold:
...     print("Speakers are not the same!")
```

## Wav2Vec2ConformerForPreTraining

### class transformers.Wav2Vec2ConformerForPreTraining

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py#L1385)

( config: Wav2Vec2ConformerConfig )

Parameters

- **config** ([Wav2Vec2ConformerConfig](/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer#transformers.Wav2Vec2ConformerConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

Wav2Vec2Conformer Model with a quantizer and `VQ` head on top.
Wav2Vec2Conformer was proposed in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, etc.).

This model is a PyTorch [nn.Module](https://pytorch.org/docs/stable/nn.html#nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

The [Wav2Vec2ConformerForPreTraining](/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer#transformers.Wav2Vec2ConformerForPreTraining) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

```
>>> import torch
>>> from transformers import AutoFeatureExtractor, Wav2Vec2ConformerForPreTraining
>>> from transformers.models.wav2vec2_conformer.modeling_wav2vec2_conformer import (
...     _compute_mask_indices,
...     _sample_negative_indices,
... )
>>> from datasets import load_dataset

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-conformer-rel-pos-large")
>>> model = Wav2Vec2ConformerForPreTraining.from_pretrained("facebook/wav2vec2-conformer-rel-pos-large")

>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> input_values = feature_extractor(ds[0]["audio"]["array"], return_tensors="pt").input_values

>>> # compute masked indices
>>> batch_size, raw_sequence_length = input_values.shape
>>> sequence_length = model._get_feat_extract_output_lengths(raw_sequence_length).item()
>>> mask_time_indices = _compute_mask_indices(
...     shape=(batch_size, sequence_length), mask_prob=0.2, mask_length=2
... )
>>> sampled_negative_indices = _sample_negative_indices(
...     features_shape=(batch_size, sequence_length),
...     num_negatives=model.config.num_negatives,
...     mask_time_indices=mask_time_indices,
... )
>>> mask_time_indices = torch.tensor(data=mask_time_indices, device=input_values.device, dtype=torch.long)
>>> sampled_negative_indices = torch.tensor(
...     data=sampled_negative_indices, device=input_values.device, dtype=torch.long
... )

>>> with torch.no_grad():
...     outputs = model(input_values, mask_time_indices=mask_time_indices)

>>> # compute cosine similarity between predicted (=projected_states) and target (=projected_quantized_states)
>>> cosine_sim = torch.cosine_similarity(outputs.projected_states, outputs.projected_quantized_states, dim=-1)

>>> # show that cosine similarity is much higher than random
>>> cosine_sim[mask_time_indices.to(torch.bool)].mean() > 0.5
tensor(True)

>>> # for contrastive loss training the model should be put into train mode
>>> model = model.train()
>>> loss = model(
...     input_values, mask_time_indices=mask_time_indices, sampled_negative_indices=sampled_negative_indices
... ).loss
```
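The fields of the `Wav2Vec2ConformerForPreTrainingOutput` documented above can also be inspected individually. The snippet below is a sketch continuing the example directly above, not part of the original reference; it assumes the total loss combines the contrastive term with the diversity term weighted by `config.diversity_loss_weight`, as in the wav2vec 2.0 objective.

```
>>> # continuing from the training-mode example above (sketch)
>>> outputs = model(
...     input_values, mask_time_indices=mask_time_indices, sampled_negative_indices=sampled_negative_indices
... )
>>> # individual loss terms returned in Wav2Vec2ConformerForPreTrainingOutput
>>> outputs.loss, outputs.contrastive_loss, outputs.diversity_loss
>>> # how evenly the quantizer codebook entries are being used
>>> outputs.codevector_perplexity
```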
Classes&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Agents and Tools&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/agent&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/agent&quot;},{&quot;title&quot;:&quot;Auto Classes&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/auto&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/auto&quot;},{&quot;title&quot;:&quot;Callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/callback&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/callback&quot;},{&quot;title&quot;:&quot;Configuration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/configuration&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/configuration&quot;},{&quot;title&quot;:&quot;Data Collator&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/data_collator&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/data_collator&quot;},{&quot;title&quot;:&quot;Keras callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/keras_callbacks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/keras_callbacks&quot;},{&quot;title&quot;:&quot;Logging&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/logging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/logging&quot;},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/model&quot;},{&quot;title&quot;:&quot;Text Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/text_generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/text_generation&quot;},{&quot;title&quot;:&quot;ONNX&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/onnx&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/onnx&quot;},{&quot;title&quot;:&quot;Optimization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/optimizer_schedules&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/optimizer_schedules&quot;},{&quot;title&quot;:&quot;Model outputs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/output&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/output&quot;},{&quot;title&quot;:&quot;Pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/pipelines&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/pipelines&quot;},{&quot;title&quot;:&quot;Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/processors&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/processors&quot;},{&quot;title&quot;:&quot;Quantization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/quantization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/quantization&quot;},{&quot;title&quot;:&quot;Tokenizer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/tokenizer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/tokenizer&quot;},{&quot;title&quot;:&quot;Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/trainer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/trainer&quot;},{&quot;title&quot;:&quot;DeepSpeed 
Integration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/deepspeed&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/deepspeed&quot;},{&quot;title&quot;:&quot;Feature Extractor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/feature_extractor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/feature_extractor&quot;},{&quot;title&quot;:&quot;Image Processor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/image_processor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/image_processor&quot;}]},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Text models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALBERT&quot;,&quot;id&quot;:&quot;model_doc/albert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/albert&quot;},{&quot;title&quot;:&quot;BART&quot;,&quot;id&quot;:&quot;model_doc/bart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bart&quot;},{&quot;title&quot;:&quot;BARThez&quot;,&quot;id&quot;:&quot;model_doc/barthez&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/barthez&quot;},{&quot;title&quot;:&quot;BARTpho&quot;,&quot;id&quot;:&quot;model_doc/bartpho&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bartpho&quot;},{&quot;title&quot;:&quot;BERT&quot;,&quot;id&quot;:&quot;model_doc/bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert&quot;},{&quot;title&quot;:&quot;BertGeneration&quot;,&quot;id&quot;:&quot;model_doc/bert-generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-generation&quot;},{&quot;title&quot;:&quot;BertJapanese&quot;,&quot;id&quot;:&quot;model_doc/bert-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-japanese&quot;},{&quot;title&quot;:&quot;Bertweet&quot;,&quot;id&quot;:&quot;model_doc/bertweet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bertweet&quot;},{&quot;title&quot;:&quot;BigBird&quot;,&quot;id&quot;:&quot;model_doc/big_bird&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/big_bird&quot;},{&quot;title&quot;:&quot;BigBirdPegasus&quot;,&quot;id&quot;:&quot;model_doc/bigbird_pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bigbird_pegasus&quot;},{&quot;title&quot;:&quot;BioGpt&quot;,&quot;id&quot;:&quot;model_doc/biogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/biogpt&quot;},{&quot;title&quot;:&quot;Blenderbot&quot;,&quot;id&quot;:&quot;model_doc/blenderbot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot&quot;},{&quot;title&quot;:&quot;Blenderbot 
Small&quot;,&quot;id&quot;:&quot;model_doc/blenderbot-small&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot-small&quot;},{&quot;title&quot;:&quot;BLOOM&quot;,&quot;id&quot;:&quot;model_doc/bloom&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bloom&quot;},{&quot;title&quot;:&quot;BORT&quot;,&quot;id&quot;:&quot;model_doc/bort&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bort&quot;},{&quot;title&quot;:&quot;ByT5&quot;,&quot;id&quot;:&quot;model_doc/byt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/byt5&quot;},{&quot;title&quot;:&quot;CamemBERT&quot;,&quot;id&quot;:&quot;model_doc/camembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/camembert&quot;},{&quot;title&quot;:&quot;CANINE&quot;,&quot;id&quot;:&quot;model_doc/canine&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/canine&quot;},{&quot;title&quot;:&quot;CodeGen&quot;,&quot;id&quot;:&quot;model_doc/codegen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/codegen&quot;},{&quot;title&quot;:&quot;CodeLlama&quot;,&quot;id&quot;:&quot;model_doc/code_llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/code_llama&quot;},{&quot;title&quot;:&quot;ConvBERT&quot;,&quot;id&quot;:&quot;model_doc/convbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convbert&quot;},{&quot;title&quot;:&quot;CPM&quot;,&quot;id&quot;:&quot;model_doc/cpm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpm&quot;},{&quot;title&quot;:&quot;CPMANT&quot;,&quot;id&quot;:&quot;model_doc/cpmant&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpmant&quot;},{&quot;title&quot;:&quot;CTRL&quot;,&quot;id&quot;:&quot;model_doc/ctrl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ctrl&quot;},{&quot;title&quot;:&quot;DeBERTa&quot;,&quot;id&quot;:&quot;model_doc/deberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta&quot;},{&quot;title&quot;:&quot;DeBERTa-v2&quot;,&quot;id&quot;:&quot;model_doc/deberta-v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta-v2&quot;},{&quot;title&quot;:&quot;DialoGPT&quot;,&quot;id&quot;:&quot;model_doc/dialogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dialogpt&quot;},{&quot;title&quot;:&quot;DistilBERT&quot;,&quot;id&quot;:&quot;model_doc/distilbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/distilbert&quot;},{&quot;title&quot;:&quot;DPR&quot;,&quot;id&quot;:&quot;model_doc/dpr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpr&quot;},{&quot;title&quot;:&quot;ELECTRA&quot;,&quot;id&quot;:&quot;model_doc/electra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/electra&quot;},{&quot;title&quot;:&quot;Encoder Decoder 
Models&quot;,&quot;id&quot;:&quot;model_doc/encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encoder-decoder&quot;},{&quot;title&quot;:&quot;ERNIE&quot;,&quot;id&quot;:&quot;model_doc/ernie&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie&quot;},{&quot;title&quot;:&quot;ErnieM&quot;,&quot;id&quot;:&quot;model_doc/ernie_m&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie_m&quot;},{&quot;title&quot;:&quot;ESM&quot;,&quot;id&quot;:&quot;model_doc/esm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/esm&quot;},{&quot;title&quot;:&quot;Falcon&quot;,&quot;id&quot;:&quot;model_doc/falcon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/falcon&quot;},{&quot;title&quot;:&quot;FLAN-T5&quot;,&quot;id&quot;:&quot;model_doc/flan-t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-t5&quot;},{&quot;title&quot;:&quot;FLAN-UL2&quot;,&quot;id&quot;:&quot;model_doc/flan-ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-ul2&quot;},{&quot;title&quot;:&quot;FlauBERT&quot;,&quot;id&quot;:&quot;model_doc/flaubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flaubert&quot;},{&quot;title&quot;:&quot;FNet&quot;,&quot;id&quot;:&quot;model_doc/fnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fnet&quot;},{&quot;title&quot;:&quot;FSMT&quot;,&quot;id&quot;:&quot;model_doc/fsmt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fsmt&quot;},{&quot;title&quot;:&quot;Funnel Transformer&quot;,&quot;id&quot;:&quot;model_doc/funnel&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/funnel&quot;},{&quot;title&quot;:&quot;GPT&quot;,&quot;id&quot;:&quot;model_doc/openai-gpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/openai-gpt&quot;},{&quot;title&quot;:&quot;GPT Neo&quot;,&quot;id&quot;:&quot;model_doc/gpt_neo&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neo&quot;},{&quot;title&quot;:&quot;GPT NeoX&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox&quot;},{&quot;title&quot;:&quot;GPT NeoX Japanese&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox_japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese&quot;},{&quot;title&quot;:&quot;GPT-J&quot;,&quot;id&quot;:&quot;model_doc/gptj&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptj&quot;},{&quot;title&quot;:&quot;GPT2&quot;,&quot;id&quot;:&quot;model_doc/gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt2&quot;},{&quot;title&quot;:&quot;GPTBigCode&quot;,&quot;id&quot;:&quot;model_doc/gpt_bigcode&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode&quot;},{&quot;title&quot;:&quot;GPTSAN 
Japanese&quot;,&quot;id&quot;:&quot;model_doc/gptsan-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese&quot;},{&quot;title&quot;:&quot;GPTSw3&quot;,&quot;id&quot;:&quot;model_doc/gpt-sw3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt-sw3&quot;},{&quot;title&quot;:&quot;HerBERT&quot;,&quot;id&quot;:&quot;model_doc/herbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/herbert&quot;},{&quot;title&quot;:&quot;I-BERT&quot;,&quot;id&quot;:&quot;model_doc/ibert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ibert&quot;},{&quot;title&quot;:&quot;Jukebox&quot;,&quot;id&quot;:&quot;model_doc/jukebox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/jukebox&quot;},{&quot;title&quot;:&quot;LED&quot;,&quot;id&quot;:&quot;model_doc/led&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/led&quot;},{&quot;title&quot;:&quot;LLaMA&quot;,&quot;id&quot;:&quot;model_doc/llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama&quot;},{&quot;title&quot;:&quot;Llama2&quot;,&quot;id&quot;:&quot;model_doc/llama2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama2&quot;},{&quot;title&quot;:&quot;Longformer&quot;,&quot;id&quot;:&quot;model_doc/longformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longformer&quot;},{&quot;title&quot;:&quot;LongT5&quot;,&quot;id&quot;:&quot;model_doc/longt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longt5&quot;},{&quot;title&quot;:&quot;LUKE&quot;,&quot;id&quot;:&quot;model_doc/luke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/luke&quot;},{&quot;title&quot;:&quot;M2M100&quot;,&quot;id&quot;:&quot;model_doc/m2m_100&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/m2m_100&quot;},{&quot;title&quot;:&quot;MarianMT&quot;,&quot;id&quot;:&quot;model_doc/marian&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/marian&quot;},{&quot;title&quot;:&quot;MarkupLM&quot;,&quot;id&quot;:&quot;model_doc/markuplm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/markuplm&quot;},{&quot;title&quot;:&quot;MBart and 
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot;PLBart&quot;,&quot;id&quot;
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/do
cs/transformers/v4.34.0/en/model_doc/wavlm&quot;},{&quot;title&quot;:&quot;Whisper&quot;,&quot;id&quot;:&quot;model_doc/whisper&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/whisper&quot;},{&quot;title&quot;:&quot;XLS-R&quot;,&quot;id&quot;:&quot;model_doc/xls_r&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xls_r&quot;},{&quot;title&quot;:&quot;XLSR-Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/xlsr_wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2&quot;}]},{&quot;title&quot;:&quot;Multimodal models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALIGN&quot;,&quot;id&quot;:&quot;model_doc/align&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/align&quot;},{&quot;title&quot;:&quot;AltCLIP&quot;,&quot;id&quot;:&quot;model_doc/altclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/altclip&quot;},{&quot;title&quot;:&quot;BLIP&quot;,&quot;id&quot;:&quot;model_doc/blip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip&quot;},{&quot;title&quot;:&quot;BLIP-2&quot;,&quot;id&quot;:&quot;model_doc/blip-2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip-2&quot;},{&quot;title&quot;:&quot;BridgeTower&quot;,&quot;id&quot;:&quot;model_doc/bridgetower&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bridgetower&quot;},{&quot;title&quot;:&quot;BROS&quot;,&quot;id&quot;:&quot;model_doc/bros&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bros&quot;},{&quot;title&quot;:&quot;Chinese-CLIP&quot;,&quot;id&quot;:&quot;model_doc/chinese_clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/chinese_clip&quot;},{&quot;title&quot;:&quot;CLIP&quot;,&quot;id&quot;:&quot;model_doc/clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clip&quot;},{&quot;title&quot;:&quot;CLIPSeg&quot;,&quot;id&quot;:&quot;model_doc/clipseg&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clipseg&quot;},{&quot;title&quot;:&quot;Data2Vec&quot;,&quot;id&quot;:&quot;model_doc/data2vec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/data2vec&quot;},{&quot;title&quot;:&quot;DePlot&quot;,&quot;id&quot;:&quot;model_doc/deplot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deplot&quot;},{&quot;title&quot;:&quot;Donut&quot;,&quot;id&quot;:&quot;model_doc/donut&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/donut&quot;},{&quot;title&quot;:&quot;FLAVA&quot;,&quot;id&quot;:&quot;model_doc/flava&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flava&quot;},{&quot;title&quot;:&quot;GIT&quot;,&quot;id&quot;:&quot;model_doc/git&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/git&quot;},{&quot;title&quot;:&quot;GroupViT&quot;,&quot;id&quot;:&quot;model_doc/groupvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/groupvit&quot;},{&quot;title&quot;:&quot;IDEFICS&quot;,&quot;id&quot;:&quot;model_doc/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/idefics&quot;},{&quot;title&quot;:&quot;InstructBLIP&quot;,&quot;id&quot;:&quot;model_doc/instructblip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/instructblip&quot;},{&quot;title&quot;:&quot;LayoutLM&quot;,&quot;id&quot;:&quot;model_doc/layoutlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlm&quot;},{&quot;title&quot;:&qu
ot;LayoutLMV2&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv2&quot;},{&quot;title&quot;:&quot;LayoutLMV3&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv3&quot;},{&quot;title&quot;:&quot;LayoutXLM&quot;,&quot;id&quot;:&quot;model_doc/layoutxlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutxlm&quot;},{&quot;title&quot;:&quot;LiLT&quot;,&quot;id&quot;:&quot;model_doc/lilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lilt&quot;},{&quot;title&quot;:&quot;LXMERT&quot;,&quot;id&quot;:&quot;model_doc/lxmert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lxmert&quot;},{&quot;title&quot;:&quot;MatCha&quot;,&quot;id&quot;:&quot;model_doc/matcha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/matcha&quot;},{&quot;title&quot;:&quot;MGP-STR&quot;,&quot;id&quot;:&quot;model_doc/mgp-str&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mgp-str&quot;},{&quot;title&quot;:&quot;Nougat&quot;,&quot;id&quot;:&quot;model_doc/nougat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nougat&quot;},{&quot;title&quot;:&quot;OneFormer&quot;,&quot;id&quot;:&quot;model_doc/oneformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/oneformer&quot;},{&quot;title&quot;:&quot;OWL-ViT&quot;,&quot;id&quot;:&quot;model_doc/owlvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/owlvit&quot;},{&quot;title&quot;:&quot;Perceiver&quot;,&quot;id&quot;:&quot;model_doc/perceiver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/perceiver&quot;},{&quot;title&quot;:&quot;Pix2Struct&quot;,&quot;id&quot;:&quot;model_doc/pix2struct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pix2struct&quot;},{&quot;title&quot;:&quot;Segment Anything&quot;,&quot;id&quot;:&quot;model_doc/sam&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sam&quot;},{&quot;title&quot;:&quot;Speech Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/speech-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder&quot;},{&quot;title&quot;:&quot;TAPAS&quot;,&quot;id&quot;:&quot;model_doc/tapas&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapas&quot;},{&quot;title&quot;:&quot;TrOCR&quot;,&quot;id&quot;:&quot;model_doc/trocr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trocr&quot;},{&quot;title&quot;:&quot;TVLT&quot;,&quot;id&quot;:&quot;model_doc/tvlt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tvlt&quot;},{&quot;title&quot;:&quot;ViLT&quot;,&quot;id&quot;:&quot;model_doc/vilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vilt&quot;},{&quot;title&quot;:&quot;Vision Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/vision-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder&quot;},{&quot;title&quot;:&quot;Vision Text Dual 
Encoder&quot;,&quot;id&quot;:&quot;model_doc/vision-text-dual-encoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder&quot;},{&quot;title&quot;:&quot;VisualBERT&quot;,&quot;id&quot;:&quot;model_doc/visual_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/visual_bert&quot;},{&quot;title&quot;:&quot;X-CLIP&quot;,&quot;id&quot;:&quot;model_doc/xclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xclip&quot;}]},{&quot;title&quot;:&quot;Reinforcement learning models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Decision Transformer&quot;,&quot;id&quot;:&quot;model_doc/decision_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/decision_transformer&quot;},{&quot;title&quot;:&quot;Trajectory Transformer&quot;,&quot;id&quot;:&quot;model_doc/trajectory_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer&quot;}]},{&quot;title&quot;:&quot;Time series models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Autoformer&quot;,&quot;id&quot;:&quot;model_doc/autoformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/autoformer&quot;},{&quot;title&quot;:&quot;Informer&quot;,&quot;id&quot;:&quot;model_doc/informer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/informer&quot;},{&quot;title&quot;:&quot;Time Series Transformer&quot;,&quot;id&quot;:&quot;model_doc/time_series_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/time_series_transformer&quot;}]},{&quot;title&quot;:&quot;Graph models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Graphormer&quot;,&quot;id&quot;:&quot;model_doc/graphormer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/graphormer&quot;}]}]},{&quot;title&quot;:&quot;Internal Helpers&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Custom Layers and Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/modeling_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/modeling_utils&quot;},{&quot;title&quot;:&quot;Utilities for pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/pipelines_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/pipelines_utils&quot;},{&quot;title&quot;:&quot;Utilities for Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/tokenization_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/tokenization_utils&quot;},{&quot;title&quot;:&quot;Utilities for Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/trainer_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/trainer_utils&quot;},{&quot;title&quot;:&quot;Utilities for Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/generation_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/generation_utils&quot;},{&quot;title&quot;:&quot;Utilities for Image Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/image_processing_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/image_processing_utils&quot;},{&quot;title&quot;:&quot;Utilities for Audio 
processing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/audio_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/audio_utils&quot;},{&quot;title&quot;:&quot;General Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/file_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/file_utils&quot;},{&quot;title&quot;:&quot;Utilities for Time Series&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/time_series_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/time_series_utils&quot;}]}]}],&quot;chapterId&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;docType&quot;:&quot;docs&quot;,&quot;isLoggedIn&quot;:false,&quot;lang&quot;:&quot;en&quot;,&quot;langs&quot;:[&quot;de&quot;,&quot;en&quot;,&quot;es&quot;,&quot;fr&quot;,&quot;it&quot;,&quot;ko&quot;,&quot;pt&quot;,&quot;zh&quot;],&quot;library&quot;:&quot;transformers&quot;,&quot;theme&quot;:&quot;light&quot;,&quot;version&quot;:&quot;v4.34.0&quot;,&quot;versions&quot;:[{&quot;version&quot;:&quot;main&quot;},{&quot;version&quot;:&quot;v4.34.0&quot;},{&quot;version&quot;:&quot;v4.33.3&quot;},{&quot;version&quot;:&quot;v4.33.2&quot;},{&quot;version&quot;:&quot;v4.33.0&quot;},{&quot;version&quot;:&quot;v4.32.1&quot;},{&quot;version&quot;:&quot;v4.32.0&quot;},{&quot;version&quot;:&quot;v4.31.0&quot;},{&quot;version&quot;:&quot;v4.30.0&quot;},{&quot;version&quot;:&quot;v4.29.1&quot;},{&quot;version&quot;:&quot;v4.29.0&quot;},{&quot;version&quot;:&quot;v4.28.1&quot;},{&quot;version&quot;:&quot;v4.28.0&quot;},{&quot;version&quot;:&quot;v4.27.2&quot;},{&quot;version&quot;:&quot;v4.27.1&quot;},{&quot;version&quot;:&quot;v4.27.0&quot;},{&quot;version&quot;:&quot;v4.26.1&quot;},{&quot;version&quot;:&quot;v4.26.0&quot;},{&quot;version&quot;:&quot;v4.25.1&quot;},{&quot;version&quot;:&quot;v4.24.0&quot;},{&quot;version&quot;:&quot;v4.23.1&quot;},{&quot;version&quot;:&quot;v4.23.0&quot;},{&quot;version&quot;:&quot;v4.22.2&quot;},{&quot;version&quot;:&quot;v4.22.1&quot;},{&quot;version&quot;:&quot;v4.22.0&quot;},{&quot;version&quot;:&quot;v4.21.3&quot;},{&quot;version&quot;:&quot;v4.21.2&quot;},{&quot;version&quot;:&quot;v4.21.1&quot;},{&quot;version&quot;:&quot;v4.21.0&quot;},{&quot;version&quot;:&quot;v4.20.1&quot;},{&quot;version&quot;:&quot;v4.20.0&quot;},{&quot;version&quot;:&quot;v4.19.4&quot;},{&quot;version&quot;:&quot;v4.19.3&quot;},{&quot;version&quot;:&quot;v4.19.2&quot;},{&quot;version&quot;:&quot;v4.19.0&quot;},{&quot;version&quot;:&quot;v4.18.0&quot;},{&quot;version&quot;:&quot;v4.17.0&quot;},{&quot;version&quot;:&quot;v4.16.2&quot;},{&quot;version&quot;:&quot;v4.16.1&quot;},{&quot;version&quot;:&quot;v4.16.0&quot;},{&quot;version&quot;:&quot;v4.15.0&quot;},{&quot;version&quot;:&quot;v4.14.1&quot;},{&quot;version&quot;:&quot;v4.13.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.5&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.4&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.10.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quo
fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Faster examples with accelerated inference </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-gray-500/10 to-gray-500/5"><svg class="text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M14.9804 3C14.9217 3.0002 14.8631 3.00555 14.8054 3.016C11.622 3.58252 8.76073 5.30669 6.77248 7.85653C4.78422 10.4064 3.80955 13.6016 4.03612 16.8271C4.26268 20.0525 5.67447 23.0801 7.99967 25.327C10.3249 27.5738 13.3991 28.8811 16.6304 28.997C16.7944 29.003 16.9584 28.997 17.1204 28.997C19.2193 28.9984 21.2877 28.4943 23.1507 27.5274C25.0137 26.5605 26.6164 25.1592 27.8234 23.442C27.9212 23.294 27.9783 23.1229 27.9889 22.9458C27.9995 22.7687 27.9633 22.592 27.884 22.4333C27.8046 22.2747 27.6848 22.1397 27.5367 22.0421C27.3887 21.9444 27.2175 21.8875 27.0404 21.877C25.0426 21.7017 23.112 21.0693 21.3976 20.0288C19.6832 18.9884 18.231 17.5676 17.1533 15.8764C16.0756 14.1852 15.4011 12.2688 15.1822 10.2754C14.9632 8.28193 15.2055 6.26484 15.8904 4.38C15.9486 4.22913 15.97 4.06652 15.9527 3.90572C15.9354 3.74492 15.8799 3.59059 15.7909 3.45557C15.7019 3.32055 15.5819 3.20877 15.4409 3.12952C15.2999 3.05028 15.142 3.00587 14.9804 3Z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Switch between documentation themes </div></div></div> <div class="flex items-center space-x-2.5"><a href="/join"><button class="rounded-lg bg-white bg-gradient-to-br from-gray-100/20 to-gray-200/60 py-1.5 px-5 font-semibold text-gray-700 shadow-sm ring-1 ring-gray-300/60 hover:to-gray-100/70 hover:ring-gray-300/30 active:shadow-inner">Sign Up</button></a> <p class="text-gray-500 dark:text-gray-300">to get started</p></div></div></div> <div class="prose-doc prose relative mx-auto max-w-4xl break-words"> <p></p> <h1 class="relative group"><a id="wav2vec2conformer" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#wav2vec2conformer"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-15zfl69">Wav2Vec2-Conformer</span></h1> <h2 class="relative group"><a id="overview" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#overview"><span><svg class="" 
xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1jsw1pg">Overview</span></h2> <p data-svelte-h="svelte-1jbvqet">The Wav2Vec2-Conformer was added to an updated version of <a href="https://arxiv.org/abs/2010.05171" rel="nofollow">fairseq S2T: Fast Speech-to-Text Modeling with fairseq</a> by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino.</p> <p data-svelte-h="svelte-5fkfc1">The official results of the model can be found in Table 3 and Table 4 of the paper.</p> <p data-svelte-h="svelte-y9fpue">The Wav2Vec2-Conformer weights were released by the Meta AI team within the <a href="https://github.com/pytorch/fairseq/blob/main/examples/wav2vec/README.md#pre-trained-models" rel="nofollow">Fairseq library</a>.</p> <p data-svelte-h="svelte-axv494">Tips:</p> <ul data-svelte-h="svelte-dttjnw"><li>Wav2Vec2-Conformer follows the same architecture as Wav2Vec2, but replaces the <em>Attention</em>-block with a <em>Conformer</em>-block as introduced in <a href="https://arxiv.org/abs/2005.08100" rel="nofollow">Conformer: Convolution-augmented Transformer for Speech Recognition</a>.</li> <li>For the same number of layers, Wav2Vec2-Conformer requires more parameters than Wav2Vec2, but also yields an improved word error rate.</li> <li>Wav2Vec2-Conformer uses the same tokenizer and feature extractor as Wav2Vec2.</li> <li>Wav2Vec2-Conformer can use either no relative position embeddings, Transformer-XL-like position embeddings, or rotary position embeddings by setting the correct <code>config.position_embeddings_type</code>.</li></ul> <p data-svelte-h="svelte-1l6txl9">This model was contributed by <a href="https://huggingface.co/patrickvonplaten" rel="nofollow">patrickvonplaten</a>. 
## Documentation resources

- [Audio classification task guide](../tasks/audio_classification)
- [Automatic speech recognition task guide](../tasks/asr)
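Because Wav2Vec2-Conformer reuses the Wav2Vec2 tokenizer and feature extractor, transcription with a CTC head follows the familiar Wav2Vec2 recipe. A minimal sketch, assuming a fine-tuned 16 kHz English checkpoint such as `facebook/wav2vec2-conformer-rope-large-960h-ft` (the checkpoint name and the dummy input below are illustrative assumptions):

```python
import torch
from transformers import AutoProcessor, Wav2Vec2ConformerForCTC

# Assumed checkpoint; any fine-tuned Wav2Vec2-Conformer CTC checkpoint works the same way.
checkpoint = "facebook/wav2vec2-conformer-rope-large-960h-ft"
processor = AutoProcessor.from_pretrained(checkpoint)
model = Wav2Vec2ConformerForCTC.from_pretrained(checkpoint)

# `speech` stands in for a 1-D float waveform sampled at 16 kHz.
speech = [0.0] * 16_000
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding of the highest-scoring token ids.
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
print(transcription)
```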
## Wav2Vec2ConformerConfig

### class transformers.Wav2Vec2ConformerConfig

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2_conformer/configuration_wav2vec2_conformer.py#L33)

( vocab_size = None, hidden_size = 768, num_hidden_layers = 12, num_attention_heads = 12, intermediate_size = 3072, hidden_act = 'gelu', hidden_dropout = 0.1, activation_dropout = 0.1, attention_dropout = 0.1, feat_proj_dropout = 0.0, feat_quantizer_dropout = 0.0, final_dropout = 0.1, layerdrop = 0.1, initializer_range = 0.02, layer_norm_eps = 1e-05, feat_extract_norm = 'group', feat_extract_activation = 'gelu', conv_dim = (512, 512, 512, 512, 512, 512, 512), conv_stride = (5, 2, 2, 2, 2, 2, 2), conv_kernel = (10, 3, 3, 3, 3, 2, 2), conv_bias = False, num_conv_pos_embeddings = 128, num_conv_pos_embedding_groups = 16, apply_spec_augment = True, mask_time_prob = 0.05, mask_time_length = 10, mask_time_min_masks = 2, mask_feature_prob = 0.0, mask_feature_length = 10, mask_feature_min_masks = 0, num_codevectors_per_group = 320, num_codevector_groups = 2, contrastive_logits_temperature = 0.1, num_negatives = 100, codevector_dim = 256, proj_codevector_dim = 256, diversity_loss_weight = 0.1, ctc_loss_reduction = 'sum', ctc_zero_infinity = False, use_weighted_layer_sum = False, classifier_proj_size = 256, tdnn_dim = (512, 512, 512, 512, 1500), tdnn_kernel = (5, 3, 3, 1, 1), tdnn_dilation = (1, 2, 3, 1, 1), xvector_output_dim = 512, pad_token_id = 0, bos_token_id = 1, eos_token_id = 2, add_adapter = False, adapter_kernel_size = 3, adapter_stride = 2, num_adapter_layers = 3, output_hidden_size = None, position_embeddings_type = 'relative', rotary_embedding_base = 10000, max_source_positions = 5000, conv_depthwise_kernel_size = 31, conformer_conv_dropout = 0.1, **kwargs )
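Every argument above has a default, so the configuration can be instantiated without specifying any of them. A minimal sketch of the usual pattern of building a model from a configuration:

```python
from transformers import Wav2Vec2ConformerConfig, Wav2Vec2ConformerModel

# Configuration with the default values listed above
configuration = Wav2Vec2ConformerConfig()

# Model (with random weights) initialised from that configuration
model = Wav2Vec2ConformerModel(configuration)

# The configuration used by a model can always be read back
configuration = model.config
```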
Parameters

- **vocab_size** (`int`, *optional*) — Vocabulary size of the Wav2Vec2Conformer model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [Wav2Vec2ConformerModel](/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer#transformers.Wav2Vec2ConformerModel).
- **hidden_size** (`int`, *optional*, defaults to 768) — Dimensionality of the encoder layers and the pooler layer.
- **num_hidden_layers** (`int`, *optional*, defaults to 12) — Number of hidden layers in the Transformer encoder.
- **num_attention_heads** (`int`, *optional*, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder.
- **intermediate_size** (`int`, *optional*, defaults to 3072) — Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
- **hidden_act** (`str` or `function`, *optional*, defaults to `"gelu"`) — The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported.
If string, <code>"gelu"</code>, <code>"relu"</code>, <code>"selu"</code> and <code>"gelu_new"</code> are supported.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ConformerConfig.hidden_dropout" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ConformerConfig.hidden_dropout"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>hidden_dropout</strong> (<code>float</code>, <em>optional</em>, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ConformerConfig.activation_dropout" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ConformerConfig.activation_dropout"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>activation_dropout</strong> (<code>float</code>, <em>optional</em>, defaults to 0.1) — The dropout ratio for activations inside the fully connected layer.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ConformerConfig.attention_dropout" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ConformerConfig.attention_dropout"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 
1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_dropout</strong> (<code>float</code>, <em>optional</em>, defaults to 0.1) — The dropout ratio for the attention probabilities.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ConformerConfig.final_dropout" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ConformerConfig.final_dropout"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>final_dropout</strong> (<code>float</code>, <em>optional</em>, defaults to 0.1) — The dropout probability for the final projection layer of <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer#transformers.Wav2Vec2ConformerForCTC">Wav2Vec2ConformerForCTC</a>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ConformerConfig.layerdrop" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ConformerConfig.layerdrop"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>layerdrop</strong> (<code>float</code>, <em>optional</em>, defaults to 0.1) — The LayerDrop probability. 
See the [LayerDrop paper](see <a href="https://arxiv.org/abs/1909.11556" rel="nofollow">https://arxiv.org/abs/1909.11556</a>) for more details.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ConformerConfig.initializer_range" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ConformerConfig.initializer_range"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>initializer_range</strong> (<code>float</code>, <em>optional</em>, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ConformerConfig.layer_norm_eps" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ConformerConfig.layer_norm_eps"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>layer_norm_eps</strong> (<code>float</code>, <em>optional</em>, defaults to 1e-12) — The epsilon used by the layer normalization layers.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ConformerConfig.feat_extract_norm" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ConformerConfig.feat_extract_norm"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 
- **feat_extract_norm** (`str`, *optional*, defaults to `"group"`) — The norm to be applied to 1D convolutional layers in the feature encoder. One of `"group"` for group normalization of only the first 1D convolutional layer or `"layer"` for layer normalization of all 1D convolutional layers.
- **feat_proj_dropout** (`float`, *optional*, defaults to 0.0) — The dropout probability for the output of the feature encoder.
- **feat_extract_activation** (`str`, *optional*, defaults to `"gelu"`) — The non-linear activation function (function or string) in the 1D convolutional layers of the feature extractor. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported.
- **feat_quantizer_dropout** (`float`, *optional*, defaults to 0.0) — The dropout probability for quantized feature encoder states.
- **conv_dim** (`Tuple[int]` or `List[int]`, *optional*, defaults to `(512, 512, 512, 512, 512, 512, 512)`) — A tuple of integers defining the number of input and output channels of each 1D convolutional layer in the feature encoder. The length of *conv_dim* defines the number of 1D convolutional layers.
- **conv_stride** (`Tuple[int]` or `List[int]`, *optional*, defaults to `(5, 2, 2, 2, 2, 2, 2)`) — A tuple of integers defining the stride of each 1D convolutional layer in the feature encoder. The length of *conv_stride* defines the number of convolutional layers and has to match the length of *conv_dim*.
- **conv_kernel** (`Tuple[int]` or `List[int]`, *optional*, defaults to `(10, 3, 3, 3, 3, 2, 2)`) — A tuple of integers defining the kernel size of each 1D convolutional layer in the feature encoder. The length of *conv_kernel* defines the number of convolutional layers and has to match the length of *conv_dim*.
- **conv_bias** (`bool`, *optional*, defaults to `False`) — Whether the 1D convolutional layers have a bias.
- **num_conv_pos_embeddings** (`int`, *optional*, defaults to 128) — Number of convolutional positional embeddings. Defines the kernel size of the 1D convolutional positional embeddings layer.
- **num_conv_pos_embedding_groups** (`int`, *optional*, defaults to 16) — Number of groups of the 1D convolutional positional embeddings layer.
- **apply_spec_augment** (`bool`, *optional*, defaults to `True`) — Whether to apply *SpecAugment* data augmentation to the outputs of the feature encoder. For reference see [SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition](https://arxiv.org/abs/1904.08779).
- **mask_time_prob** (`float`, *optional*, defaults to 0.05) — Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking procedure generates `mask_time_prob * len(time_axis) / mask_time_length` independent masks over the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector span to be masked, *mask_time_prob* should be `prob_vector_start * mask_time_length`. Note that overlap may decrease the actual percentage of masked vectors. This is only relevant if `apply_spec_augment is True`.
- **mask_time_length** (`int`, *optional*, defaults to 10) — Length of vector span along the time axis.
- **mask_time_min_masks** (`int`, *optional*, defaults to 2) — The minimum number of masks of length `mask_time_length` generated along the time axis, each time step, irrespective of `mask_time_prob`. Only relevant if `mask_time_prob * len(time_axis) / mask_time_length < mask_time_min_masks`.
- **mask_feature_prob** (`float`, *optional*, defaults to 0.0) — Percentage (between 0 and 1) of all feature vectors along the feature axis which will be masked. The masking procedure generates `mask_feature_prob * len(feature_axis) / mask_feature_length` independent masks over the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector span to be masked, *mask_feature_prob* should be `prob_vector_start * mask_feature_length`. Note that overlap may decrease the actual percentage of masked vectors. This is only relevant if `apply_spec_augment is True`.
- **mask_feature_length** (`int`, *optional*, defaults to 10) — Length of vector span along the feature axis.
- **mask_feature_min_masks** (`int`, *optional*, defaults to 0) — The minimum number of masks of length `mask_feature_length` generated along the feature axis, each time step, irrespective of `mask_feature_prob`.
Only relevant if ”mask_feature_prob*len(feature_axis)/mask_feature_length &lt; mask_feature_min_masks”</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ConformerConfig.num_codevectors_per_group" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ConformerConfig.num_codevectors_per_group"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>num_codevectors_per_group</strong> (<code>int</code>, <em>optional</em>, defaults to 320) — Number of entries in each quantization codebook (group).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ConformerConfig.num_codevector_groups" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ConformerConfig.num_codevector_groups"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>num_codevector_groups</strong> (<code>int</code>, <em>optional</em>, defaults to 2) — Number of codevector groups for product codevector quantization.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ConformerConfig.contrastive_logits_temperature" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ConformerConfig.contrastive_logits_temperature"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 
- **contrastive_logits_temperature** (`float`, *optional*, defaults to 0.1) — The temperature *kappa* in the contrastive loss.
- **feat_quantizer_dropout** (`float`, *optional*, defaults to 0.0) — The dropout probability for the output of the feature encoder that's used by the quantizer.
- **num_negatives** (`int`, *optional*, defaults to 100) — Number of negative samples for the contrastive loss.
- **codevector_dim** (`int`, *optional*, defaults to 256) — Dimensionality of the quantized feature vectors.
- **proj_codevector_dim** (`int`, *optional*, defaults to 256) — Dimensionality of the final projection of both the quantized and the transformer features.
- **diversity_loss_weight** (`float`, *optional*, defaults to 0.1) — The weight of the codebook diversity loss component.
- **ctc_loss_reduction** (`str`, *optional*, defaults to `"sum"`) — Specifies the reduction to apply to the output of `torch.nn.CTCLoss`. Only relevant when training an instance of [Wav2Vec2ConformerForCTC](/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer#transformers.Wav2Vec2ConformerForCTC).
- **ctc_zero_infinity** (`bool`, *optional*, defaults to `False`) — Whether to zero infinite losses and the associated gradients of `torch.nn.CTCLoss`. Infinite losses mainly occur when the inputs are too short to be aligned to the targets. Only relevant when training an instance of [Wav2Vec2ConformerForCTC](/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer#transformers.Wav2Vec2ConformerForCTC).
- **use_weighted_layer_sum** (`bool`, *optional*, defaults to `False`) — Whether to use a weighted average of layer outputs with learned weights. Only relevant when using an instance of [Wav2Vec2ConformerForSequenceClassification](/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer#transformers.Wav2Vec2ConformerForSequenceClassification).
- **classifier_proj_size** (`int`, *optional*, defaults to 256) — Dimensionality of the projection before token mean-pooling for classification.
class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>tdnn_dim</strong> (<code>Tuple[int]</code> or <code>List[int]</code>, <em>optional</em>, defaults to <code>(512, 512, 512, 512, 1500)</code>) — A tuple of integers defining the number of output channels of each 1D convolutional layer in the <em>TDNN</em> module of the <em>XVector</em> model. The length of <em>tdnn_dim</em> defines the number of <em>TDNN</em> layers.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ConformerConfig.tdnn_kernel" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ConformerConfig.tdnn_kernel"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>tdnn_kernel</strong> (<code>Tuple[int]</code> or <code>List[int]</code>, <em>optional</em>, defaults to <code>(5, 3, 3, 1, 1)</code>) — A tuple of integers defining the kernel size of each 1D convolutional layer in the <em>TDNN</em> module of the <em>XVector</em> model. 
The length of <em>tdnn_kernel</em> has to match the length of <em>tdnn_dim</em>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ConformerConfig.tdnn_dilation" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ConformerConfig.tdnn_dilation"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>tdnn_dilation</strong> (<code>Tuple[int]</code> or <code>List[int]</code>, <em>optional</em>, defaults to <code>(1, 2, 3, 1, 1)</code>) — A tuple of integers defining the dilation factor of each 1D convolutional layer in <em>TDNN</em> module of the <em>XVector</em> model. The length of <em>tdnn_dilation</em> has to match the length of <em>tdnn_dim</em>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ConformerConfig.xvector_output_dim" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ConformerConfig.xvector_output_dim"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>xvector_output_dim</strong> (<code>int</code>, <em>optional</em>, defaults to 512) — Dimensionality of the <em>XVector</em> embedding vectors.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ConformerConfig.add_adapter" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ConformerConfig.add_adapter"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" 
preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>add_adapter</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether a convolutional network should be stacked on top of the Wav2Vec2Conformer Encoder. Can be very useful for warm-starting Wav2Vec2Conformer for SpeechEncoderDecoder models.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ConformerConfig.adapter_kernel_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ConformerConfig.adapter_kernel_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>adapter_kernel_size</strong> (<code>int</code>, <em>optional</em>, defaults to 3) — Kernel size of the convolutional layers in the adapter network. 
Only relevant if <code>add_adapter is True</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ConformerConfig.adapter_stride" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ConformerConfig.adapter_stride"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>adapter_stride</strong> (<code>int</code>, <em>optional</em>, defaults to 2) — Stride of the convolutional layers in the adapter network. Only relevant if <code>add_adapter is True</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ConformerConfig.num_adapter_layers" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ConformerConfig.num_adapter_layers"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>num_adapter_layers</strong> (<code>int</code>, <em>optional</em>, defaults to 3) — Number of convolutional layers that should be used in the adapter network. 
Only relevant if <code>add_adapter is True</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ConformerConfig.output_hidden_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ConformerConfig.output_hidden_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_size</strong> (<code>int</code>, <em>optional</em>) — Dimensionality of the encoder output layer. If not defined, this defaults to <em>hidden-size</em>. Only relevant if <code>add_adapter is True</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ConformerConfig.position_embeddings_type" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ConformerConfig.position_embeddings_type"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>position_embeddings_type</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"relative"</code>) — Can be specified to <code>relative</code> or <code>rotary</code> for relative or rotary position embeddings respectively. 
If left <code>None</code> no relative position embedding is applied.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ConformerConfig.rotary_embedding_base" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ConformerConfig.rotary_embedding_base"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>rotary_embedding_base</strong> (<code>int</code>, <em>optional</em>, defaults to 10000) — If <code>"rotary"</code> position embeddings are used, defines the size of the embedding base.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ConformerConfig.max_source_positions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ConformerConfig.max_source_positions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>max_source_positions</strong> (<code>int</code>, <em>optional</em>, defaults to 5000) — if <code>"relative"</code> position embeddings are used, defines the maximum source input positions.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ConformerConfig.conv_depthwise_kernel_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ConformerConfig.conv_depthwise_kernel_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 
- **conv_depthwise_kernel_size** (`int`, defaults to 31) — Kernel size of the depthwise 1D convolutional layer in Conformer blocks.
- **conformer_conv_dropout** (`float`, defaults to 0.1) — The dropout probability for all convolutional layers in Conformer blocks.

This is the configuration class to store the configuration of a [Wav2Vec2ConformerModel](/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer#transformers.Wav2Vec2ConformerModel). It is used to instantiate a Wav2Vec2Conformer model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Wav2Vec2Conformer [facebook/wav2vec2-conformer-rel-pos-large](https://huggingface.co/facebook/wav2vec2-conformer-rel-pos-large) architecture.

Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs.
Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information.

Example:

```python
>>> from transformers import Wav2Vec2ConformerConfig, Wav2Vec2ConformerModel

>>> # Initializing a Wav2Vec2Conformer facebook/wav2vec2-conformer-rel-pos-large style configuration
>>> configuration = Wav2Vec2ConformerConfig()

>>> # Initializing a model (with random weights) from the facebook/wav2vec2-conformer-rel-pos-large style configuration
>>> model = Wav2Vec2ConformerModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
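As a minimal sketch (not part of the generated reference above), the SpecAugment masking, adapter, and position-embedding options documented in this list can be overridden directly when instantiating `Wav2Vec2ConformerConfig`. The keyword arguments are the parameters described above; the specific values chosen here are purely illustrative:

```python
from transformers import Wav2Vec2ConformerConfig

# Illustrative values only: stronger time masking, a convolutional adapter on top
# of the encoder, and rotary instead of relative position embeddings.
config = Wav2Vec2ConformerConfig(
    mask_time_prob=0.1,              # fraction of the time axis used to compute the number of masks
    mask_time_length=10,             # each time-axis mask spans 10 frames
    mask_feature_prob=0.05,          # also mask spans along the feature axis
    add_adapter=True,                # stack the adapter network on the encoder output
    num_adapter_layers=3,
    adapter_stride=2,
    position_embeddings_type="rotary",
)

print(config.mask_time_prob, config.add_adapter, config.position_embeddings_type)
```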
id="transformers.models.wav2vec2_conformer.modeling_wav2vec2_conformer.Wav2Vec2ConformerForPreTrainingOutput" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.models.wav2vec2_conformer.modeling_wav2vec2_conformer.Wav2Vec2ConformerForPreTrainingOutput"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-7j61iu">Wav2Vec2Conformer specific outputs</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.models.wav2vec2_conformer.modeling_wav2vec2_conformer.Wav2Vec2ConformerForPreTrainingOutput"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.models.wav2vec2_conformer.modeling_wav2vec2_conformer.</span><span class="font-semibold">Wav2Vec2ConformerForPreTrainingOutput</span></span></h3> <a id="transformers.models.wav2vec2_conformer.modeling_wav2vec2_conformer.Wav2Vec2ConformerForPreTrainingOutput" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.models.wav2vec2_conformer.modeling_wav2vec2_conformer.Wav2Vec2ConformerForPreTrainingOutput"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 
0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py#L74" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">loss<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">projected_states<span class="opacity-60">: FloatTensor = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">projected_quantized_states<span class="opacity-60">: FloatTensor = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">codevector_perplexity<span class="opacity-60">: FloatTensor = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">hidden_states<span class="opacity-60">: typing.Optional[typing.Tuple[torch.FloatTensor]] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attentions<span class="opacity-60">: typing.Optional[typing.Tuple[torch.FloatTensor]] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">contrastive_loss<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">diversity_loss<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 7 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a 
id="transformers.models.wav2vec2_conformer.modeling_wav2vec2_conformer.Wav2Vec2ConformerForPreTrainingOutput.loss" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.models.wav2vec2_conformer.modeling_wav2vec2_conformer.Wav2Vec2ConformerForPreTrainingOutput.loss"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>loss</strong> (<em>optional</em>, returned when <code>sample_negative_indices</code> are passed, <code>torch.FloatTensor</code> of shape <code>(1,)</code>) — Total loss as the sum of the contrastive loss (L_m) and the diversity loss (L_d) as stated in the <a href="https://arxiv.org/pdf/2006.11477.pdf" rel="nofollow">official paper</a> . (classification) loss.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.models.wav2vec2_conformer.modeling_wav2vec2_conformer.Wav2Vec2ConformerForPreTrainingOutput.projected_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.models.wav2vec2_conformer.modeling_wav2vec2_conformer.Wav2Vec2ConformerForPreTrainingOutput.projected_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>projected_states</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, config.proj_codevector_dim)</code>) — Hidden-states of the model projected to <em>config.proj_codevector_dim</em> that can be used to predict the masked projected quantized states.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.models.wav2vec2_conformer.modeling_wav2vec2_conformer.Wav2Vec2ConformerForPreTrainingOutput.projected_quantized_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 
with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.models.wav2vec2_conformer.modeling_wav2vec2_conformer.Wav2Vec2ConformerForPreTrainingOutput.projected_quantized_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>projected_quantized_states</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, config.proj_codevector_dim)</code>) — Quantized extracted feature vectors projected to <em>config.proj_codevector_dim</em> representing the positive target vectors for contrastive loss.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.models.wav2vec2_conformer.modeling_wav2vec2_conformer.Wav2Vec2ConformerForPreTrainingOutput.hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.models.wav2vec2_conformer.modeling_wav2vec2_conformer.Wav2Vec2ConformerForPreTrainingOutput.hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.<p></p> <p>Hidden-states of the model at the output of each layer plus the initial embedding outputs.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.models.wav2vec2_conformer.modeling_wav2vec2_conformer.Wav2Vec2ConformerForPreTrainingOutput.attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" 
href="#transformers.models.wav2vec2_conformer.modeling_wav2vec2_conformer.Wav2Vec2ConformerForPreTrainingOutput.attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.<p></p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.models.wav2vec2_conformer.modeling_wav2vec2_conformer.Wav2Vec2ConformerForPreTrainingOutput.contrastive_loss" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.models.wav2vec2_conformer.modeling_wav2vec2_conformer.Wav2Vec2ConformerForPreTrainingOutput.contrastive_loss"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>contrastive_loss</strong> (<em>optional</em>, returned when <code>sample_negative_indices</code> are passed, <code>torch.FloatTensor</code> of shape <code>(1,)</code>) — The contrastive loss (L_m) as stated in the <a href="https://arxiv.org/pdf/2006.11477.pdf" rel="nofollow">official paper</a> .</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.models.wav2vec2_conformer.modeling_wav2vec2_conformer.Wav2Vec2ConformerForPreTrainingOutput.diversity_loss" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.models.wav2vec2_conformer.modeling_wav2vec2_conformer.Wav2Vec2ConformerForPreTrainingOutput.diversity_loss"><span><svg 
class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>diversity_loss</strong> (<em>optional</em>, returned when <code>sample_negative_indices</code> are passed, <code>torch.FloatTensor</code> of shape <code>(1,)</code>) — The diversity loss (L_d) as stated in the <a href="https://arxiv.org/pdf/2006.11477.pdf" rel="nofollow">official paper</a> .</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-74sdyv">Output type of <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer#transformers.Wav2Vec2ConformerForPreTraining">Wav2Vec2ConformerForPreTraining</a>, with potential hidden states and attentions.</p></div> <h2 class="relative group"><a id="transformers.Wav2Vec2ConformerModel" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ConformerModel"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1uphewz">Wav2Vec2ConformerModel</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.Wav2Vec2ConformerModel"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" 
opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">Wav2Vec2ConformerModel</span></span></h3> <a id="transformers.Wav2Vec2ConformerModel" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.Wav2Vec2ConformerModel"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py#L1247" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60">: Wav2Vec2ConformerConfig</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ConformerModel.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ConformerModel.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a 
href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer#transformers.Wav2Vec2ConformerConfig">Wav2Vec2ConformerConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-c3newv">The bare Wav2Vec2Conformer Model transformer outputting raw hidden-states without any specific head on top. Wav2Vec2Conformer was proposed in <a href="https://arxiv.org/abs/2006.11477" rel="nofollow">wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations</a> by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.</p> <p data-svelte-h="svelte-1e6yl4y">This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel">PreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving etc.).</p> <p data-svelte-h="svelte-1uzc5pd">This model is a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#nn.Module" rel="nofollow">nn.Module</a> sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.Wav2Vec2ConformerModel.forward"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>forward</span></h4> <a id="transformers.Wav2Vec2ConformerModel.forward" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.Wav2Vec2ConformerModel.forward"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 
#### forward

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py#L1320)

`( input_values: Optional[torch.Tensor], attention_mask: Optional[torch.Tensor] = None, mask_time_indices: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None ) → Wav2Vec2BaseModelOutput or tuple(torch.FloatTensor)`

**Parameters**

- **input_values** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`) — Float values of the input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip install soundfile`). To prepare the array into `input_values`, the [AutoProcessor](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoProcessor) should be used for padding and conversion into a tensor of type `torch.FloatTensor`. See [Wav2Vec2Processor.__call__()](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor.__call__) for details.
- **attention_mask** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in `[0, 1]`:

  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.

  [What are attention masks?](../glossary#attention-mask)

  `attention_mask` should only be passed if the corresponding processor has `config.return_attention_mask == True`. For all models whose processor has `config.return_attention_mask == False`, such as [wav2vec2-conformer-rel-pos-large](https://huggingface.co/facebook/wav2vec2-conformer-rel-pos-large), `attention_mask` should **not** be passed to avoid degraded performance when doing batched inference. For such models, `input_values` should simply be padded with 0 and passed without `attention_mask` (see the sketch after this parameter list). Be aware that these models also yield slightly different results depending on whether `input_values` is padded or not.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attention tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.
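The following is a minimal sketch of the `attention_mask` note above (an illustration under stated assumptions, not an excerpt from the official examples): with a checkpoint whose processor has `config.return_attention_mask == False`, batched inputs are zero-padded and no `attention_mask` is produced or passed.

```python
>>> # Hedged sketch: batched inference without an attention_mask, as recommended
>>> # above for checkpoints such as facebook/wav2vec2-conformer-rel-pos-large.
>>> from transformers import AutoFeatureExtractor, Wav2Vec2ConformerModel
>>> import torch

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-conformer-rel-pos-large")
>>> model = Wav2Vec2ConformerModel.from_pretrained("facebook/wav2vec2-conformer-rel-pos-large")

>>> # two waveforms of different lengths; the feature extractor zero-pads the shorter one
>>> batch = [torch.randn(16000).numpy(), torch.randn(12000).numpy()]
>>> inputs = feature_extractor(batch, sampling_rate=16000, padding=True, return_tensors="pt")
>>> sorted(inputs.keys())  # no "attention_mask" key for this checkpoint
['input_values']

>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> outputs.last_hidden_state.shape[0]
2
```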
href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.modeling_outputs.Wav2Vec2BaseModelOutput">transformers.modeling_outputs.Wav2Vec2BaseModelOutput</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer#transformers.Wav2Vec2ConformerConfig">Wav2Vec2ConformerConfig</a>) and inputs.</p> <ul> <li> <p><strong>last_hidden_state</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>) — Sequence of hidden-states at the output of the last layer of the model.</p> </li> <li> <p><strong>extract_features</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, conv_dim[-1])</code>) — Sequence of extracted feature vectors of the last convolutional layer of the model.</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-vjfv15">The <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer#transformers.Wav2Vec2ConformerModel">Wav2Vec2ConformerModel</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.Wav2Vec2ConformerModel.forward.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ConformerModel.forward.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 
Example:

```python
>>> from transformers import AutoProcessor, Wav2Vec2ConformerModel
>>> import torch
>>> from datasets import load_dataset

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")
>>> model = Wav2Vec2ConformerModel.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")

>>> # audio file is decoded on the fly
>>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> last_hidden_states = outputs.last_hidden_state
>>> list(last_hidden_states.shape)
[1, 292, 1024]
```
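As a short, hedged follow-up to the example above (not part of the original reference), the same output object also exposes the convolutional features described under "Returns":

```python
>>> # continues the example above: the last dimension of extract_features is conv_dim[-1]
>>> extract_features = outputs.extract_features
>>> extract_features.shape[-1] == model.config.conv_dim[-1]
True
```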
## Wav2Vec2ConformerForCTC

### class transformers.Wav2Vec2ConformerForCTC

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py#L1606)

`( config, target_lang: Optional[str] = None )`

**Parameters**

- **config** ([Wav2Vec2ConformerConfig](/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer#transformers.Wav2Vec2ConformerConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

Wav2Vec2Conformer Model with a `language modeling` head on top for Connectionist Temporal Classification (CTC). Wav2Vec2Conformer was proposed in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).

This model is a PyTorch [nn.Module](https://pytorch.org/docs/stable/nn.html#nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py#L1639" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_values<span class="opacity-60">: typing.Optional[torch.Tensor]</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">labels<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.CausalLMOutput">transformers.modeling_outputs.CausalLMOutput</a> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 6 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ConformerForCTC.forward.input_values" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ConformerForCTC.forward.input_values"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 
8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_values</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>) — Float values of input raw speech waveform. Values can be obtained by loading a <code>.flac</code> or <code>.wav</code> audio file into an array of type <code>List[float]</code> or a <code>numpy.ndarray</code>, <em>e.g.</em> via the soundfile library (<code>pip install soundfile</code>). To prepare the array into <code>input_values</code>, the <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoProcessor">AutoProcessor</a> should be used for padding and conversion into a tensor of type <code>torch.FloatTensor</code>. See <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor.__call__">Wav2Vec2Processor.<strong>call</strong>()</a> for details.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ConformerForCTC.forward.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ConformerForCTC.forward.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a></p> <div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400"> <p><code>attention_mask</code> should only be passed if the corresponding processor has <code>config.return_attention_mask == True</code>. 
For all models whose processor has <code>config.return_attention_mask == False</code>, such as <a href="https://huggingface.co/facebook/wav2vec2-conformer-rel-pos-large" rel="nofollow">wav2vec2-conformer-rel-pos-large</a>, <code>attention_mask</code> should <strong>not</strong> be passed to avoid degraded performance when doing batched inference. For such models <code>input_values</code> should simply be padded with 0 and passed without <code>attention_mask</code>. Be aware that these models also yield slightly different results depending on whether <code>input_values</code> is padded or not.</p> </div></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ConformerForCTC.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ConformerForCTC.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. See <code>attentions</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ConformerForCTC.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ConformerForCTC.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. 
See <code>hidden_states</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ConformerForCTC.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ConformerForCTC.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ConformerForCTC.forward.labels" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ConformerForCTC.forward.labels"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>labels</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, target_length)</code>, <em>optional</em>) — Labels for connectionist temporal classification. Note that <code>target_length</code> has to be smaller or equal to the sequence length of the output logits. Indices are selected in <code>[-100, 0, ..., config.vocab_size - 1]</code>. 
All labels set to <code>-100</code> are ignored (masked), the loss is only computed for labels in <code>[0, ..., config.vocab_size - 1]</code>.</span></span> </li></ul> <div id="transformers.Wav2Vec2ConformerForCTC.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.CausalLMOutput">transformers.modeling_outputs.CausalLMOutput</a> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.CausalLMOutput">transformers.modeling_outputs.CausalLMOutput</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer#transformers.Wav2Vec2ConformerConfig">Wav2Vec2ConformerConfig</a>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<code>torch.FloatTensor</code> of shape <code>(1,)</code>, <em>optional</em>, returned when <code>labels</code> is provided) — Language modeling loss (for next-token prediction).</p> </li> <li> <p><strong>logits</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, config.vocab_size)</code>) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-yxvx1p">The <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer#transformers.Wav2Vec2ConformerForCTC">Wav2Vec2ConformerForCTC</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a 
id="transformers.Wav2Vec2ConformerForCTC.forward.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ConformerForCTC.forward.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoProcessor, Wav2Vec2ConformerForCTC <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> datasets <span class="hljs-keyword">import</span> load_dataset <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span>dataset = load_dataset(<span class="hljs-string">"hf-internal-testing/librispeech_asr_demo"</span>, <span class="hljs-string">"clean"</span>, split=<span class="hljs-string">"validation"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>dataset = dataset.sort(<span class="hljs-string">"id"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>sampling_rate = dataset.features[<span class="hljs-string">"audio"</span>].sampling_rate <span class="hljs-meta">&gt;&gt;&gt; </span>processor = AutoProcessor.from_pretrained(<span class="hljs-string">"facebook/wav2vec2-conformer-rope-large-960h-ft"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = Wav2Vec2ConformerForCTC.from_pretrained(<span class="hljs-string">"facebook/wav2vec2-conformer-rope-large-960h-ft"</span>) <span 
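The warning above about `config.return_attention_mask == False` matters mostly for batched inference. The following is a minimal sketch of that pattern, not part of the original example: it assumes a checkpoint whose processor does not return an attention mask, lets the processor zero-pad `input_values` to a common length, and passes only `input_values` to the model.

```python
>>> from transformers import AutoProcessor, Wav2Vec2ConformerForCTC
>>> from datasets import load_dataset
>>> import torch

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")
>>> model = Wav2Vec2ConformerForCTC.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")

>>> # two utterances of different lengths: the processor zero-pads them to the same length
>>> audio_batch = [dataset[0]["audio"]["array"], dataset[1]["audio"]["array"]]
>>> inputs = processor(audio_batch, sampling_rate=sampling_rate, padding=True, return_tensors="pt")

>>> # when the processor has config.return_attention_mask == False,
>>> # only the zero-padded input_values are passed; no attention_mask
>>> with torch.no_grad():
...     logits = model(input_values=inputs.input_values).logits

>>> transcriptions = processor.batch_decode(torch.argmax(logits, dim=-1))
```

The rope checkpoint above is simply reused from the example for convenience; for a checkpoint whose processor does return an attention mask, you would instead pass the mask produced by the processor.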
## Wav2Vec2ConformerForSequenceClassification

`class transformers.Wav2Vec2ConformerForSequenceClassification(config)` ([source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py#L1727))

Parameters:

- **config** ([Wav2Vec2ConformerConfig](/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer#transformers.Wav2Vec2ConformerConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

Wav2Vec2Conformer Model with a sequence classification head on top (a linear layer over the pooled output) for tasks like SUPERB Keyword Spotting.

Wav2Vec2Conformer was proposed in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).

This model is a PyTorch [nn.Module](https://pytorch.org/docs/stable/nn.html#nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
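As a quick illustration (not part of the reference documentation above) of how this classification head is sized: the label space comes entirely from the configuration, so when fine-tuning for a task such as keyword spotting it is common to pass `num_labels` and the label mappings at load time. The label names below are invented for the example, and the backbone checkpoint is the CTC checkpoint used elsewhere on this page, so the newly initialized classification head still needs fine-tuning.

```python
>>> from transformers import Wav2Vec2ConformerForSequenceClassification

>>> # hypothetical keyword-spotting label set, purely for illustration
>>> labels = ["yes", "no", "up", "down"]
>>> label2id = {label: i for i, label in enumerate(labels)}
>>> id2label = {i: label for i, label in enumerate(labels)}

>>> # backbone weights come from the pretrained checkpoint; the pooled classification head
>>> # is newly initialized with config.num_labels outputs and must be fine-tuned
>>> model = Wav2Vec2ConformerForSequenceClassification.from_pretrained(
...     "facebook/wav2vec2-conformer-rope-large-960h-ft",
...     num_labels=len(labels),
...     label2id=label2id,
...     id2label=id2label,
... )
```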
**forward** `(input_values: typing.Optional[torch.Tensor], attention_mask: typing.Optional[torch.Tensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None, labels: typing.Optional[torch.Tensor] = None)` → [transformers.modeling_outputs.SequenceClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput) or `tuple(torch.FloatTensor)` ([source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py#L1762))

Parameters:

- **input_values** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`) — Float values of input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip install soundfile`). To prepare the array into `input_values`, the [AutoProcessor](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoProcessor) should be used for padding and conversion into a tensor of type `torch.FloatTensor`. See [`Wav2Vec2Processor.__call__()`](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor.__call__) for details.
- **attention_mask** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) `attention_mask` should only be passed if the corresponding processor has `config.return_attention_mask == True`. For all models whose processor has `config.return_attention_mask == False`, such as [wav2vec2-conformer-rel-pos-large](https://huggingface.co/facebook/wav2vec2-conformer-rel-pos-large), `attention_mask` should **not** be passed to avoid degraded performance when doing batched inference. For such models, `input_values` should simply be padded with 0 and passed without `attention_mask`. Be aware that these models also yield slightly different results depending on whether `input_values` is padded or not.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.
- **labels** (`torch.LongTensor` of shape `(batch_size,)`, *optional*) — Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss); if `config.num_labels > 1` a classification loss is computed (Cross-Entropy).

Returns: [transformers.modeling_outputs.SequenceClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput) or `tuple(torch.FloatTensor)`

A [transformers.modeling_outputs.SequenceClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([Wav2Vec2ConformerConfig](/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer#transformers.Wav2Vec2ConformerConfig)) and inputs.

- **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) — Classification (or regression if `config.num_labels == 1`) loss.
- **logits** (`torch.FloatTensor` of shape `(batch_size, config.num_labels)`) — Classification (or regression if `config.num_labels == 1`) scores (before SoftMax).
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The [Wav2Vec2ConformerForSequenceClassification](/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer#transformers.Wav2Vec2ConformerForSequenceClassification) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:

```python
>>> from transformers import AutoFeatureExtractor, Wav2Vec2ConformerForSequenceClassification
>>> from datasets import load_dataset
>>> import torch

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")
>>> model = Wav2Vec2ConformerForSequenceClassification.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")

>>> # audio file is decoded on the fly
>>> inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_class_ids = torch.argmax(logits, dim=-1).item()
>>> predicted_label = model.config.id2label[predicted_class_ids]

>>> # compute loss - target_label is e.g. "down"
>>> target_label = model.config.id2label[0]
>>> inputs["labels"] = torch.tensor([model.config.label2id[target_label]])
>>> loss = model(**inputs).loss
```
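If class probabilities are needed rather than just the arg-max label, a softmax can be applied to the `logits` produced in the example above. This is a small follow-on sketch, not part of the original example.

```python
>>> import torch

>>> # continuing from the example above: turn the classification logits into probabilities
>>> probabilities = torch.nn.functional.softmax(logits, dim=-1)[0]
>>> top_probability, top_id = probabilities.max(dim=-1)
>>> predicted_label = model.config.id2label[top_id.item()]
>>> confidence = round(top_probability.item(), 3)
```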
3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">Wav2Vec2ConformerForAudioFrameClassification</span></span></h3> <a id="transformers.Wav2Vec2ConformerForAudioFrameClassification" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.Wav2Vec2ConformerForAudioFrameClassification"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py#L1838" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ConformerForAudioFrameClassification.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ConformerForAudioFrameClassification.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 
43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer#transformers.Wav2Vec2ConformerConfig">Wav2Vec2ConformerConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-cgrrqr">Wav2Vec2Conformer Model with a frame classification head on top for tasks like Speaker Diarization.</p> <p data-svelte-h="svelte-uwmik9">Wav2Vec2Conformer was proposed in <a href="https://arxiv.org/abs/2006.11477" rel="nofollow">wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations</a> by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.</p> <p data-svelte-h="svelte-1e6yl4y">This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel">PreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving etc.).</p> <p data-svelte-h="svelte-1uzc5pd">This model is a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#nn.Module" rel="nofollow">nn.Module</a> sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.Wav2Vec2ConformerForAudioFrameClassification.forward"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>forward</span></h4> <a id="transformers.Wav2Vec2ConformerForAudioFrameClassification.forward" class="header-link 
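This excerpt does not include a usage example for the frame-classification head, so the snippet below is a minimal, hedged sketch of how it is typically called. The checkpoint is the CTC checkpoint used elsewhere on this page: its backbone loads fine, but the frame-classification head is randomly initialized, so the per-frame predictions are meaningless until the head has been fine-tuned (for example on a speaker-diarization dataset).

```python
>>> from transformers import AutoFeatureExtractor, Wav2Vec2ConformerForAudioFrameClassification
>>> from datasets import load_dataset
>>> import torch

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")
>>> # the frame-classification head on top of this backbone is randomly initialized
>>> model = Wav2Vec2ConformerForAudioFrameClassification.from_pretrained(
...     "facebook/wav2vec2-conformer-rope-large-960h-ft", num_labels=2
... )

>>> inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits  # shape: (batch_size, num_frames, num_labels)

>>> # after fine-tuning, each output frame gets a class decision, e.g. a speaker label
>>> frame_predictions = torch.argmax(logits, dim=-1)
```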
**forward** `(input_values: typing.Optional[torch.Tensor], attention_mask: typing.Optional[torch.Tensor] = None, labels: typing.Optional[torch.Tensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None)` → [transformers.modeling_outputs.TokenClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.TokenClassifierOutput) or `tuple(torch.FloatTensor)` ([source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py#L1873))

Parameters:

- **input_values** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`) — Float values of input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip install soundfile`). To prepare the array into `input_values`, the [AutoProcessor](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoProcessor) should be used for padding and conversion into a tensor of type `torch.FloatTensor`. See [`Wav2Vec2Processor.__call__()`](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor.__call__) for details.
- **attention_mask** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) `attention_mask` should only be passed if the corresponding processor has `config.return_attention_mask == True`. For all models whose processor has `config.return_attention_mask == False`, such as [wav2vec2-conformer-rel-pos-large](https://huggingface.co/facebook/wav2vec2-conformer-rel-pos-large), `attention_mask` should **not** be passed to avoid degraded performance when doing batched inference. For such models, `input_values` should simply be padded with 0 and passed without `attention_mask`. Be aware that these models also yield slightly different results depending on whether `input_values` is padded or not.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.
- **labels** (`torch.LongTensor` of shape `(batch_size,)`, *optional*) — Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., config.num_labels - 1]`.
If <code>config.num_labels == 1</code> a regression loss is computed (Mean-Square loss), If <code>config.num_labels &gt; 1</code> a classification loss is computed (Cross-Entropy).</span></span> </li></ul> <div id="transformers.Wav2Vec2ConformerForAudioFrameClassification.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.TokenClassifierOutput">transformers.modeling_outputs.TokenClassifierOutput</a> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.TokenClassifierOutput">transformers.modeling_outputs.TokenClassifierOutput</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer#transformers.Wav2Vec2ConformerConfig">Wav2Vec2ConformerConfig</a>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<code>torch.FloatTensor</code> of shape <code>(1,)</code>, <em>optional</em>, returned when <code>labels</code> is provided) — Classification loss.</p> </li> <li> <p><strong>logits</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, config.num_labels)</code>) — Classification scores (before SoftMax).</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-8g6q1r">The <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer#transformers.Wav2Vec2ConformerForAudioFrameClassification">Wav2Vec2ConformerForAudioFrameClassification</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative 
Example:

```python
>>> from transformers import AutoFeatureExtractor, Wav2Vec2ConformerForAudioFrameClassification
>>> from datasets import load_dataset
>>> import torch

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")
>>> model = Wav2Vec2ConformerForAudioFrameClassification.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")

>>> # audio file is decoded on the fly
>>> inputs = feature_extractor(dataset[0]["audio"]["array"], return_tensors="pt", sampling_rate=sampling_rate)
>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> probabilities = torch.sigmoid(logits[0])
>>> # labels is a one-hot array of shape (num_frames, num_speakers)
>>> labels = (probabilities > 0.5).long()
```
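As a complement to the example above, here is a minimal sketch (not part of the original documentation) of the padding behaviour described for `attention_mask`: for a checkpoint whose processor has `config.return_attention_mask == False`, such as [wav2vec2-conformer-rel-pos-large](https://huggingface.co/facebook/wav2vec2-conformer-rel-pos-large), batched inputs are simply zero-padded and the model is called without `attention_mask`. The base `Wav2Vec2ConformerModel` and the random waveforms are assumptions used purely for illustration.

```python
>>> import torch
>>> from transformers import AutoFeatureExtractor, Wav2Vec2ConformerModel

>>> # this checkpoint's feature extractor does not return an attention mask
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-conformer-rel-pos-large")
>>> model = Wav2Vec2ConformerModel.from_pretrained("facebook/wav2vec2-conformer-rel-pos-large")

>>> # two dummy utterances of different lengths, for illustration only
>>> raw_speech = [torch.randn(16000).numpy(), torch.randn(12000).numpy()]

>>> # padding=True zero-pads input_values to a common length; no attention_mask is returned for this checkpoint
>>> inputs = feature_extractor(raw_speech, sampling_rate=16000, return_tensors="pt", padding=True)
>>> with torch.no_grad():
...     hidden_states = model(**inputs).last_hidden_state
```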
class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">Wav2Vec2ConformerForXVector</span></span></h3> <a id="transformers.Wav2Vec2ConformerForXVector" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.Wav2Vec2ConformerForXVector"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py#L1992" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ConformerForXVector.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ConformerForXVector.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer#transformers.Wav2Vec2ConformerConfig">Wav2Vec2ConformerConfig</a>) — Model configuration class with all the parameters of the model. 
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1vfzkaz">Wav2Vec2Conformer Model with an XVector feature extraction head on top for tasks like Speaker Verification.</p> <p data-svelte-h="svelte-uwmik9">Wav2Vec2Conformer was proposed in <a href="https://arxiv.org/abs/2006.11477" rel="nofollow">wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations</a> by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.</p> <p data-svelte-h="svelte-1e6yl4y">This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel">PreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving etc.).</p> <p data-svelte-h="svelte-1uzc5pd">This model is a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#nn.Module" rel="nofollow">nn.Module</a> sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.Wav2Vec2ConformerForXVector.forward"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>forward</span></h4> <a id="transformers.Wav2Vec2ConformerForXVector.forward" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.Wav2Vec2ConformerForXVector.forward"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 
#### forward

( input_values: typing.Optional[torch.Tensor], attention_mask: typing.Optional[torch.Tensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None, labels: typing.Optional[torch.Tensor] = None ) → [transformers.modeling_outputs.XVectorOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.XVectorOutput) or `tuple(torch.FloatTensor)`

Parameters

- **input_values** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`) — Float values of the input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip install soundfile`). To prepare the array into `input_values`, the [AutoProcessor](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoProcessor) should be used for padding and conversion into a tensor of type `torch.FloatTensor`. See [Wav2Vec2Processor.__call__()](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor.__call__) for details.
- **attention_mask** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.

  [What are attention masks?](../glossary#attention-mask)

  `attention_mask` should only be passed if the corresponding processor has `config.return_attention_mask == True`. For all models whose processor has `config.return_attention_mask == False`, such as [wav2vec2-conformer-rel-pos-large](https://huggingface.co/facebook/wav2vec2-conformer-rel-pos-large), `attention_mask` should **not** be passed to avoid degraded performance when doing batched inference. For such models, `input_values` should simply be padded with 0 and passed without `attention_mask`. Be aware that these models also yield slightly different results depending on whether `input_values` is padded or not.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.
- **labels** (`torch.LongTensor` of shape `(batch_size,)`, *optional*) — Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., config.num_labels - 1]`. If `config.num_labels == 1`, a regression loss is computed (Mean-Squared loss); if `config.num_labels > 1`, a classification loss is computed (Cross-Entropy).

Returns

[transformers.modeling_outputs.XVectorOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.XVectorOutput) or `tuple(torch.FloatTensor)`

A [transformers.modeling_outputs.XVectorOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.XVectorOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([Wav2Vec2ConformerConfig](/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer#transformers.Wav2Vec2ConformerConfig)) and inputs.

- **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) — Classification loss.
- **logits** (`torch.FloatTensor` of shape `(batch_size, config.xvector_output_dim)`) — Classification hidden states before AMSoftmax.
- **embeddings** (`torch.FloatTensor` of shape `(batch_size, config.xvector_output_dim)`) — Utterance embeddings used for vector similarity-based retrieval.
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings plus one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The [Wav2Vec2ConformerForXVector](/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer#transformers.Wav2Vec2ConformerForXVector) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

```python
>>> from transformers import AutoFeatureExtractor, Wav2Vec2ConformerForXVector
>>> from datasets import load_dataset
>>> import torch

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")
>>> model = Wav2Vec2ConformerForXVector.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")

>>> # audio files are decoded on the fly
>>> inputs = feature_extractor(
...     [d["array"] for d in dataset[:2]["audio"]], sampling_rate=sampling_rate, return_tensors="pt", padding=True
... )
>>> with torch.no_grad():
...     embeddings = model(**inputs).embeddings

>>> embeddings = torch.nn.functional.normalize(embeddings, dim=-1).cpu()

>>> # the resulting embeddings can be used for cosine similarity-based retrieval
>>> cosine_sim = torch.nn.CosineSimilarity(dim=-1)
>>> similarity = cosine_sim(embeddings[0], embeddings[1])
>>> threshold = 0.7  # the optimal threshold is dataset-dependent
>>> if similarity < threshold:
...     print("Speakers are not the same!")
```
## Wav2Vec2ConformerForPreTraining

### class transformers.Wav2Vec2ConformerForPreTraining

( config: Wav2Vec2ConformerConfig )

Parameters

- **config** ([Wav2Vec2ConformerConfig](/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer#transformers.Wav2Vec2ConformerConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

Wav2Vec2Conformer Model with a quantizer and `VQ` head on top. Wav2Vec2Conformer was proposed in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, etc.).

This model is a PyTorch [nn.Module](https://pytorch.org/docs/stable/nn.html#nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
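Before the forward signature below, a brief usage sketch (not taken from the official documentation; the checkpoint name and the output fields `projected_states` / `projected_quantized_states` are assumptions based on the analogous Wav2Vec2 pretraining head): calling the model without `mask_time_indices` or `sampled_negative_indices` returns the projected transformer states and their quantized targets, but no contrastive loss is computed.

```python
>>> import torch
>>> from transformers import AutoFeatureExtractor, Wav2Vec2ConformerForPreTraining

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-conformer-rel-pos-large")
>>> model = Wav2Vec2ConformerForPreTraining.from_pretrained("facebook/wav2vec2-conformer-rel-pos-large")

>>> # a one-second dummy waveform at 16 kHz, purely for illustration
>>> inputs = feature_extractor(torch.randn(16000).numpy(), sampling_rate=16000, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # without mask_time_indices / sampled_negative_indices no contrastive loss is returned,
>>> # but the projected states and their quantized targets are still available
>>> projected_states = outputs.projected_states
>>> projected_quantized_states = outputs.projected_quantized_states
```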
<span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_values<span class="opacity-60">: typing.Optional[torch.Tensor]</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">mask_time_indices<span class="opacity-60">: typing.Optional[torch.BoolTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">sampled_negative_indices<span class="opacity-60">: typing.Optional[torch.BoolTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer#transformers.models.wav2vec2_conformer.modeling_wav2vec2_conformer.Wav2Vec2ConformerForPreTrainingOutput">transformers.models.wav2vec2_conformer.modeling_wav2vec2_conformer.Wav2Vec2ConformerForPreTrainingOutput</a> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 7 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ConformerForPreTraining.forward.input_values" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ConformerForPreTraining.forward.input_values"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 
1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_values</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>) — Float values of input raw speech waveform. Values can be obtained by loading a <code>.flac</code> or <code>.wav</code> audio file into an array of type <code>List[float]</code> or a <code>numpy.ndarray</code>, <em>e.g.</em> via the soundfile library (<code>pip install soundfile</code>). To prepare the array into <code>input_values</code>, the <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoProcessor">AutoProcessor</a> should be used for padding and conversion into a tensor of type <code>torch.FloatTensor</code>. See <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor.__call__">Wav2Vec2Processor.<strong>call</strong>()</a> for details.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ConformerForPreTraining.forward.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ConformerForPreTraining.forward.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a></p> <div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400"> <p><code>attention_mask</code> should only be passed if the corresponding processor has <code>config.return_attention_mask == True</code>. 
For all models whose processor has <code>config.return_attention_mask == False</code>, such as <a href="https://huggingface.co/facebook/wav2vec2-conformer-rel-pos-large" rel="nofollow">wav2vec2-conformer-rel-pos-large</a>, <code>attention_mask</code> should <strong>not</strong> be passed to avoid degraded performance when doing batched inference. For such models <code>input_values</code> should simply be padded with 0 and passed without <code>attention_mask</code>. Be aware that these models also yield slightly different results depending on whether <code>input_values</code> is padded or not.</p> </div></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ConformerForPreTraining.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ConformerForPreTraining.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. See <code>attentions</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ConformerForPreTraining.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ConformerForPreTraining.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. 
See <code>hidden_states</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ConformerForPreTraining.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ConformerForPreTraining.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ConformerForPreTraining.forward.mask_time_indices" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ConformerForPreTraining.forward.mask_time_indices"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>mask_time_indices</strong> (<code>torch.BoolTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Indices to mask extracted features for contrastive loss. 
When in training mode, model learns to predict masked extracted features in <em>config.proj_codevector_dim</em> space.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.Wav2Vec2ConformerForPreTraining.forward.sampled_negative_indices" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ConformerForPreTraining.forward.sampled_negative_indices"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>sampled_negative_indices</strong> (<code>torch.BoolTensor</code> of shape <code>(batch_size, sequence_length, num_negatives)</code>, <em>optional</em>) — Indices indicating which quantized target vectors are used as negative sampled vectors in contrastive loss. Required input for pre-training.</span></span> </li></ul> <div id="transformers.Wav2Vec2ConformerForPreTraining.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer#transformers.models.wav2vec2_conformer.modeling_wav2vec2_conformer.Wav2Vec2ConformerForPreTrainingOutput">transformers.models.wav2vec2_conformer.modeling_wav2vec2_conformer.Wav2Vec2ConformerForPreTrainingOutput</a> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer#transformers.models.wav2vec2_conformer.modeling_wav2vec2_conformer.Wav2Vec2ConformerForPreTrainingOutput">transformers.models.wav2vec2_conformer.modeling_wav2vec2_conformer.Wav2Vec2ConformerForPreTrainingOutput</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer#transformers.Wav2Vec2ConformerConfig">Wav2Vec2ConformerConfig</a>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<em>optional</em>, returned when <code>sample_negative_indices</code> are passed, <code>torch.FloatTensor</code> of shape <code>(1,)</code>) — Total loss as the sum of the contrastive loss (L_m) and the diversity loss (L_d) as stated in the <a href="https://arxiv.org/pdf/2006.11477.pdf" rel="nofollow">official paper</a> . 
(classification) loss.</p> </li> <li> <p><strong>projected_states</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, config.proj_codevector_dim)</code>) — Hidden-states of the model projected to <em>config.proj_codevector_dim</em> that can be used to predict the masked projected quantized states.</p> </li> <li> <p><strong>projected_quantized_states</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, config.proj_codevector_dim)</code>) — Quantized extracted feature vectors projected to <em>config.proj_codevector_dim</em> representing the positive target vectors for contrastive loss.</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> <li> <p><strong>contrastive_loss</strong> (<em>optional</em>, returned when <code>sample_negative_indices</code> are passed, <code>torch.FloatTensor</code> of shape <code>(1,)</code>) — The contrastive loss (L_m) as stated in the <a href="https://arxiv.org/pdf/2006.11477.pdf" rel="nofollow">official paper</a> .</p> </li> <li> <p><strong>diversity_loss</strong> (<em>optional</em>, returned when <code>sample_negative_indices</code> are passed, <code>torch.FloatTensor</code> of shape <code>(1,)</code>) — The diversity loss (L_d) as stated in the <a href="https://arxiv.org/pdf/2006.11477.pdf" rel="nofollow">official paper</a> .</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-qyayv1">The <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer#transformers.Wav2Vec2ConformerForPreTraining">Wav2Vec2ConformerForPreTraining</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.Wav2Vec2ConformerForPreTraining.forward.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.Wav2Vec2ConformerForPreTraining.forward.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" 
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoFeatureExtractor, Wav2Vec2ConformerForPreTraining <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers.models.wav2vec2_conformer.modeling_wav2vec2_conformer <span class="hljs-keyword">import</span> ( <span class="hljs-meta">... </span> _compute_mask_indices, <span class="hljs-meta">... </span> _sample_negative_indices, <span class="hljs-meta">... 
</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> datasets <span class="hljs-keyword">import</span> load_dataset <span class="hljs-meta">&gt;&gt;&gt; </span>feature_extractor = AutoFeatureExtractor.from_pretrained(<span class="hljs-string">"facebook/wav2vec2-conformer-rel-pos-large"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = Wav2Vec2ConformerForPreTraining.from_pretrained(<span class="hljs-string">"facebook/wav2vec2-conformer-rel-pos-large"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>ds = load_dataset(<span class="hljs-string">"hf-internal-testing/librispeech_asr_dummy"</span>, <span class="hljs-string">"clean"</span>, split=<span class="hljs-string">"validation"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>input_values = feature_extractor(ds[<span class="hljs-number">0</span>][<span class="hljs-string">"audio"</span>][<span class="hljs-string">"array"</span>], return_tensors=<span class="hljs-string">"pt"</span>).input_values <span class="hljs-comment"># Batch size 1</span> <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># compute masked indices</span> <span class="hljs-meta">&gt;&gt;&gt; </span>batch_size, raw_sequence_length = input_values.shape <span class="hljs-meta">&gt;&gt;&gt; </span>sequence_length = model._get_feat_extract_output_lengths(raw_sequence_length).item() <span class="hljs-meta">&gt;&gt;&gt; </span>mask_time_indices = _compute_mask_indices( <span class="hljs-meta">... </span> shape=(batch_size, sequence_length), mask_prob=<span class="hljs-number">0.2</span>, mask_length=<span class="hljs-number">2</span> <span class="hljs-meta">... </span>) <span class="hljs-meta">&gt;&gt;&gt; </span>sampled_negative_indices = _sample_negative_indices( <span class="hljs-meta">... </span> features_shape=(batch_size, sequence_length), <span class="hljs-meta">... </span> num_negatives=model.config.num_negatives, <span class="hljs-meta">... </span> mask_time_indices=mask_time_indices, <span class="hljs-meta">... </span>) <span class="hljs-meta">&gt;&gt;&gt; </span>mask_time_indices = torch.tensor(data=mask_time_indices, device=input_values.device, dtype=torch.long) <span class="hljs-meta">&gt;&gt;&gt; </span>sampled_negative_indices = torch.tensor( <span class="hljs-meta">... </span> data=sampled_negative_indices, device=input_values.device, dtype=torch.long <span class="hljs-meta">... </span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">with</span> torch.no_grad(): <span class="hljs-meta">... 
</span> outputs = model(input_values, mask_time_indices=mask_time_indices) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># compute cosine similarity between predicted (=projected_states) and target (=projected_quantized_states)</span> <span class="hljs-meta">&gt;&gt;&gt; </span>cosine_sim = torch.cosine_similarity(outputs.projected_states, outputs.projected_quantized_states, dim=-<span class="hljs-number">1</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># show that cosine similarity is much higher than random</span> <span class="hljs-meta">&gt;&gt;&gt; </span>cosine_sim[mask_time_indices.to(torch.<span class="hljs-built_in">bool</span>)].mean() &gt; <span class="hljs-number">0.5</span> tensor(<span class="hljs-literal">True</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># for contrastive loss training model should be put into train mode</span> <span class="hljs-meta">&gt;&gt;&gt; </span>model = model.train() <span class="hljs-meta">&gt;&gt;&gt; </span>loss = model( <span class="hljs-meta">... </span> input_values, mask_time_indices=mask_time_indices, sampled_negative_indices=sampled_negative_indices <span class="hljs-meta">... </span>).loss</pre></div></div></div></div> <p></p> <div id="svelte-announcer" aria-live="assertive" aria-atomic="true" style="position: absolute; left: 0px; top: 0px; clip: rect(0px, 0px, 0px, 0px); clip-path: inset(50%); overflow: hidden; white-space: nowrap; width: 1px; height: 1px;"></div></div> <div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>Wav2Vec2</a> <a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">Wav2Vec2Phoneme<span class="ml-2 translate-y-px">→</span></a></div></div></div> <div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" data-props="{&quot;chapter&quot;:{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;wav2vec2conformer&quot;,&quot;url&quot;:&quot;#wav2vec2conformer&quot;,&quot;sections&quot;:[{&quot;title&quot;:&quot;Overview&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;overview&quot;,&quot;url&quot;:&quot;#overview&quot;},{&quot;title&quot;:&quot;Documentation resources&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;documentation-resources&quot;,&quot;url&quot;:&quot;#documentation-resources&quot;},{&quot;title&quot;:&quot;Wav2Vec2ConformerConfig&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.Wav2Vec2ConformerConfig&quot;,&quot;url&quot;:&quot;#transformers.Wav2Vec2ConformerConfig&quot;},{&quot;title&quot;:&quot;Wav2Vec2Conformer specific 
2023-10-05T13:33:29.581Z
X-CLIP
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/xclip
# X-CLIP ## Overview The X-CLIP model was proposed in [Expanding Language-Image Pretrained Models for General Video Recognition](https://arxiv.org/abs/2208.02816) by Bolin Ni, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, Haibin Ling. X-CLIP is a minimal extension of [CLIP](clip) for video. The model consists of a text encoder, a cross-frame vision encoder, a multi-frame integration Transformer, and a video-specific prompt generator. The abstract from the paper is the following: _Contrastive language-image pretraining has shown great success in learning visual-textual joint representation from web-scale data, demonstrating remarkable “zero-shot” generalization ability for various image tasks. However, how to effectively expand such new language-image pretraining methods to video domains is still an open problem. In this work, we present a simple yet effective approach that adapts the pretrained language-image models to video recognition directly, instead of pretraining a new model from scratch. More concretely, to capture the long-range dependencies of frames along the temporal dimension, we propose a cross-frame attention mechanism that explicitly exchanges information across frames. Such module is lightweight and can be plugged into pretrained language-image models seamlessly. Moreover, we propose a video-specific prompting scheme, which leverages video content information for generating discriminative textual prompts. Extensive experiments demonstrate that our approach is effective and can be generalized to different video recognition scenarios. In particular, under fully-supervised settings, our approach achieves a top-1 accuracy of 87.1% on Kinetics-400, while using 12 times fewer FLOPs compared with Swin-L and ViViT-H. In zero-shot experiments, our approach surpasses the current state-of-the-art methods by +7.6% and +14.9% in terms of top-1 accuracy under two popular protocols. In few-shot scenarios, our approach outperforms previous best methods by +32.1% and +23.1% when the labeled data is extremely limited._ Tips: - Usage of X-CLIP is identical to [CLIP](clip). ![drawing](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/xclip_architecture.png) X-CLIP architecture. Taken from the [original paper.](https://arxiv.org/abs/2208.02816) This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/microsoft/VideoX/tree/master/X-CLIP). ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with X-CLIP. - Demo notebooks for X-CLIP can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/X-CLIP). If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. ## XCLIPProcessor ### class transformers.XCLIPProcessor [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/x_clip/processing_x_clip.py#L25) ( image\_processor = None tokenizer = None \*\*kwargs ) Parameters - **image\_processor** ([VideoMAEImageProcessor](/docs/transformers/v4.34.0/en/model_doc/videomae#transformers.VideoMAEImageProcessor)) — The image processor is a required input.
- **tokenizer** ([CLIPTokenizerFast](/docs/transformers/v4.34.0/en/model_doc/clip#transformers.CLIPTokenizerFast)) — The tokenizer is a required input. Constructs an X-CLIP processor which wraps a VideoMAE image processor and a CLIP tokenizer into a single processor. [XCLIPProcessor](/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPProcessor) offers all the functionalities of [VideoMAEImageProcessor](/docs/transformers/v4.34.0/en/model_doc/videomae#transformers.VideoMAEImageProcessor) and [CLIPTokenizerFast](/docs/transformers/v4.34.0/en/model_doc/clip#transformers.CLIPTokenizerFast). See the `__call__()` and [decode()](/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPProcessor.decode) for more information. #### batch\_decode This method forwards all its arguments to CLIPTokenizerFast’s [batch\_decode()](/docs/transformers/v4.34.0/en/model_doc/speecht5#transformers.SpeechT5Tokenizer.batch_decode). Please refer to the docstring of this method for more information. #### decode This method forwards all its arguments to CLIPTokenizerFast’s [decode()](/docs/transformers/v4.34.0/en/model_doc/speecht5#transformers.SpeechT5Tokenizer.decode). Please refer to the docstring of this method for more information.
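A minimal usage sketch (not an official example): it assumes the `microsoft/xclip-base-patch32` checkpoint referenced elsewhere on this page, and uses random dummy frames in place of a real decoded clip. The call signature mirrors the [XCLIPModel](/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPModel) forward example further below, which shows how real frames are sampled with PyAV.

```
>>> import numpy as np
>>> from transformers import XCLIPProcessor

>>> processor = XCLIPProcessor.from_pretrained("microsoft/xclip-base-patch32")

>>> # eight dummy RGB frames standing in for a sampled video clip (assumption for illustration only)
>>> video = [np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8) for _ in range(8)]

>>> # tokenizes the text prompts and resizes/normalizes the frames in a single call
>>> inputs = processor(
...     text=["playing sports", "eating spaghetti"],
...     videos=video,
...     return_tensors="pt",
...     padding=True,
... )
>>> sorted(inputs.keys())  # typically: attention_mask, input_ids, pixel_values
```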
## XCLIPConfig ### class transformers.XCLIPConfig [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/x_clip/configuration_x_clip.py#L264) ( text\_config = None vision\_config = None projection\_dim = 512 prompt\_layers = 2 prompt\_alpha = 0.1 prompt\_hidden\_act = 'quick\_gelu' prompt\_num\_attention\_heads = 8 prompt\_attention\_dropout = 0.0 prompt\_projection\_dropout = 0.0 logit\_scale\_init\_value = 2.6592 \*\*kwargs ) Parameters - **text\_config** (`dict`, _optional_) — Dictionary of configuration options used to initialize [XCLIPTextConfig](/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPTextConfig). - **vision\_config** (`dict`, _optional_) — Dictionary of configuration options used to initialize [XCLIPVisionConfig](/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPVisionConfig). - **projection\_dim** (`int`, _optional_, defaults to 512) — Dimensionality of the text and vision projection layers. - **prompt\_layers** (`int`, _optional_, defaults to 2) — Number of layers in the video specific prompt generator. - **prompt\_alpha** (`float`, _optional_, defaults to 0.1) — Alpha value to use in the video specific prompt generator. - **prompt\_hidden\_act** (`str` or `function`, _optional_, defaults to `"quick_gelu"`) — The non-linear activation function (function or string) in the video specific prompt generator. If string, `"gelu"`, `"relu"`, `"selu"`, `"gelu_new"` and `"quick_gelu"` are supported. - **prompt\_num\_attention\_heads** (`int`, _optional_, defaults to 8) — Number of attention heads in the cross-attention of the video specific prompt generator. - **prompt\_attention\_dropout** (`float`, _optional_, defaults to 0.0) — The dropout probability for the attention layers in the video specific prompt generator. - **prompt\_projection\_dropout** (`float`, _optional_, defaults to 0.0) — The dropout probability for the projection layers in the video specific prompt generator. - **logit\_scale\_init\_value** (`float`, _optional_, defaults to 2.6592) — The initial value of the _logit\_scale_ parameter. Default is used as per the original XCLIP implementation. - **kwargs** (_optional_) — Dictionary of keyword arguments. [XCLIPConfig](/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPConfig) is the configuration class to store the configuration of a [XCLIPModel](/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPModel). It is used to instantiate an X-CLIP model according to the specified arguments, defining the text model and vision model configs. Instantiating a configuration with the defaults will yield a similar configuration to that of the X-CLIP [microsoft/xclip-base-patch32](https://huggingface.co/microsoft/xclip-base-patch32) architecture. Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information. #### from\_text\_vision\_configs [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/x_clip/configuration_x_clip.py#L407) ( text\_config: XCLIPTextConfig vision\_config: XCLIPVisionConfig \*\*kwargs ) → [XCLIPConfig](/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPConfig) — An instance of a configuration object. Instantiate a [XCLIPConfig](/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPConfig) (or a derived class) from xclip text model configuration and xclip vision model configuration.
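A minimal sketch of `from_text_vision_configs`, building the composite configuration from default sub-configurations (based on the signature shown above):

```
>>> from transformers import XCLIPTextConfig, XCLIPVisionConfig, XCLIPConfig

>>> # Combine a text and a vision sub-configuration into a single XCLIPConfig
>>> text_config = XCLIPTextConfig()
>>> vision_config = XCLIPVisionConfig()
>>> config = XCLIPConfig.from_text_vision_configs(text_config, vision_config)
```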
## XCLIPTextConfig ### class transformers.XCLIPTextConfig [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/x_clip/configuration_x_clip.py#L31) ( vocab\_size = 49408 hidden\_size = 512 intermediate\_size = 2048 num\_hidden\_layers = 12 num\_attention\_heads = 8 max\_position\_embeddings = 77 hidden\_act = 'quick\_gelu' layer\_norm\_eps = 1e-05 attention\_dropout = 0.0 initializer\_range = 0.02 initializer\_factor = 1.0 pad\_token\_id = 1 bos\_token\_id = 0 eos\_token\_id = 2 \*\*kwargs ) Parameters - **vocab\_size** (`int`, _optional_, defaults to 49408) — Vocabulary size of the X-CLIP text model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [XCLIPModel](/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPModel). - **hidden\_size** (`int`, _optional_, defaults to 512) — Dimensionality of the encoder layers and the pooler layer. - **intermediate\_size** (`int`, _optional_, defaults to 2048) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder. - **num\_hidden\_layers** (`int`, _optional_, defaults to 12) — Number of hidden layers in the Transformer encoder. - **num\_attention\_heads** (`int`, _optional_, defaults to 8) — Number of attention heads for each attention layer in the Transformer encoder. - **max\_position\_embeddings** (`int`, _optional_, defaults to 77) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). - **hidden\_act** (`str` or `function`, _optional_, defaults to `"quick_gelu"`) — The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"`, `"gelu_new"` and `"quick_gelu"` are supported. - **layer\_norm\_eps** (`float`, _optional_, defaults to 1e-5) — The epsilon used by the layer normalization layers. - **attention\_dropout** (`float`, _optional_, defaults to 0.0) — The dropout ratio for the attention probabilities. - **initializer\_range** (`float`, _optional_, defaults to 0.02) — The standard deviation of the truncated\_normal\_initializer for initializing all weight matrices. - **initializer\_factor** (`float`, _optional_, defaults to 1) — A factor for initializing all weight matrices (should be kept to 1, used internally for initialization testing). This is the configuration class to store the configuration of a [XCLIPModel](/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPModel). It is used to instantiate an X-CLIP model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the X-CLIP [microsoft/xclip-base-patch32](https://huggingface.co/microsoft/xclip-base-patch32) architecture. Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information. Example:

```
>>> from transformers import XCLIPTextModel, XCLIPTextConfig

>>> # Initializing a XCLIPTextConfig with default (microsoft/xclip-base-patch32 style) values
>>> configuration = XCLIPTextConfig()

>>> # Initializing a XCLIPTextModel (with random weights) from that configuration
>>> model = XCLIPTextModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```

## XCLIPVisionConfig ### class transformers.XCLIPVisionConfig [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/x_clip/configuration_x_clip.py#L137) ( hidden\_size = 768 intermediate\_size = 3072 num\_hidden\_layers = 12 num\_attention\_heads = 12 mit\_hidden\_size = 512 mit\_intermediate\_size = 2048 mit\_num\_hidden\_layers = 1 mit\_num\_attention\_heads = 8 num\_channels = 3 image\_size = 224 patch\_size = 32 num\_frames = 8 hidden\_act = 'quick\_gelu' layer\_norm\_eps = 1e-05 attention\_dropout = 0.0 initializer\_range = 0.02 initializer\_factor = 1.0 drop\_path\_rate = 0.0 \*\*kwargs ) Parameters - **hidden\_size** (`int`, _optional_, defaults to 768) — Dimensionality of the encoder layers and the pooler layer. - **intermediate\_size** (`int`, _optional_, defaults to 3072) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder. - **num\_hidden\_layers** (`int`, _optional_, defaults to 12) — Number of hidden layers in the Transformer encoder. - **num\_attention\_heads** (`int`, _optional_, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder. - **mit\_hidden\_size** (`int`, _optional_, defaults to 512) — Dimensionality of the encoder layers of the Multiframe Integration Transformer (MIT). - **mit\_intermediate\_size** (`int`, _optional_, defaults to 2048) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Multiframe Integration Transformer (MIT). - **mit\_num\_hidden\_layers** (`int`, _optional_, defaults to 1) — Number of hidden layers in the Multiframe Integration Transformer (MIT). - **mit\_num\_attention\_heads** (`int`, _optional_, defaults to 8) — Number of attention heads for each attention layer in the Multiframe Integration Transformer (MIT). - **image\_size** (`int`, _optional_, defaults to 224) — The size (resolution) of each image. - **patch\_size** (`int`, _optional_, defaults to 32) — The size (resolution) of each patch. - **hidden\_act** (`str` or `function`, _optional_, defaults to `"quick_gelu"`) — The non-linear activation function (function or string) in the encoder and pooler.
If string, `"gelu"`, `"relu"`, `"selu"`, `"gelu_new"` and \``"quick_gelu"` are supported. - **layer\_norm\_eps** (`float`, _optional_, defaults to 1e-5) — The epsilon used by the layer normalization layers. - **attention\_dropout** (`float`, _optional_, defaults to 0.0) — The dropout ratio for the attention probabilities. - **initializer\_range** (`float`, _optional_, defaults to 0.02) — The standard deviation of the truncated\_normal\_initializer for initializing all weight matrices. - **initializer\_factor** (\`float“, _optional_, defaults to 1) — A factor for initializing all weight matrices (should be kept to 1, used internally for initialization testing). - **drop\_path\_rate** (`float`, _optional_, defaults to 0.0) — Stochastic depth rate. This is the configuration class to store the configuration of a [XCLIPModel](/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPModel). It is used to instantiate an X-CLIP model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the X-CLIP [microsoft/xclip-base-patch32](https://huggingface.co/microsoft/xclip-base-patch32) architecture. Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information. Example: ``` >>> from transformers import XCLIPVisionModel, XCLIPVisionConfig >>> >>> configuration = XCLIPVisionConfig() >>> >>> model = XCLIPVisionModel(configuration) >>> >>> configuration = model.config ``` ## XCLIPModel ### class transformers.XCLIPModel [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/x_clip/modeling_x_clip.py#L1292) ( config: XCLIPConfig ) Parameters - **config** ([XCLIPConfig](/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/x_clip/modeling_x_clip.py#L1501) ( input\_ids: typing.Optional\[torch.LongTensor\] = None pixel\_values: typing.Optional\[torch.FloatTensor\] = None attention\_mask: typing.Optional\[torch.Tensor\] = None position\_ids: typing.Optional\[torch.LongTensor\] = None return\_loss: typing.Optional\[bool\] = None output\_attentions: typing.Optional\[bool\] = None output\_hidden\_states: typing.Optional\[bool\] = None return\_dict: typing.Optional\[bool\] = None ) → `transformers.models.x_clip.modeling_x_clip.XCLIPOutput` or `tuple(torch.FloatTensor)` Parameters - **input\_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. 
Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.**call**()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details. [What are input IDs?](../glossary#input-ids) - **attention\_mask** (`torch.Tensor` of shape `(batch_size, sequence_length)`, _optional_) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - **position\_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, _optional_) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids) - **pixel\_values** (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using [AutoImageProcessor](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoImageProcessor). See [CLIPImageProcessor.**call**()](/docs/transformers/v4.34.0/en/model_doc/deit#transformers.DeiTFeatureExtractor.__call__) for details. - **return\_loss** (`bool`, _optional_) — Whether or not to return the contrastive loss. - **output\_attentions** (`bool`, _optional_) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. - **output\_hidden\_states** (`bool`, _optional_) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. - **return\_dict** (`bool`, _optional_) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple. Returns `transformers.models.x_clip.modeling_x_clip.XCLIPOutput` or `tuple(torch.FloatTensor)` A `transformers.models.x_clip.modeling_x_clip.XCLIPOutput` or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration (`<class 'transformers.models.x_clip.configuration_x_clip.XCLIPConfig'>`) and inputs. - **loss** (`torch.FloatTensor` of shape `(1,)`, _optional_, returned when `return_loss` is `True`) — Contrastive loss for video-text similarity. - **logits\_per\_video** (`torch.FloatTensor` of shape `(video_batch_size, text_batch_size)`) — The scaled dot product scores between `video_embeds` and `text_embeds`. This represents the video-text similarity scores. - **logits\_per\_text** (`torch.FloatTensor` of shape `(text_batch_size, video_batch_size)`) — The scaled dot product scores between `text_embeds` and `video_embeds`. This represents the text-video similarity scores. - **text\_embeds(`torch.FloatTensor`** of shape `(batch_size, output_dim`) — The text embeddings obtained by applying the projection layer to the pooled output of [XCLIPTextModel](/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPTextModel). 
- **video\_embeds** (`torch.FloatTensor` of shape `(batch_size, output_dim)`) — The video embeddings obtained by applying the projection layer to the pooled output of [XCLIPVisionModel](/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPVisionModel).
- **text\_model\_output** (`BaseModelOutputWithPooling`) — The output of the [XCLIPTextModel](/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPTextModel).
- **vision\_model\_output** (`BaseModelOutputWithPooling`) — The output of the [XCLIPVisionModel](/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPVisionModel).
- **mit\_output** (`BaseModelOutputWithPooling`) — The output of `XCLIPMultiframeIntegrationTransformer` (MIT for short).

The [XCLIPModel](/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPModel) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Examples:

```
>>> import av
>>> import torch
>>> import numpy as np

>>> from transformers import AutoProcessor, AutoModel
>>> from huggingface_hub import hf_hub_download

>>> np.random.seed(0)


>>> def read_video_pyav(container, indices):
...     '''
...     Decode the video with PyAV decoder.
...     Args:
...         container (`av.container.input.InputContainer`): PyAV container.
...         indices (`List[int]`): List of frame indices to decode.
...     Returns:
...         result (np.ndarray): np array of decoded frames of shape (num_frames, height, width, 3).
...     '''
...     frames = []
...     container.seek(0)
...     start_index = indices[0]
...     end_index = indices[-1]
...     for i, frame in enumerate(container.decode(video=0)):
...         if i > end_index:
...             break
...         if i >= start_index and i in indices:
...             frames.append(frame)
...     return np.stack([x.to_ndarray(format="rgb24") for x in frames])


>>> def sample_frame_indices(clip_len, frame_sample_rate, seg_len):
...     '''
...     Sample a given number of frame indices from the video.
...     Args:
...         clip_len (`int`): Total number of frames to sample.
...         frame_sample_rate (`int`): Sample every n-th frame.
...         seg_len (`int`): Maximum allowed index of sample's last frame.
...     Returns:
...         indices (`List[int]`): List of sampled frame indices
...     '''
...     converted_len = int(clip_len * frame_sample_rate)
...     end_idx = np.random.randint(converted_len, seg_len)
...     start_idx = end_idx - converted_len
...     indices = np.linspace(start_idx, end_idx, num=clip_len)
...     indices = np.clip(indices, start_idx, end_idx - 1).astype(np.int64)
...     return indices


>>> # download an example video from the Hub and open it with PyAV
>>> file_path = hf_hub_download(
...     repo_id="nielsr/video-demo", filename="eating_spaghetti.mp4", repo_type="dataset"
... )
>>> container = av.open(file_path)

>>> # sample 8 frames from the video
>>> indices = sample_frame_indices(clip_len=8, frame_sample_rate=1, seg_len=container.streams.video[0].frames)
>>> video = read_video_pyav(container, indices)

>>> processor = AutoProcessor.from_pretrained("microsoft/xclip-base-patch32")
>>> model = AutoModel.from_pretrained("microsoft/xclip-base-patch32")

>>> inputs = processor(
...     text=["playing sports", "eating spaghetti", "go shopping"],
...     videos=list(video),
...     return_tensors="pt",
...     padding=True,
... )

>>> # forward pass
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> logits_per_video = outputs.logits_per_video
>>> probs = logits_per_video.softmax(dim=1)
>>> print(probs)
tensor([[1.9496e-04, 9.9960e-01, 2.0825e-04]])
```

#### get\_text\_features

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/x_clip/modeling_x_clip.py#L1339)

( input\_ids: typing.Optional\[torch.Tensor\] = None attention\_mask: typing.Optional\[torch.Tensor\] = None position\_ids: typing.Optional\[torch.Tensor\] = None output\_attentions: typing.Optional\[bool\] = None output\_hidden\_states: typing.Optional\[bool\] = None return\_dict: typing.Optional\[bool\] = None ) → text\_features (`torch.FloatTensor` of shape `(batch_size, output_dim)`)

Parameters

- **input\_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.\_\_call\_\_()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details. [What are input IDs?](../glossary#input-ids)
- **attention\_mask** (`torch.Tensor` of shape `(batch_size, sequence_length)`, _optional_) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.

  [What are attention masks?](../glossary#attention-mask)
- **position\_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, _optional_) — Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids)
- **output\_attentions** (`bool`, _optional_) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output\_hidden\_states** (`bool`, _optional_) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return\_dict** (`bool`, _optional_) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.

Returns text\_features (`torch.FloatTensor` of shape `(batch_size, output_dim)`)

The text embeddings obtained by applying the projection layer to the pooled output of [XCLIPTextModel](/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPTextModel).

The [XCLIPModel](/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPModel) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
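In practice, the text features returned here can be paired with the output of `get_video_features()` (documented below) to score video-text similarity manually. The snippet below is a minimal, illustrative sketch rather than canonical API usage: it assumes `model`, `processor`, and `inputs` prepared as in the forward example above, L2-normalizes both embeddings, and takes their dot product. The model's forward pass additionally applies a learned temperature scaling to these scores, which this sketch omits.

```
>>> import torch
>>> import torch.nn.functional as F

>>> # assumes `model` and `inputs` from the forward example above
>>> with torch.no_grad():
...     text_features = model.get_text_features(
...         input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"]
...     )
...     video_features = model.get_video_features(pixel_values=inputs["pixel_values"])

>>> # cosine similarity between each video and each text prompt (no temperature scaling)
>>> text_features = F.normalize(text_features, dim=-1)
>>> video_features = F.normalize(video_features, dim=-1)
>>> similarity = video_features @ text_features.T  # shape: (video_batch_size, text_batch_size)
```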
Examples:

```
>>> from transformers import AutoTokenizer, AutoModel

>>> tokenizer = AutoTokenizer.from_pretrained("microsoft/xclip-base-patch32")
>>> model = AutoModel.from_pretrained("microsoft/xclip-base-patch32")

>>> inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt")
>>> text_features = model.get_text_features(**inputs)
```

#### get\_video\_features

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/x_clip/modeling_x_clip.py#L1386)

( pixel\_values: typing.Optional\[torch.FloatTensor\] = None output\_attentions: typing.Optional\[bool\] = None output\_hidden\_states: typing.Optional\[bool\] = None return\_dict: typing.Optional\[bool\] = None ) → video\_features (`torch.FloatTensor` of shape `(batch_size, output_dim)`)

Parameters

- **pixel\_values** (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using [AutoImageProcessor](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoImageProcessor). See [CLIPImageProcessor.\_\_call\_\_()](/docs/transformers/v4.34.0/en/model_doc/deit#transformers.DeiTFeatureExtractor.__call__) for details.
- **output\_attentions** (`bool`, _optional_) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output\_hidden\_states** (`bool`, _optional_) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return\_dict** (`bool`, _optional_) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.

Returns video\_features (`torch.FloatTensor` of shape `(batch_size, output_dim)`)

The video embeddings obtained by applying the projection layer to the pooled output of [XCLIPVisionModel](/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPVisionModel) and `XCLIPMultiframeIntegrationTransformer`.

The [XCLIPModel](/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPModel) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Examples:

```
>>> import av
>>> import torch
>>> import numpy as np

>>> from transformers import AutoProcessor, AutoModel
>>> from huggingface_hub import hf_hub_download

>>> np.random.seed(0)


>>> def read_video_pyav(container, indices):
...     '''
...     Decode the video with PyAV decoder.
...     Args:
...         container (`av.container.input.InputContainer`): PyAV container.
...         indices (`List[int]`): List of frame indices to decode.
...     Returns:
...         result (np.ndarray): np array of decoded frames of shape (num_frames, height, width, 3).
...     '''
...     frames = []
...     container.seek(0)
...     start_index = indices[0]
...     end_index = indices[-1]
...     for i, frame in enumerate(container.decode(video=0)):
...         if i > end_index:
...             break
...         if i >= start_index and i in indices:
...             frames.append(frame)
...     return np.stack([x.to_ndarray(format="rgb24") for x in frames])


>>> def sample_frame_indices(clip_len, frame_sample_rate, seg_len):
...     '''
...     Sample a given number of frame indices from the video.
...     Args:
...         clip_len (`int`): Total number of frames to sample.
...         frame_sample_rate (`int`): Sample every n-th frame.
...         seg_len (`int`): Maximum allowed index of sample's last frame.
...     Returns:
...         indices (`List[int]`): List of sampled frame indices
...     '''
...     converted_len = int(clip_len * frame_sample_rate)
...     end_idx = np.random.randint(converted_len, seg_len)
...     start_idx = end_idx - converted_len
...     indices = np.linspace(start_idx, end_idx, num=clip_len)
...     indices = np.clip(indices, start_idx, end_idx - 1).astype(np.int64)
...     return indices


>>> # download an example video from the Hub and open it with PyAV
>>> file_path = hf_hub_download(
...     repo_id="nielsr/video-demo", filename="eating_spaghetti.mp4", repo_type="dataset"
... )
>>> container = av.open(file_path)

>>> # sample 8 frames from the video
>>> indices = sample_frame_indices(clip_len=8, frame_sample_rate=1, seg_len=container.streams.video[0].frames)
>>> video = read_video_pyav(container, indices)

>>> processor = AutoProcessor.from_pretrained("microsoft/xclip-base-patch32")
>>> model = AutoModel.from_pretrained("microsoft/xclip-base-patch32")

>>> inputs = processor(videos=list(video), return_tensors="pt")
>>> video_features = model.get_video_features(**inputs)
```

## XCLIPTextModel

### class transformers.XCLIPTextModel

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/x_clip/modeling_x_clip.py#L833)

( config: XCLIPTextConfig )

#### forward

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/x_clip/modeling_x_clip.py#L848)

( input\_ids: typing.Optional\[torch.Tensor\] = None attention\_mask: typing.Optional\[torch.Tensor\] = None position\_ids: typing.Optional\[torch.Tensor\] = None output\_attentions: typing.Optional\[bool\] = None output\_hidden\_states: typing.Optional\[bool\] = None return\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.BaseModelOutputWithPooling](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutputWithPooling) or `tuple(torch.FloatTensor)`

Parameters

- **input\_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.\_\_call\_\_()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details. [What are input IDs?](../glossary#input-ids)
- **attention\_mask** (`torch.Tensor` of shape `(batch_size, sequence_length)`, _optional_) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.

  [What are attention masks?](../glossary#attention-mask)
- **position\_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, _optional_) — Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids)
- **output\_attentions** (`bool`, _optional_) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output\_hidden\_states** (`bool`, _optional_) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return\_dict** (`bool`, _optional_) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.

A [transformers.modeling\_outputs.BaseModelOutputWithPooling](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutputWithPooling) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration (`<class 'transformers.models.x_clip.configuration_x_clip.XCLIPTextConfig'>`) and inputs.

- **last\_hidden\_state** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`) — Sequence of hidden-states at the output of the last layer of the model.
- **pooler\_output** (`torch.FloatTensor` of shape `(batch_size, hidden_size)`) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.
- **hidden\_states** (`tuple(torch.FloatTensor)`, _optional_, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, _optional_, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The [XCLIPTextModel](/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPTextModel) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Examples:

```
>>> from transformers import AutoTokenizer, XCLIPTextModel

>>> model = XCLIPTextModel.from_pretrained("microsoft/xclip-base-patch32")
>>> tokenizer = AutoTokenizer.from_pretrained("microsoft/xclip-base-patch32")

>>> inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt")

>>> outputs = model(**inputs)
>>> last_hidden_state = outputs.last_hidden_state
>>> pooled_output = outputs.pooler_output
```

## XCLIPVisionModel

### class transformers.XCLIPVisionModel

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/x_clip/modeling_x_clip.py#L1048)

( config: XCLIPVisionConfig )

#### forward

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/x_clip/modeling_x_clip.py#L1061)

( pixel\_values: typing.Optional\[torch.FloatTensor\] = None output\_attentions: typing.Optional\[bool\] = None output\_hidden\_states: typing.Optional\[bool\] = None return\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.BaseModelOutputWithPooling](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutputWithPooling) or `tuple(torch.FloatTensor)`

Parameters

- **pixel\_values** (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using [AutoImageProcessor](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoImageProcessor). See [CLIPImageProcessor.\_\_call\_\_()](/docs/transformers/v4.34.0/en/model_doc/deit#transformers.DeiTFeatureExtractor.__call__) for details.
- **output\_attentions** (`bool`, _optional_) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output\_hidden\_states** (`bool`, _optional_) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return\_dict** (`bool`, _optional_) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.

A [transformers.modeling\_outputs.BaseModelOutputWithPooling](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutputWithPooling) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration (`<class 'transformers.models.x_clip.configuration_x_clip.XCLIPVisionConfig'>`) and inputs.

- **last\_hidden\_state** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`) — Sequence of hidden-states at the output of the last layer of the model.
- **pooler\_output** (`torch.FloatTensor` of shape `(batch_size, hidden_size)`) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.
- **hidden\_states** (`tuple(torch.FloatTensor)`, _optional_, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, _optional_, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The [XCLIPVisionModel](/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPVisionModel) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Examples:

```
>>> import av
>>> import torch
>>> import numpy as np

>>> from transformers import AutoProcessor, XCLIPVisionModel
>>> from huggingface_hub import hf_hub_download

>>> np.random.seed(0)


>>> def read_video_pyav(container, indices):
...     '''
...     Decode the video with PyAV decoder.
...     Args:
...         container (`av.container.input.InputContainer`): PyAV container.
...         indices (`List[int]`): List of frame indices to decode.
...     Returns:
...         result (np.ndarray): np array of decoded frames of shape (num_frames, height, width, 3).
...     '''
...     frames = []
...     container.seek(0)
...     start_index = indices[0]
...     end_index = indices[-1]
...     for i, frame in enumerate(container.decode(video=0)):
...         if i > end_index:
...             break
...         if i >= start_index and i in indices:
...             frames.append(frame)
...     return np.stack([x.to_ndarray(format="rgb24") for x in frames])


>>> def sample_frame_indices(clip_len, frame_sample_rate, seg_len):
...     '''
...     Sample a given number of frame indices from the video.
...     Args:
...         clip_len (`int`): Total number of frames to sample.
...         frame_sample_rate (`int`): Sample every n-th frame.
...         seg_len (`int`): Maximum allowed index of sample's last frame.
...     Returns:
...         indices (`List[int]`): List of sampled frame indices
...     '''
...     converted_len = int(clip_len * frame_sample_rate)
...     end_idx = np.random.randint(converted_len, seg_len)
...     start_idx = end_idx - converted_len
...     indices = np.linspace(start_idx, end_idx, num=clip_len)
...     indices = np.clip(indices, start_idx, end_idx - 1).astype(np.int64)
...     return indices


>>> # download an example video from the Hub and open it with PyAV
>>> file_path = hf_hub_download(
...     repo_id="nielsr/video-demo", filename="eating_spaghetti.mp4", repo_type="dataset"
... )
>>> container = av.open(file_path)

>>> # sample 8 frames from the video
>>> indices = sample_frame_indices(clip_len=8, frame_sample_rate=1, seg_len=container.streams.video[0].frames)
>>> video = read_video_pyav(container, indices)

>>> processor = AutoProcessor.from_pretrained("microsoft/xclip-base-patch32")
>>> model = XCLIPVisionModel.from_pretrained("microsoft/xclip-base-patch32")

>>> pixel_values = processor(videos=list(video), return_tensors="pt").pixel_values

>>> # fold the frame dimension into the batch dimension, since the vision model expects 4D inputs
>>> batch_size, num_frames, num_channels, height, width = pixel_values.shape
>>> pixel_values = pixel_values.reshape(-1, num_channels, height, width)

>>> outputs = model(pixel_values)
>>> last_hidden_state = outputs.last_hidden_state
```
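Because the frame dimension was folded into the batch dimension above, the outputs of `XCLIPVisionModel` are per frame. As an illustrative follow-up, not part of the example above, the pooled per-frame states can be reshaped back to `(batch_size, num_frames, hidden_size)` and aggregated, e.g. by a simple mean. Note that [XCLIPModel](/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPModel) itself aggregates frames with the `XCLIPMultiframeIntegrationTransformer` (MIT) followed by a projection, so the mean here is only a rough stand-in.

```
>>> # reshape the flat per-frame pooled outputs back to (batch_size, num_frames, hidden_size)
>>> frame_features = outputs.pooler_output.reshape(batch_size, num_frames, -1)

>>> # a naive video-level representation: average over the frame axis
>>> video_representation = frame_features.mean(dim=1)
```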
Japanese&quot;,&quot;id&quot;:&quot;model_doc/gptsan-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese&quot;},{&quot;title&quot;:&quot;GPTSw3&quot;,&quot;id&quot;:&quot;model_doc/gpt-sw3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt-sw3&quot;},{&quot;title&quot;:&quot;HerBERT&quot;,&quot;id&quot;:&quot;model_doc/herbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/herbert&quot;},{&quot;title&quot;:&quot;I-BERT&quot;,&quot;id&quot;:&quot;model_doc/ibert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ibert&quot;},{&quot;title&quot;:&quot;Jukebox&quot;,&quot;id&quot;:&quot;model_doc/jukebox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/jukebox&quot;},{&quot;title&quot;:&quot;LED&quot;,&quot;id&quot;:&quot;model_doc/led&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/led&quot;},{&quot;title&quot;:&quot;LLaMA&quot;,&quot;id&quot;:&quot;model_doc/llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama&quot;},{&quot;title&quot;:&quot;Llama2&quot;,&quot;id&quot;:&quot;model_doc/llama2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama2&quot;},{&quot;title&quot;:&quot;Longformer&quot;,&quot;id&quot;:&quot;model_doc/longformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longformer&quot;},{&quot;title&quot;:&quot;LongT5&quot;,&quot;id&quot;:&quot;model_doc/longt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longt5&quot;},{&quot;title&quot;:&quot;LUKE&quot;,&quot;id&quot;:&quot;model_doc/luke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/luke&quot;},{&quot;title&quot;:&quot;M2M100&quot;,&quot;id&quot;:&quot;model_doc/m2m_100&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/m2m_100&quot;},{&quot;title&quot;:&quot;MarianMT&quot;,&quot;id&quot;:&quot;model_doc/marian&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/marian&quot;},{&quot;title&quot;:&quot;MarkupLM&quot;,&quot;id&quot;:&quot;model_doc/markuplm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/markuplm&quot;},{&quot;title&quot;:&quot;MBart and 
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot;PLBart&quot;,&quot;id&quot;
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/
model_doc/wavlm&quot;},{&quot;title&quot;:&quot;Whisper&quot;,&quot;id&quot;:&quot;model_doc/whisper&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/whisper&quot;},{&quot;title&quot;:&quot;XLS-R&quot;,&quot;id&quot;:&quot;model_doc/xls_r&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xls_r&quot;},{&quot;title&quot;:&quot;XLSR-Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/xlsr_wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2&quot;}]},{&quot;title&quot;:&quot;Multimodal models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALIGN&quot;,&quot;id&quot;:&quot;model_doc/align&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/align&quot;},{&quot;title&quot;:&quot;AltCLIP&quot;,&quot;id&quot;:&quot;model_doc/altclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/altclip&quot;},{&quot;title&quot;:&quot;BLIP&quot;,&quot;id&quot;:&quot;model_doc/blip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip&quot;},{&quot;title&quot;:&quot;BLIP-2&quot;,&quot;id&quot;:&quot;model_doc/blip-2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip-2&quot;},{&quot;title&quot;:&quot;BridgeTower&quot;,&quot;id&quot;:&quot;model_doc/bridgetower&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bridgetower&quot;},{&quot;title&quot;:&quot;BROS&quot;,&quot;id&quot;:&quot;model_doc/bros&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bros&quot;},{&quot;title&quot;:&quot;Chinese-CLIP&quot;,&quot;id&quot;:&quot;model_doc/chinese_clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/chinese_clip&quot;},{&quot;title&quot;:&quot;CLIP&quot;,&quot;id&quot;:&quot;model_doc/clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clip&quot;},{&quot;title&quot;:&quot;CLIPSeg&quot;,&quot;id&quot;:&quot;model_doc/clipseg&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clipseg&quot;},{&quot;title&quot;:&quot;Data2Vec&quot;,&quot;id&quot;:&quot;model_doc/data2vec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/data2vec&quot;},{&quot;title&quot;:&quot;DePlot&quot;,&quot;id&quot;:&quot;model_doc/deplot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deplot&quot;},{&quot;title&quot;:&quot;Donut&quot;,&quot;id&quot;:&quot;model_doc/donut&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/donut&quot;},{&quot;title&quot;:&quot;FLAVA&quot;,&quot;id&quot;:&quot;model_doc/flava&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flava&quot;},{&quot;title&quot;:&quot;GIT&quot;,&quot;id&quot;:&quot;model_doc/git&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/git&quot;},{&quot;title&quot;:&quot;GroupViT&quot;,&quot;id&quot;:&quot;model_doc/groupvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/groupvit&quot;},{&quot;title&quot;:&quot;IDEFICS&quot;,&quot;id&quot;:&quot;model_doc/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/idefics&quot;},{&quot;title&quot;:&quot;InstructBLIP&quot;,&quot;id&quot;:&quot;model_doc/instructblip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/instructblip&quot;},{&quot;title&quot;:&quot;LayoutLM&quot;,&quot;id&quot;:&quot;model_doc/layoutlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlm&quot;},{&quot;title&quot;:&quot;LayoutLMV2&quot;,&quot;id
&quot;:&quot;model_doc/layoutlmv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv2&quot;},{&quot;title&quot;:&quot;LayoutLMV3&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv3&quot;},{&quot;title&quot;:&quot;LayoutXLM&quot;,&quot;id&quot;:&quot;model_doc/layoutxlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutxlm&quot;},{&quot;title&quot;:&quot;LiLT&quot;,&quot;id&quot;:&quot;model_doc/lilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lilt&quot;},{&quot;title&quot;:&quot;LXMERT&quot;,&quot;id&quot;:&quot;model_doc/lxmert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lxmert&quot;},{&quot;title&quot;:&quot;MatCha&quot;,&quot;id&quot;:&quot;model_doc/matcha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/matcha&quot;},{&quot;title&quot;:&quot;MGP-STR&quot;,&quot;id&quot;:&quot;model_doc/mgp-str&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mgp-str&quot;},{&quot;title&quot;:&quot;Nougat&quot;,&quot;id&quot;:&quot;model_doc/nougat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nougat&quot;},{&quot;title&quot;:&quot;OneFormer&quot;,&quot;id&quot;:&quot;model_doc/oneformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/oneformer&quot;},{&quot;title&quot;:&quot;OWL-ViT&quot;,&quot;id&quot;:&quot;model_doc/owlvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/owlvit&quot;},{&quot;title&quot;:&quot;Perceiver&quot;,&quot;id&quot;:&quot;model_doc/perceiver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/perceiver&quot;},{&quot;title&quot;:&quot;Pix2Struct&quot;,&quot;id&quot;:&quot;model_doc/pix2struct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pix2struct&quot;},{&quot;title&quot;:&quot;Segment Anything&quot;,&quot;id&quot;:&quot;model_doc/sam&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sam&quot;},{&quot;title&quot;:&quot;Speech Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/speech-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder&quot;},{&quot;title&quot;:&quot;TAPAS&quot;,&quot;id&quot;:&quot;model_doc/tapas&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapas&quot;},{&quot;title&quot;:&quot;TrOCR&quot;,&quot;id&quot;:&quot;model_doc/trocr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trocr&quot;},{&quot;title&quot;:&quot;TVLT&quot;,&quot;id&quot;:&quot;model_doc/tvlt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tvlt&quot;},{&quot;title&quot;:&quot;ViLT&quot;,&quot;id&quot;:&quot;model_doc/vilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vilt&quot;},{&quot;title&quot;:&quot;Vision Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/vision-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder&quot;},{&quot;title&quot;:&quot;Vision Text Dual 
Encoder&quot;,&quot;id&quot;:&quot;model_doc/vision-text-dual-encoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder&quot;},{&quot;title&quot;:&quot;VisualBERT&quot;,&quot;id&quot;:&quot;model_doc/visual_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/visual_bert&quot;},{&quot;title&quot;:&quot;X-CLIP&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/xclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xclip&quot;}]},{&quot;title&quot;:&quot;Reinforcement learning models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Decision Transformer&quot;,&quot;id&quot;:&quot;model_doc/decision_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/decision_transformer&quot;},{&quot;title&quot;:&quot;Trajectory Transformer&quot;,&quot;id&quot;:&quot;model_doc/trajectory_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer&quot;}]},{&quot;title&quot;:&quot;Time series models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Autoformer&quot;,&quot;id&quot;:&quot;model_doc/autoformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/autoformer&quot;},{&quot;title&quot;:&quot;Informer&quot;,&quot;id&quot;:&quot;model_doc/informer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/informer&quot;},{&quot;title&quot;:&quot;Time Series Transformer&quot;,&quot;id&quot;:&quot;model_doc/time_series_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/time_series_transformer&quot;}]},{&quot;title&quot;:&quot;Graph models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Graphormer&quot;,&quot;id&quot;:&quot;model_doc/graphormer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/graphormer&quot;}]}]},{&quot;title&quot;:&quot;Internal Helpers&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Custom Layers and Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/modeling_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/modeling_utils&quot;},{&quot;title&quot;:&quot;Utilities for pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/pipelines_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/pipelines_utils&quot;},{&quot;title&quot;:&quot;Utilities for Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/tokenization_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/tokenization_utils&quot;},{&quot;title&quot;:&quot;Utilities for Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/trainer_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/trainer_utils&quot;},{&quot;title&quot;:&quot;Utilities for Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/generation_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/generation_utils&quot;},{&quot;title&quot;:&quot;Utilities for Image Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/image_processing_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/image_processing_utils&quot;},{&quot;title&quot;:&quot;Utilities for Audio 
processing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/audio_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/audio_utils&quot;},{&quot;title&quot;:&quot;General Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/file_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/file_utils&quot;},{&quot;title&quot;:&quot;Utilities for Time Series&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/time_series_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/time_series_utils&quot;}]}]}],&quot;chapterId&quot;:&quot;model_doc/xclip&quot;,&quot;docType&quot;:&quot;docs&quot;,&quot;isLoggedIn&quot;:false,&quot;lang&quot;:&quot;en&quot;,&quot;langs&quot;:[&quot;de&quot;,&quot;en&quot;,&quot;es&quot;,&quot;fr&quot;,&quot;it&quot;,&quot;ko&quot;,&quot;pt&quot;,&quot;zh&quot;],&quot;library&quot;:&quot;transformers&quot;,&quot;theme&quot;:&quot;light&quot;,&quot;version&quot;:&quot;v4.34.0&quot;,&quot;versions&quot;:[{&quot;version&quot;:&quot;main&quot;},{&quot;version&quot;:&quot;v4.34.0&quot;},{&quot;version&quot;:&quot;v4.33.3&quot;},{&quot;version&quot;:&quot;v4.33.2&quot;},{&quot;version&quot;:&quot;v4.33.0&quot;},{&quot;version&quot;:&quot;v4.32.1&quot;},{&quot;version&quot;:&quot;v4.32.0&quot;},{&quot;version&quot;:&quot;v4.31.0&quot;},{&quot;version&quot;:&quot;v4.30.0&quot;},{&quot;version&quot;:&quot;v4.29.1&quot;},{&quot;version&quot;:&quot;v4.29.0&quot;},{&quot;version&quot;:&quot;v4.28.1&quot;},{&quot;version&quot;:&quot;v4.28.0&quot;},{&quot;version&quot;:&quot;v4.27.2&quot;},{&quot;version&quot;:&quot;v4.27.1&quot;},{&quot;version&quot;:&quot;v4.27.0&quot;},{&quot;version&quot;:&quot;v4.26.1&quot;},{&quot;version&quot;:&quot;v4.26.0&quot;},{&quot;version&quot;:&quot;v4.25.1&quot;},{&quot;version&quot;:&quot;v4.24.0&quot;},{&quot;version&quot;:&quot;v4.23.1&quot;},{&quot;version&quot;:&quot;v4.23.0&quot;},{&quot;version&quot;:&quot;v4.22.2&quot;},{&quot;version&quot;:&quot;v4.22.1&quot;},{&quot;version&quot;:&quot;v4.22.0&quot;},{&quot;version&quot;:&quot;v4.21.3&quot;},{&quot;version&quot;:&quot;v4.21.2&quot;},{&quot;version&quot;:&quot;v4.21.1&quot;},{&quot;version&quot;:&quot;v4.21.0&quot;},{&quot;version&quot;:&quot;v4.20.1&quot;},{&quot;version&quot;:&quot;v4.20.0&quot;},{&quot;version&quot;:&quot;v4.19.4&quot;},{&quot;version&quot;:&quot;v4.19.3&quot;},{&quot;version&quot;:&quot;v4.19.2&quot;},{&quot;version&quot;:&quot;v4.19.0&quot;},{&quot;version&quot;:&quot;v4.18.0&quot;},{&quot;version&quot;:&quot;v4.17.0&quot;},{&quot;version&quot;:&quot;v4.16.2&quot;},{&quot;version&quot;:&quot;v4.16.1&quot;},{&quot;version&quot;:&quot;v4.16.0&quot;},{&quot;version&quot;:&quot;v4.15.0&quot;},{&quot;version&quot;:&quot;v4.14.1&quot;},{&quot;version&quot;:&quot;v4.13.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.5&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.4&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.10.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1
0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.10.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.0.0&quot;},{&quot;version&quot;:&quot;doc-b
uilder-html&quot;}],&quot;title&quot;:&quot;X-CLIP&quot;}" data-target="SideMenu"> <div class="z-2 w-full flex-none lg:block lg:h-screen lg:w-[270px] 2xl:w-[300px] false"><div class="shadow-alternate flex h-16 w-full items-center rounded-b-xl border-b bg-white text-lg leading-tight lg:hidden"> <div class="flex flex-1 cursor-pointer flex-col justify-center self-stretch pl-6"><p class="text-sm text-gray-400 first-letter:capitalize">Transformers documentation </p> <div class="flex items-center"><p class="font-semibold">X-CLIP</p> <svg class="text-xl false" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></div></div> <button class="hover:shadow-alternate group ml-auto mr-6 inline-flex flex-none cursor-pointer rounded-xl border p-2"><svg class="text-gray-500 group-hover:text-gray-700" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg></button></div> <div class="hidden h-32 flex-col justify-between border-r border-b bg-white bg-gradient-to-r p-4 lg:flex from-orange-50 to-white dark:from-gray-900 dark:to-gray-950"><div class="relative "> <button class=" " type="button"> <h1 class="flex items-center text-lg font-bold leading-tight first-letter:capitalize"><div class="mr-1.5 h-1.5 w-1.5 rounded-full bg-orange-500 flex-none"></div> Transformers <span><svg class="opacity-70 " xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></span></h1> </button> </div> <button class="shadow-alternate flex w-full items-center rounded-full border bg-white px-2 py-1 text-left text-sm text-gray-400 ring-indigo-200 hover:bg-indigo-50 hover:ring-2 dark:border-gray-700 dark:ring-yellow-600 dark:hover:bg-gray-900 dark:hover:text-yellow-500"><svg class="flex-none mr-1.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg> <div>Search documentation</div> </button> <div class="flex items-center"> <select class="form-input mr-1 !mt-0 !w-20 rounded !border border-gray-200 p-1 text-xs uppercase dark:!text-gray-400"><option value="0">main</option><option value="1" selected="">v4.34.0</option><option value="2">v4.33.3</option><option value="3">v4.32.1</option><option value="4">v4.31.0</option><option value="5">v4.30.0</option><option value="6">v4.29.1</option><option value="7">v4.28.1</option><option value="8">v4.27.2</option><option value="9">v4.26.1</option><option value="10">v4.25.1</option><option value="11">v4.24.0</option><option value="12">v4.23.1</option><option value="13">v4.22.2</option><option value="14">v4.21.3</option><option value="15">v4.20.1</option><option 
value="16">v4.19.4</option><option value="17">v4.18.0</option><option value="18">v4.17.0</option><option value="19">v4.16.2</option><option value="20">v4.15.0</option><option value="21">v4.14.1</option><option value="22">v4.13.0</option><option value="23">v4.12.5</option><option value="24">v4.11.3</option><option value="25">v4.10.1</option><option value="26">v4.9.2</option><option value="27">v4.8.2</option><option value="28">v4.7.0</option><option value="29">v4.6.0</option><option value="30">v4.5.1</option><option value="31">v4.4.2</option><option value="32">v4.3.3</option><option value="33">v4.2.2</option><option value="34">v4.1.1</option><option value="35">v4.0.1</option><option value="36">v3.5.1</option><option value="37">v3.4.0</option><option value="38">v3.3.1</option><option value="39">v3.2.0</option><option value="40">v3.1.0</option><option value="41">v3.0.2</option><option value="42">v2.11.0</option><option value="43">v2.10.0</option><option value="44">v2.9.1</option><option value="45">v2.8.0</option><option value="46">v2.7.0</option><option value="47">v2.6.0</option><option value="48">v2.5.1</option><option value="49">v2.4.1</option><option value="50">v2.3.0</option><option value="51">v2.2.2</option><option value="52">v2.1.1</option><option value="53">v2.0.0</option><option value="54">v1.2.0</option><option value="55">v1.1.0</option><option value="56">v1.0.0</option><option value="57">doc-builder-html</option></select> <select class="form-input mr-1 rounded border-gray-200 p-1 text-xs dark:!text-gray-400 !w-12 !mt-0 !border"><option value="de">DE</option><option value="en" selected="">EN</option><option value="es">ES</option><option value="fr">FR</option><option value="it">IT</option><option value="ko">KO</option><option value="pt">PT</option><option value="zh">ZH</option></select> <div class="relative inline-block"> <button class="rounded-full border border-gray-100 py-1 pl-2 pr-0.5 flex items-center text-sm text-gray-500 bg-white hover:bg-yellow-50 hover:border-yellow-200 dark:hover:bg-gray-800 dark:hover:border-gray-950 " type="button"> <svg class="mr-1.5 text-yellow-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24" fill="currentColor"><path d="M6.05 4.14l-.39-.39a.993.993 0 0 0-1.4 0l-.01.01a.984.984 0 0 0 0 1.4l.39.39c.39.39 1.01.39 1.4 0l.01-.01a.984.984 0 0 0 0-1.4zM3.01 10.5H1.99c-.55 0-.99.44-.99.99v.01c0 .55.44.99.99.99H3c.56.01 1-.43 1-.98v-.01c0-.56-.44-1-.99-1zm9-9.95H12c-.56 0-1 .44-1 .99v.96c0 .55.44.99.99.99H12c.56.01 1-.43 1-.98v-.97c0-.55-.44-.99-.99-.99zm7.74 3.21c-.39-.39-1.02-.39-1.41-.01l-.39.39a.984.984 0 0 0 0 1.4l.01.01c.39.39 1.02.39 1.4 0l.39-.39a.984.984 0 0 0 0-1.4zm-1.81 15.1l.39.39a.996.996 0 1 0 1.41-1.41l-.39-.39a.993.993 0 0 0-1.4 0c-.4.4-.4 1.02-.01 1.41zM20 11.49v.01c0 .55.44.99.99.99H22c.55 0 .99-.44.99-.99v-.01c0-.55-.44-.99-.99-.99h-1.01c-.55 0-.99.44-.99.99zM12 5.5c-3.31 0-6 2.69-6 6s2.69 6 6 6s6-2.69 6-6s-2.69-6-6-6zm-.01 16.95H12c.55 0 .99-.44.99-.99v-.96c0-.55-.44-.99-.99-.99h-.01c-.55 0-.99.44-.99.99v.96c0 .55.44.99.99.99zm-7.74-3.21c.39.39 1.02.39 1.41 0l.39-.39a.993.993 0 0 0 0-1.4l-.01-.01a.996.996 0 0 0-1.41 0l-.39.39c-.38.4-.38 1.02.01 1.41z"></path></svg> </button> </div> <a href="https://github.com/huggingface/transformers" class="group ml-auto text-xs text-gray-500 hover:text-gray-700 hover:underline dark:hover:text-gray-300"><svg class="inline-block text-gray-500 
href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/index.3b203c72.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/paths.e7de6301.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/entry/app.879d9b87.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/index.78c82d43.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/nodes/0.242aaaff.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/each.e59479a4.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/nodes/275.f37ba5b8.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/Tip.87d55b76.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/Docstring.4e7352e2.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/globals.7f7f1b26.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/IconCopyLink.bedaa44d.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/CodeBlock.73e038be.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/ExampleCodeBlock.872b014d.js"><!-- HEAD_svelte-1phssyn_START --><meta name="hf:doc:metadata" content="{&quot;local&quot;:&quot;xclip&quot;,&quot;sections&quot;:[{&quot;local&quot;:&quot;overview&quot;,&quot;title&quot;:&quot;Overview&quot;},{&quot;local&quot;:&quot;resources&quot;,&quot;title&quot;:&quot;Resources&quot;},{&quot;local&quot;:&quot;transformers.XCLIPProcessor&quot;,&quot;title&quot;:&quot;XCLIPProcessor&quot;},{&quot;local&quot;:&quot;transformers.XCLIPConfig&quot;,&quot;title&quot;:&quot;XCLIPConfig&quot;},{&quot;local&quot;:&quot;transformers.XCLIPTextConfig&quot;,&quot;title&quot;:&quot;XCLIPTextConfig&quot;},{&quot;local&quot;:&quot;transformers.XCLIPVisionConfig&quot;,&quot;title&quot;:&quot;XCLIPVisionConfig&quot;},{&quot;local&quot;:&quot;transformers.XCLIPModel&quot;,&quot;title&quot;:&quot;XCLIPModel&quot;},{&quot;local&quot;:&quot;transformers.XCLIPTextModel&quot;,&quot;title&quot;:&quot;XCLIPTextModel&quot;},{&quot;local&quot;:&quot;transformers.XCLIPVisionModel&quot;,&quot;title&quot;:&quot;XCLIPVisionModel&quot;}],&quot;title&quot;:&quot;X-CLIP&quot;}"><!-- HEAD_svelte-1phssyn_END --> <p></p> <h1 class="relative group"><a id="xclip" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#xclip"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-iutr3e">X-CLIP</span></h1> <h2 class="relative group"><a id="overview" 
class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#overview"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1jsw1pg">Overview</span></h2> <p data-svelte-h="svelte-6enh90">The X-CLIP model was proposed in <a href="https://arxiv.org/abs/2208.02816" rel="nofollow">Expanding Language-Image Pretrained Models for General Video Recognition</a> by Bolin Ni, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, Haibin Ling. X-CLIP is a minimal extension of <a href="clip">CLIP</a> for video. The model consists of a text encoder, a cross-frame vision encoder, a multi-frame integration Transformer, and a video-specific prompt generator.</p> <p data-svelte-h="svelte-vfdo9a">The abstract from the paper is the following:</p> <p data-svelte-h="svelte-1lxzml7"><em>Contrastive language-image pretraining has shown great success in learning visual-textual joint representation from web-scale data, demonstrating remarkable “zero-shot” generalization ability for various image tasks. However, how to effectively expand such new language-image pretraining methods to video domains is still an open problem. In this work, we present a simple yet effective approach that adapts the pretrained language-image models to video recognition directly, instead of pretraining a new model from scratch. More concretely, to capture the long-range dependencies of frames along the temporal dimension, we propose a cross-frame attention mechanism that explicitly exchanges information across frames. Such module is lightweight and can be plugged into pretrained language-image models seamlessly. Moreover, we propose a video-specific prompting scheme, which leverages video content information for generating discriminative textual prompts. Extensive experiments demonstrate that our approach is effective and can be generalized to different video recognition scenarios. In particular, under fully-supervised settings, our approach achieves a top-1 accuracy of 87.1% on Kinectics-400, while using 12 times fewer FLOPs compared with Swin-L and ViViT-H. In zero-shot experiments, our approach surpasses the current state-of-the-art methods by +7.6% and +14.9% in terms of top-1 accuracy under two popular protocols. 
In few-shot scenarios, our approach outperforms previous best methods by +32.1% and +23.1% when the labeled data is extremely limited._

Tips:

- Usage of X-CLIP is identical to [CLIP](clip); see the usage sketch under XCLIPProcessor below.

![X-CLIP architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/xclip_architecture.png)

X-CLIP architecture. Taken from the [original paper](https://arxiv.org/abs/2208.02816).

This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/microsoft/VideoX/tree/master/X-CLIP).

## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with X-CLIP.

- Demo notebooks for X-CLIP can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/X-CLIP).

If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it!
## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with X-CLIP.

- Demo notebooks for X-CLIP can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/X-CLIP).

If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

## XCLIPProcessor
class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/x_clip/processing_x_clip.py#L25" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">image_processor<span class="opacity-60"> = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">tokenizer<span class="opacity-60"> = None</span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XCLIPProcessor.image_processor" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XCLIPProcessor.image_processor"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>image_processor</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/videomae#transformers.VideoMAEImageProcessor">VideoMAEImageProcessor</a>) — The image processor is a required input.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XCLIPProcessor.tokenizer" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XCLIPProcessor.tokenizer"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 
84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>tokenizer</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/clip#transformers.CLIPTokenizerFast">CLIPTokenizerFast</a>) — The tokenizer is a required input.<!-- HTML_TAG_END --> </span></span> </li></ul> </div></div> <p data-svelte-h="svelte-75agku">Constructs an X-CLIP processor which wraps a VideoMAE image processor and a CLIP tokenizer into a single processor.</p> <p data-svelte-h="svelte-a7ih4q"><a href="/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPProcessor">XCLIPProcessor</a> offers all the functionalities of <a href="/docs/transformers/v4.34.0/en/model_doc/videomae#transformers.VideoMAEImageProcessor">VideoMAEImageProcessor</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/clip#transformers.CLIPTokenizerFast">CLIPTokenizerFast</a>. See the <code>__call__()</code> and <a href="/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPProcessor.decode">decode()</a> for more information.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XCLIPProcessor.batch_decode"><!-- HTML_TAG_START --><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>batch_decode</span></h4><!-- HTML_TAG_END --> <a id="transformers.XCLIPProcessor.batch_decode" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XCLIPProcessor.batch_decode"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 
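As a rough sketch of how the two wrapped components fit together (the checkpoint names below are only illustrative; in practice both parts are usually loaded at once with `XCLIPProcessor.from_pretrained`):

```python
from transformers import CLIPTokenizerFast, VideoMAEImageProcessor, XCLIPProcessor

# Assemble a processor from its two required components (illustrative checkpoints).
image_processor = VideoMAEImageProcessor.from_pretrained("MCG-NJU/videomae-base")
tokenizer = CLIPTokenizerFast.from_pretrained("openai/clip-vit-base-patch32")
processor = XCLIPProcessor(image_processor=image_processor, tokenizer=tokenizer)

# Text-only calls are forwarded to the tokenizer ...
encoding = processor(text=["playing guitar", "dancing"], return_tensors="pt", padding=True)

# ... and token ids can be mapped back to strings via the forwarded decode methods.
print(processor.batch_decode(encoding["input_ids"], skip_special_tokens=True))
```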
#### batch_decode

`( *args, **kwargs )`

This method forwards all its arguments to CLIPTokenizerFast's `batch_decode()`. Please refer to the docstring of this method for more information.
href="#transformers.XCLIPProcessor.decode"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/x_clip/processing_x_clip.py#L122" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">*args<span class="opacity-60"></span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> </div></div> <p data-svelte-h="svelte-1qa4qlq">This method forwards all its arguments to CLIPTokenizerFast’s <a href="/docs/transformers/v4.34.0/en/model_doc/speecht5#transformers.SpeechT5Tokenizer.decode">decode()</a>. 
Please refer to the docstring of this method for more information.</p></div></div> <h2 class="relative group"><a id="transformers.XCLIPConfig" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XCLIPConfig"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-156qv1z">XCLIPConfig</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XCLIPConfig"><!-- HTML_TAG_START --><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">XCLIPConfig</span></span></h3><!-- HTML_TAG_END --> <a id="transformers.XCLIPConfig" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XCLIPConfig"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 
`( text_config = None, vision_config = None, projection_dim = 512, prompt_layers = 2, prompt_alpha = 0.1, prompt_hidden_act = 'quick_gelu', prompt_num_attention_heads = 8, prompt_attention_dropout = 0.0, prompt_projection_dropout = 0.0, logit_scale_init_value = 2.6592, **kwargs )`

**Parameters**

- **text_config** (`dict`, *optional*) — Dictionary of configuration options used to initialize [XCLIPTextConfig](/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPTextConfig).
- **vision_config** (`dict`, *optional*) — Dictionary of configuration options used to initialize [XCLIPVisionConfig](/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPVisionConfig).
- **projection_dim** (`int`, *optional*, defaults to 512) — Dimensionality of text and vision projection layers.
- **prompt_layers** (`int`, *optional*, defaults to 2) — Number of layers in the video specific prompt generator.
- **prompt_alpha** (`float`, *optional*, defaults to 0.1) — Alpha value to use in the video specific prompt generator.
- **prompt_hidden_act** (`str` or `function`, *optional*, defaults to `"quick_gelu"`) — The non-linear activation function (function or string) in the video specific prompt generator. If string, `"gelu"`, `"relu"`, `"selu"`, `"gelu_new"` and `"quick_gelu"` are supported.
- **prompt_num_attention_heads** (`int`, *optional*, defaults to 8) — Number of attention heads in the cross-attention of the video specific prompt generator.
- **prompt_attention_dropout** (`float`, *optional*, defaults to 0.0) — The dropout probability for the attention layers in the video specific prompt generator.
- **prompt_projection_dropout** (`float`, *optional*, defaults to 0.0) — The dropout probability for the projection layers in the video specific prompt generator.
- **logit_scale_init_value** (`float`, *optional*, defaults to 2.6592) — The initial value of the *logit_scale* parameter. Default is used as per the original XCLIP implementation.
- **kwargs** (*optional*) — Dictionary of keyword arguments.

[XCLIPConfig](/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPConfig) is the configuration class to store the configuration of an [XCLIPModel](/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPModel). It is used to instantiate an X-CLIP model according to the specified arguments, defining the text model and vision model configs. Instantiating a configuration with the defaults will yield a similar configuration to that of the X-CLIP [microsoft/xclip-base-patch32](https://huggingface.co/microsoft/xclip-base-patch32) architecture.

Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs.
Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information.
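A minimal sketch of the typical configuration workflow, instantiating a model with random weights from the default configuration (which, as noted above, corresponds to the microsoft/xclip-base-patch32 layout):

```python
from transformers import XCLIPConfig, XCLIPModel

# A configuration with default values, i.e. the microsoft/xclip-base-patch32 architecture.
configuration = XCLIPConfig()

# A model with randomly initialized weights built from that configuration.
model = XCLIPModel(configuration)

# The configuration can be read back from the model.
configuration = model.config
```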
cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">vision_config<span class="opacity-60">: XCLIPVisionConfig</span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><!-- HTML_TAG_START --><span><a href="/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPConfig">XCLIPConfig</a></span><!-- HTML_TAG_END --></span></p> <div class="!mb-10 relative docstring-details "> <div id="transformers.XCLIPConfig.from_text_vision_configs.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <!-- HTML_TAG_START --> <p><a href="/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPConfig">XCLIPConfig</a></p> <!-- HTML_TAG_END --> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"><!-- HTML_TAG_START --> </p><p>An instance of a configuration object</p> <!-- HTML_TAG_END --><p></p> </div></div> <p data-svelte-h="svelte-602v3q">Instantiate a <a href="/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPConfig">XCLIPConfig</a> (or a derived class) from xclip text model configuration and xclip vision model configuration.</p></div></div> <h2 class="relative group"><a id="transformers.XCLIPTextConfig" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XCLIPTextConfig"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-kfgxck">XCLIPTextConfig</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XCLIPTextConfig"><!-- HTML_TAG_START --><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 
## XCLIPTextConfig

### class transformers.XCLIPTextConfig

`( vocab_size = 49408, hidden_size = 512, intermediate_size = 2048, num_hidden_layers = 12, num_attention_heads = 8, max_position_embeddings = 77, hidden_act = 'quick_gelu', layer_norm_eps = 1e-05, attention_dropout = 0.0, initializer_range = 0.02, initializer_factor = 1.0, pad_token_id = 1, bos_token_id = 0, eos_token_id = 2, **kwargs )`

**Parameters**

- **vocab_size** (`int`, *optional*, defaults to 49408) — Vocabulary size of the X-CLIP text model. Defines the number of different tokens that can be represented by the `input_ids` passed when calling [XCLIPModel](/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPModel).
- **hidden_size** (`int`, *optional*, defaults to 512) — Dimensionality of the encoder layers and the pooler layer.
- **intermediate_size** (`int`, *optional*, defaults to 2048) — Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
- **num_hidden_layers** (`int`, *optional*, defaults to 12) — Number of hidden layers in the Transformer encoder.
- **num_attention_heads** (`int`, *optional*, defaults to 8) — Number of attention heads for each attention layer in the Transformer encoder.
- **max_position_embeddings** (`int`, *optional*, defaults to 77) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
- **hidden_act** (`str` or `function`, *optional*, defaults to `"quick_gelu"`) — The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"`, `"gelu_new"` and `"quick_gelu"` are supported.
- **layer_norm_eps** (`float`, *optional*, defaults to 1e-5) — The epsilon used by the layer normalization layers.
aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>attention_dropout</strong> (<code>float</code>, <em>optional</em>, defaults to 0.0) — The dropout ratio for the attention probabilities.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XCLIPTextConfig.initializer_range" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XCLIPTextConfig.initializer_range"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>initializer_range</strong> (<code>float</code>, <em>optional</em>, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XCLIPTextConfig.initializer_factor" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XCLIPTextConfig.initializer_factor"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>initializer_factor</strong> (`float“, <em>optional</em>, defaults to 
1) — A factor for initializing all weight matrices (should be kept to 1, used internally for initialization testing).<!-- HTML_TAG_END --> </span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1pc0nwp">This is the configuration class to store the configuration of a <a href="/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPModel">XCLIPModel</a>. It is used to instantiate an X-CLIP model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the X-CLIP <a href="https://huggingface.co/microsoft/xclip-base-patch32" rel="nofollow">microsoft/xclip-base-patch32</a> architecture.</p> <p data-svelte-h="svelte-10kqkkl">Configuration objects inherit from <a href="/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig">PretrainedConfig</a> and can be used to control the model outputs. Read the documentation from <a href="/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig">PretrainedConfig</a> for more information.</p> <div class="relative group rounded-md"><a id="transformers.XCLIPTextConfig.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XCLIPTextConfig.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span 
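The configuration can also be created with non-default values for any of the parameters documented above. A minimal sketch (the argument values here are only illustrative, not recommendations):

```python
>>> from transformers import XCLIPTextConfig, XCLIPTextModel

>>> # Override a few documented defaults; anything not passed keeps its default value
>>> custom_config = XCLIPTextConfig(hidden_size=512, num_hidden_layers=6, num_attention_heads=8)

>>> # A model built from this configuration has randomly initialized weights
>>> model = XCLIPTextModel(custom_config)
```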
## XCLIPVisionConfig

**class transformers.XCLIPVisionConfig** [source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/x_clip/configuration_x_clip.py#L137)

`( hidden_size = 768, intermediate_size = 3072, num_hidden_layers = 12, num_attention_heads = 12, mit_hidden_size = 512, mit_intermediate_size = 2048, mit_num_hidden_layers = 1, mit_num_attention_heads = 8, num_channels = 3, image_size = 224, patch_size = 32, num_frames = 8, hidden_act = 'quick_gelu', layer_norm_eps = 1e-05, attention_dropout = 0.0, initializer_range = 0.02, initializer_factor = 1.0, drop_path_rate = 0.0, **kwargs )`

Parameters:

- **hidden_size** (`int`, *optional*, defaults to 768) — Dimensionality of the encoder layers and the pooler layer.
- **intermediate_size** (`int`, *optional*, defaults to 3072) — Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
- **num_hidden_layers** (`int`, *optional*, defaults to 12) — Number of hidden layers in the Transformer encoder.
- **num_attention_heads** (`int`, *optional*, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder.
- **mit_hidden_size** (`int`, *optional*, defaults to 512) — Dimensionality of the encoder layers of the Multiframe Integration Transformer (MIT).
- **mit_intermediate_size** (`int`, *optional*, defaults to 2048) — Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Multiframe Integration Transformer (MIT).
- **mit_num_hidden_layers** (`int`, *optional*, defaults to 1) — Number of hidden layers in the Multiframe Integration Transformer (MIT).
- **mit_num_attention_heads** (`int`, *optional*, defaults to 8) — Number of attention heads for each attention layer in the Multiframe Integration Transformer (MIT).
- **image_size** (`int`, *optional*, defaults to 224) — The size (resolution) of each image.
- **patch_size** (`int`, *optional*, defaults to 32) — The size (resolution) of each patch.
- **hidden_act** (`str` or `function`, *optional*, defaults to `"quick_gelu"`) — The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"`, `"gelu_new"` and `"quick_gelu"` are supported.
- **layer_norm_eps** (`float`, *optional*, defaults to 1e-5) — The epsilon used by the layer normalization layers.
- **attention_dropout** (`float`, *optional*, defaults to 0.0) — The dropout ratio for the attention probabilities.
- **initializer_range** (`float`, *optional*, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- **initializer_factor** (`float`, *optional*, defaults to 1) — A factor for initializing all weight matrices (should be kept to 1, used internally for initialization testing).
- **drop_path_rate** (`float`, *optional*, defaults to 0.0) — Stochastic depth rate.
This is the configuration class to store the configuration of a [XCLIPModel](/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPModel). It is used to instantiate an X-CLIP model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the X-CLIP [microsoft/xclip-base-patch32](https://huggingface.co/microsoft/xclip-base-patch32) architecture.

Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information.

Example:

```python
>>> from transformers import XCLIPVisionModel, XCLIPVisionConfig

>>> # Initializing an XCLIPVisionConfig with the microsoft/xclip-base-patch32 style configuration
>>> configuration = XCLIPVisionConfig()

>>> # Initializing an XCLIPVisionModel from the microsoft/xclip-base-patch32 style configuration
>>> model = XCLIPVisionModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
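In practice, the text and vision configurations are combined into a single `XCLIPConfig` that drives the full model. The sketch below assumes the `XCLIPConfig.from_text_vision_configs` helper (the composition method used by the CLIP family of configs) is available in your installed version:

```python
>>> from transformers import XCLIPConfig, XCLIPTextConfig, XCLIPVisionConfig

>>> # Build the two sub-configurations, overriding a few documented defaults
>>> text_config = XCLIPTextConfig(hidden_size=512, num_attention_heads=8)
>>> vision_config = XCLIPVisionConfig(image_size=224, patch_size=32, mit_num_hidden_layers=1)

>>> # Compose them into a full X-CLIP configuration (assumed helper, mirroring CLIP)
>>> config = XCLIPConfig.from_text_vision_configs(text_config, vision_config)
```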
## XCLIPModel

**class transformers.XCLIPModel** [source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/x_clip/modeling_x_clip.py#L1292)

`( config: XCLIPConfig )`

Parameters:

- **config** ([XCLIPConfig](/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
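As the parameter description above notes, instantiating the model from a configuration does not load pretrained weights. A minimal sketch of both paths, using the checkpoint referenced in this section:

```python
>>> from transformers import XCLIPConfig, XCLIPModel

>>> # Instantiating from a configuration yields randomly initialized weights (no download)
>>> configuration = XCLIPConfig()
>>> model = XCLIPModel(configuration)

>>> # Use from_pretrained to load the pretrained weights from the Hub instead
>>> model = XCLIPModel.from_pretrained("microsoft/xclip-base-patch32")
```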
**forward** [source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/x_clip/modeling_x_clip.py#L1501)

`( input_ids: typing.Optional[torch.LongTensor] = None, pixel_values: typing.Optional[torch.FloatTensor] = None, attention_mask: typing.Optional[torch.Tensor] = None, position_ids: typing.Optional[torch.LongTensor] = None, return_loss: typing.Optional[bool] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None )` → `transformers.models.x_clip.modeling_x_clip.XCLIPOutput` or `tuple(torch.FloatTensor)`

Parameters:

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [`PreTrainedTokenizer.__call__()`](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details. [What are input IDs?](../glossary#input-ids)
- **attention_mask** (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask)
- **position_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids)
- **pixel_values** (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using [AutoImageProcessor](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoImageProcessor).
See <a href="/docs/transformers/v4.34.0/en/model_doc/deit#transformers.DeiTFeatureExtractor.__call__">CLIPImageProcessor.<strong>call</strong>()</a> for details.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XCLIPModel.forward.return_loss" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XCLIPModel.forward.return_loss"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>return_loss</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the contrastive loss.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XCLIPModel.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XCLIPModel.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. 
See <code>attentions</code> under returned tensors for more detail.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XCLIPModel.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XCLIPModel.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. See <code>hidden_states</code> under returned tensors for more detail.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XCLIPModel.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XCLIPModel.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.<!-- HTML_TAG_END --> </span></span> </li></ul> <div id="transformers.XCLIPModel.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <!-- HTML_TAG_START --> <p><code>transformers.models.x_clip.modeling_x_clip.XCLIPOutput</code> or <code>tuple(torch.FloatTensor)</code></p> <!-- HTML_TAG_END --> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"><!-- HTML_TAG_START --> </p><p>A <code>transformers.models.x_clip.modeling_x_clip.XCLIPOutput</code> or a 
tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<code>&lt;class 'transformers.models.x_clip.configuration_x_clip.XCLIPConfig'&gt;</code>) and inputs.</p> <ul> <li><strong>loss</strong> (<code>torch.FloatTensor</code> of shape <code>(1,)</code>, <em>optional</em>, returned when <code>return_loss</code> is <code>True</code>) — Contrastive loss for video-text similarity.</li> <li><strong>logits_per_video</strong> (<code>torch.FloatTensor</code> of shape <code>(video_batch_size, text_batch_size)</code>) — The scaled dot product scores between <code>video_embeds</code> and <code>text_embeds</code>. This represents the video-text similarity scores.</li> <li><strong>logits_per_text</strong> (<code>torch.FloatTensor</code> of shape <code>(text_batch_size, video_batch_size)</code>) — The scaled dot product scores between <code>text_embeds</code> and <code>video_embeds</code>. This represents the text-video similarity scores.</li> <li><strong>text_embeds(<code>torch.FloatTensor</code></strong> of shape <code>(batch_size, output_dim</code>) — The text embeddings obtained by applying the projection layer to the pooled output of <a href="/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPTextModel">XCLIPTextModel</a>.</li> <li><strong>video_embeds(<code>torch.FloatTensor</code></strong> of shape <code>(batch_size, output_dim</code>) — The video embeddings obtained by applying the projection layer to the pooled output of <a href="/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPVisionModel">XCLIPVisionModel</a>.</li> <li><strong>text_model_output</strong> (<code>BaseModelOutputWithPooling</code>) — The output of the <a href="/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPTextModel">XCLIPTextModel</a>.</li> <li><strong>vision_model_output</strong> (<code>BaseModelOutputWithPooling</code>) — The output of the <a href="/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPVisionModel">XCLIPVisionModel</a>.</li> <li><strong>mit_output</strong> (<code>BaseModelOutputWithPooling</code>) — The output of <code>XCLIPMultiframeIntegrationTransformer</code> (MIT for short).</li> </ul> <!-- HTML_TAG_END --><p></p> </div></div> <p data-svelte-h="svelte-1ajn5jp">The <a href="/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPModel">XCLIPModel</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.XCLIPModel.forward.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XCLIPModel.forward.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" 
preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-kvfsh7">Examples:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> av <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoProcessor, AutoModel <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> huggingface_hub <span class="hljs-keyword">import</span> hf_hub_download <span class="hljs-meta">&gt;&gt;&gt; </span>np.random.seed(<span class="hljs-number">0</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">def</span> <span class="hljs-title function_">read_video_pyav</span>(<span class="hljs-params">container, indices</span>): <span class="hljs-meta">... </span> <span class="hljs-string">''' <span class="hljs-meta">... </span> Decode the video with PyAV decoder. <span class="hljs-meta">... </span> Args: <span class="hljs-meta">... </span> container (`av.container.input.InputContainer`): PyAV container. <span class="hljs-meta">... </span> indices (`List[int]`): List of frame indices to decode. <span class="hljs-meta">... </span> Returns: <span class="hljs-meta">... </span> result (np.ndarray): np array of decoded frames of shape (num_frames, height, width, 3). <span class="hljs-meta">... </span> '''</span> <span class="hljs-meta">... </span> frames = [] <span class="hljs-meta">... </span> container.seek(<span class="hljs-number">0</span>) <span class="hljs-meta">... 
</span> start_index = indices[<span class="hljs-number">0</span>] <span class="hljs-meta">... </span> end_index = indices[-<span class="hljs-number">1</span>] <span class="hljs-meta">... </span> <span class="hljs-keyword">for</span> i, frame <span class="hljs-keyword">in</span> <span class="hljs-built_in">enumerate</span>(container.decode(video=<span class="hljs-number">0</span>)): <span class="hljs-meta">... </span> <span class="hljs-keyword">if</span> i &gt; end_index: <span class="hljs-meta">... </span> <span class="hljs-keyword">break</span> <span class="hljs-meta">... </span> <span class="hljs-keyword">if</span> i &gt;= start_index <span class="hljs-keyword">and</span> i <span class="hljs-keyword">in</span> indices: <span class="hljs-meta">... </span> frames.append(frame) <span class="hljs-meta">... </span> <span class="hljs-keyword">return</span> np.stack([x.to_ndarray(<span class="hljs-built_in">format</span>=<span class="hljs-string">"rgb24"</span>) <span class="hljs-keyword">for</span> x <span class="hljs-keyword">in</span> frames]) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">def</span> <span class="hljs-title function_">sample_frame_indices</span>(<span class="hljs-params">clip_len, frame_sample_rate, seg_len</span>): <span class="hljs-meta">... </span> <span class="hljs-string">''' <span class="hljs-meta">... </span> Sample a given number of frame indices from the video. <span class="hljs-meta">... </span> Args: <span class="hljs-meta">... </span> clip_len (`int`): Total number of frames to sample. <span class="hljs-meta">... </span> frame_sample_rate (`int`): Sample every n-th frame. <span class="hljs-meta">... </span> seg_len (`int`): Maximum allowed index of sample's last frame. <span class="hljs-meta">... </span> Returns: <span class="hljs-meta">... </span> indices (`List[int]`): List of sampled frame indices <span class="hljs-meta">... </span> '''</span> <span class="hljs-meta">... </span> converted_len = <span class="hljs-built_in">int</span>(clip_len * frame_sample_rate) <span class="hljs-meta">... </span> end_idx = np.random.randint(converted_len, seg_len) <span class="hljs-meta">... </span> start_idx = end_idx - converted_len <span class="hljs-meta">... </span> indices = np.linspace(start_idx, end_idx, num=clip_len) <span class="hljs-meta">... </span> indices = np.clip(indices, start_idx, end_idx - <span class="hljs-number">1</span>).astype(np.int64) <span class="hljs-meta">... </span> <span class="hljs-keyword">return</span> indices <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># video clip consists of 300 frames (10 seconds at 30 FPS)</span> <span class="hljs-meta">&gt;&gt;&gt; </span>file_path = hf_hub_download( <span class="hljs-meta">... </span> repo_id=<span class="hljs-string">"nielsr/video-demo"</span>, filename=<span class="hljs-string">"eating_spaghetti.mp4"</span>, repo_type=<span class="hljs-string">"dataset"</span> <span class="hljs-meta">... 
</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>container = av.<span class="hljs-built_in">open</span>(file_path) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># sample 8 frames</span> <span class="hljs-meta">&gt;&gt;&gt; </span>indices = sample_frame_indices(clip_len=<span class="hljs-number">8</span>, frame_sample_rate=<span class="hljs-number">1</span>, seg_len=container.streams.video[<span class="hljs-number">0</span>].frames) <span class="hljs-meta">&gt;&gt;&gt; </span>video = read_video_pyav(container, indices) <span class="hljs-meta">&gt;&gt;&gt; </span>processor = AutoProcessor.from_pretrained(<span class="hljs-string">"microsoft/xclip-base-patch32"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = AutoModel.from_pretrained(<span class="hljs-string">"microsoft/xclip-base-patch32"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = processor( <span class="hljs-meta">... </span> text=[<span class="hljs-string">"playing sports"</span>, <span class="hljs-string">"eating spaghetti"</span>, <span class="hljs-string">"go shopping"</span>], <span class="hljs-meta">... </span> videos=<span class="hljs-built_in">list</span>(video), <span class="hljs-meta">... </span> return_tensors=<span class="hljs-string">"pt"</span>, <span class="hljs-meta">... </span> padding=<span class="hljs-literal">True</span>, <span class="hljs-meta">... </span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># forward pass</span> <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">with</span> torch.no_grad(): <span class="hljs-meta">... </span> outputs = model(**inputs) <span class="hljs-meta">&gt;&gt;&gt; </span>logits_per_video = outputs.logits_per_video <span class="hljs-comment"># this is the video-text similarity score</span> <span class="hljs-meta">&gt;&gt;&gt; </span>probs = logits_per_video.softmax(dim=<span class="hljs-number">1</span>) <span class="hljs-comment"># we can take the softmax to get the label probabilities</span> <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-built_in">print</span>(probs) tensor([[<span class="hljs-number">1.9496e-04</span>, <span class="hljs-number">9.9960e-01</span>, <span class="hljs-number">2.0825e-04</span>]])<!-- HTML_TAG_END --></pre></div></div></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XCLIPModel.get_text_features"><!-- HTML_TAG_START --><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 
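The fields listed under Returns can be read directly off the returned `XCLIPOutput`. The short sketch below is not part of the original example; it simply reuses the `outputs` object from the doctest above to look at the text-to-video direction of the same scores (`logits_per_text`) and at the projected embeddings:

```python
>>> # Illustrative sketch only: reuses `outputs` from the forward example above (3 text prompts, 1 video)
>>> logits_per_text = outputs.logits_per_text        # text-video similarity scores, shape (text_batch_size, video_batch_size)
>>> probs_per_text = logits_per_text.softmax(dim=0)  # probability of each prompt matching the single video
>>> text_embeds, video_embeds = outputs.text_embeds, outputs.video_embeds  # projected embeddings described above
```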
#### get_text_features

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/x_clip/modeling_x_clip.py#L1339)

`( input_ids: typing.Optional[torch.Tensor] = None, attention_mask: typing.Optional[torch.Tensor] = None, position_ids: typing.Optional[torch.Tensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → text_features (torch.FloatTensor of shape (batch_size, output_dim))`

Parameters:

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`): Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See `PreTrainedTokenizer.encode()` and `PreTrainedTokenizer.__call__()` for details. [What are input IDs?](../glossary#input-ids)
- **attention_mask** (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*): Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask)
- **position_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids)
- **output_attentions** (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*): Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*): Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.

Returns: text_features (`torch.FloatTensor` of shape `(batch_size, output_dim)`)

The text embeddings obtained by applying the projection layer to the pooled output of [XCLIPTextModel](/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPTextModel).

The [XCLIPModel](/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPModel) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Examples:

```python
>>> from transformers import AutoTokenizer, AutoModel

>>> tokenizer = AutoTokenizer.from_pretrained("microsoft/xclip-base-patch32")
>>> model = AutoModel.from_pretrained("microsoft/xclip-base-patch32")

>>> inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt")
>>> text_features = model.get_text_features(**inputs)
```
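If the text features are used for retrieval on their own (outside `forward()`, which handles the scoring internally), a common convention for CLIP-style models is to L2-normalize the embeddings so that dot products behave as cosine similarities. The sketch below is illustrative only and reuses the `text_features` tensor from the example above; the normalization step is a convention, not something `get_text_features` performs for you:

```python
>>> # Illustrative sketch only: reuses `text_features` from the example above (two prompts)
>>> text_features = text_features / text_features.norm(p=2, dim=-1, keepdim=True)  # L2-normalize each embedding
>>> similarity = text_features @ text_features.T  # 2x2 cosine-similarity matrix between the two prompts
```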
26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>get_video_features</span></h4><!-- HTML_TAG_END --> <a id="transformers.XCLIPModel.get_video_features" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XCLIPModel.get_video_features"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/x_clip/modeling_x_clip.py#L1386" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">pixel_values<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><!-- HTML_TAG_START --><span>video_features (<code>torch.FloatTensor</code> of shape <code>(batch_size, output_dim</code>)</span><!-- HTML_TAG_END --></span></p> <div 
class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XCLIPModel.get_video_features.pixel_values" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XCLIPModel.get_video_features.pixel_values"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>pixel_values</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, num_channels, height, width)</code>) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoImageProcessor">AutoImageProcessor</a>. See <a href="/docs/transformers/v4.34.0/en/model_doc/deit#transformers.DeiTFeatureExtractor.__call__">CLIPImageProcessor.<strong>call</strong>()</a> for details.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XCLIPModel.get_video_features.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XCLIPModel.get_video_features.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. 
See <code>attentions</code> under returned tensors for more detail.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XCLIPModel.get_video_features.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XCLIPModel.get_video_features.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. See <code>hidden_states</code> under returned tensors for more detail.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XCLIPModel.get_video_features.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XCLIPModel.get_video_features.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.<!-- HTML_TAG_END --> </span></span> </li></ul> <div id="transformers.XCLIPModel.get_video_features.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <!-- HTML_TAG_START --> <p>video_features (<code>torch.FloatTensor</code> of shape <code>(batch_size, output_dim</code>)</p> <!-- HTML_TAG_END --> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"><!-- HTML_TAG_START --> </p><p>The video embeddings obtained by 
applying the projection layer to the pooled output of [XCLIPVisionModel](/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPVisionModel) and `XCLIPMultiframeIntegrationTransformer`.

The [XCLIPModel](/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPModel) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Examples:

```python
>>> import av
>>> import torch
>>> import numpy as np

>>> from transformers import AutoProcessor, AutoModel
>>> from huggingface_hub import hf_hub_download
>>> np.random.seed(0)

>>> def read_video_pyav(container, indices):
...     '''
...     Decode the video with PyAV decoder.
...     Args:
...         container (`av.container.input.InputContainer`): PyAV container.
...         indices (`List[int]`): List of frame indices to decode.
...     Returns:
...         result (np.ndarray): np array of decoded frames of shape (num_frames, height, width, 3).
...     '''
...     frames = []
...     container.seek(0)
...     start_index = indices[0]
...     end_index = indices[-1]
...     for i, frame in enumerate(container.decode(video=0)):
...         if i > end_index:
...             break
...         if i >= start_index and i in indices:
...             frames.append(frame)
...     return np.stack([x.to_ndarray(format="rgb24") for x in frames])

>>> def sample_frame_indices(clip_len, frame_sample_rate, seg_len):
...     '''
...     Sample a given number of frame indices from the video.
...     Args:
...         clip_len (`int`): Total number of frames to sample.
...         frame_sample_rate (`int`): Sample every n-th frame.
...         seg_len (`int`): Maximum allowed index of sample's last frame.
...     Returns:
...         indices (`List[int]`): List of sampled frame indices
...     '''
...     converted_len = int(clip_len * frame_sample_rate)
...     end_idx = np.random.randint(converted_len, seg_len)
...     start_idx = end_idx - converted_len
...     indices = np.linspace(start_idx, end_idx, num=clip_len)
...     indices = np.clip(indices, start_idx, end_idx - 1).astype(np.int64)
...     return indices

>>> # video clip consists of 300 frames (10 seconds at 30 FPS)
>>> file_path = hf_hub_download(
...     repo_id="nielsr/video-demo", filename="eating_spaghetti.mp4", repo_type="dataset"
... )
>>> container = av.open(file_path)

>>> # sample 8 frames
>>> indices = sample_frame_indices(clip_len=8, frame_sample_rate=1, seg_len=container.streams.video[0].frames)
>>> video = read_video_pyav(container, indices)

>>> processor = AutoProcessor.from_pretrained("microsoft/xclip-base-patch32")
>>> model = AutoModel.from_pretrained("microsoft/xclip-base-patch32")

>>> inputs = processor(videos=list(video), return_tensors="pt")
>>> video_features = model.get_video_features(**inputs)
```
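The video embedding returned above lives in the joint video-text embedding space. As a follow-up sketch (not part of the original example, and only an approximation, since the full `XCLIPModel` forward additionally conditions the text embeddings on the video through its prompt generator), it could be compared against text embeddings obtained from `get_text_features`; the prompt strings below are arbitrary placeholders:

```python
>>> import torch

>>> # Sketch: rough video-text similarity using the projected embeddings.
>>> text_inputs = processor(text=["eating spaghetti", "playing guitar"], padding=True, return_tensors="pt")
>>> text_features = model.get_text_features(**text_inputs)

>>> # cosine similarity between L2-normalised video and text embeddings
>>> video_emb = torch.nn.functional.normalize(video_features, dim=-1)
>>> text_emb = torch.nn.functional.normalize(text_features, dim=-1)
>>> scores = video_emb @ text_emb.T  # shape: (num_videos, num_texts)
```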
## XCLIPTextModel

### class transformers.XCLIPTextModel

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/x_clip/modeling_x_clip.py#L833)

( config: XCLIPTextConfig )

#### forward

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/x_clip/modeling_x_clip.py#L848)

( input_ids: typing.Optional[torch.Tensor] = None, attention_mask: typing.Optional[torch.Tensor] = None, position_ids: typing.Optional[torch.Tensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → [transformers.modeling_outputs.BaseModelOutputWithPooling](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutputWithPooling) or `tuple(torch.FloatTensor)`

Parameters:

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See `PreTrainedTokenizer.encode()` and `PreTrainedTokenizer.__call__()` for details. [What are input IDs?](../glossary#input-ids)
- **attention_mask** (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask)
- **position_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids)
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.

Returns: [transformers.modeling_outputs.BaseModelOutputWithPooling](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutputWithPooling) or `tuple(torch.FloatTensor)`

A [transformers.modeling_outputs.BaseModelOutputWithPooling](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutputWithPooling) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration (`XCLIPTextConfig`) and inputs.

- **last_hidden_state** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`) — Sequence of hidden-states at the output of the last layer of the model.
- **pooler_output** (`torch.FloatTensor` of shape `(batch_size, hidden_size)`) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for the BERT family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The [XCLIPTextModel](/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPTextModel) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Examples:

```python
>>> from transformers import AutoTokenizer, XCLIPTextModel

>>> model = XCLIPTextModel.from_pretrained("microsoft/xclip-base-patch32")
>>> tokenizer = AutoTokenizer.from_pretrained("microsoft/xclip-base-patch32")

>>> inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt")

>>> outputs = model(**inputs)
>>> last_hidden_state = outputs.last_hidden_state
>>> pooled_output = outputs.pooler_output  # pooled (EOS token) states
```
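As a small follow-up sketch (not part of the original example), the pooled (EOS token) states can be treated as sentence-level text embeddings. Note that `XCLIPModel.get_text_features` additionally applies a learned projection on top of these states, so the raw pooler output does not live in the joint video-text embedding space; the sketch below only compares the two prompts with each other:

```python
>>> import torch

>>> # Sketch: cosine similarity between the two prompts using the raw pooled states.
>>> embeddings = torch.nn.functional.normalize(pooled_output, dim=-1)
>>> similarity = embeddings[0] @ embeddings[1]
```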
## XCLIPVisionModel

### class transformers.XCLIPVisionModel

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/x_clip/modeling_x_clip.py#L1048)

( config: XCLIPVisionConfig )

#### forward

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/x_clip/modeling_x_clip.py#L1061)

( pixel_values: typing.Optional[torch.FloatTensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → [transformers.modeling_outputs.BaseModelOutputWithPooling](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutputWithPooling) or `tuple(torch.FloatTensor)`

Parameters:

- **pixel_values** (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using [AutoImageProcessor](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoImageProcessor). See `CLIPImageProcessor.__call__()` for details.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.

Returns: [transformers.modeling_outputs.BaseModelOutputWithPooling](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutputWithPooling) or `tuple(torch.FloatTensor)`

A [transformers.modeling_outputs.BaseModelOutputWithPooling](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutputWithPooling) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration (`XCLIPVisionConfig`) and inputs.

- **last_hidden_state** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`) — Sequence of hidden-states at the output of the last layer of the model.
- **pooler_output** (`torch.FloatTensor` of shape `(batch_size, hidden_size)`) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for the BERT family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The [XCLIPVisionModel](/docs/transformers/v4.34.0/en/model_doc/xclip#transformers.XCLIPVisionModel) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Examples:

```python
>>> import av
>>> import torch
>>> import numpy as np

>>> from transformers import AutoProcessor, XCLIPVisionModel
>>> from huggingface_hub import hf_hub_download
>>> np.random.seed(0)

>>> def read_video_pyav(container, indices):
...     '''
...     Decode the video with PyAV decoder.
...     Args:
...         container (`av.container.input.InputContainer`): PyAV container.
...         indices (`List[int]`): List of frame indices to decode.
...     Returns:
...         result (np.ndarray): np array of decoded frames of shape (num_frames, height, width, 3).
...     '''
...     frames = []
...     container.seek(0)
...     start_index = indices[0]
...     end_index = indices[-1]
...     for i, frame in enumerate(container.decode(video=0)):
...         if i > end_index:
...             break
...         if i >= start_index and i in indices:
...             frames.append(frame)
...     return np.stack([x.to_ndarray(format="rgb24") for x in frames])

>>> def sample_frame_indices(clip_len, frame_sample_rate, seg_len):
...     '''
...     Sample a given number of frame indices from the video.
...     Args:
...         clip_len (`int`): Total number of frames to sample.
...         frame_sample_rate (`int`): Sample every n-th frame.
...         seg_len (`int`): Maximum allowed index of sample's last frame.
...     Returns:
...         indices (`List[int]`): List of sampled frame indices
...     '''
...     converted_len = int(clip_len * frame_sample_rate)
...     end_idx = np.random.randint(converted_len, seg_len)
...     start_idx = end_idx - converted_len
...     indices = np.linspace(start_idx, end_idx, num=clip_len)
...     indices = np.clip(indices, start_idx, end_idx - 1).astype(np.int64)
...     return indices

>>> # video clip consists of 300 frames (10 seconds at 30 FPS)
>>> file_path = hf_hub_download(
...     repo_id="nielsr/video-demo", filename="eating_spaghetti.mp4", repo_type="dataset"
... )
</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>container = av.<span class="hljs-built_in">open</span>(file_path) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># sample 16 frames</span> <span class="hljs-meta">&gt;&gt;&gt; </span>indices = sample_frame_indices(clip_len=<span class="hljs-number">8</span>, frame_sample_rate=<span class="hljs-number">1</span>, seg_len=container.streams.video[<span class="hljs-number">0</span>].frames) <span class="hljs-meta">&gt;&gt;&gt; </span>video = read_video_pyav(container, indices) <span class="hljs-meta">&gt;&gt;&gt; </span>processor = AutoProcessor.from_pretrained(<span class="hljs-string">"microsoft/xclip-base-patch32"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = XCLIPVisionModel.from_pretrained(<span class="hljs-string">"microsoft/xclip-base-patch32"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>pixel_values = processor(videos=<span class="hljs-built_in">list</span>(video), return_tensors=<span class="hljs-string">"pt"</span>).pixel_values <span class="hljs-meta">&gt;&gt;&gt; </span>batch_size, num_frames, num_channels, height, width = pixel_values.shape <span class="hljs-meta">&gt;&gt;&gt; </span>pixel_values = pixel_values.reshape(-<span class="hljs-number">1</span>, num_channels, height, width) <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model(pixel_values) <span class="hljs-meta">&gt;&gt;&gt; </span>last_hidden_state = outputs.last_hidden_state<!-- HTML_TAG_END --></pre></div></div></div></div> <p></p> <script> { __sveltekit_1yybmhh = { assets: "/docs/transformers/v4.34.0/en", base: "/docs/transformers/v4.34.0/en", env: {} }; const element = document.currentScript.parentElement; const data = [null,null]; Promise.all([ import("/docs/transformers/v4.34.0/en/_app/immutable/entry/start.c2db227a.js"), import("/docs/transformers/v4.34.0/en/_app/immutable/entry/app.879d9b87.js") ]).then(([kit, app]) => { kit.start(app, element, { node_ids: [0, 275], data, form: null, error: null }); }); } </script> <!-- HTML_TAG_END --></div> <div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/visual_bert" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>VisualBERT</a> <a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/decision_transformer" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">Decision Transformer<span class="ml-2 translate-y-px">→</span></a></div></div></div> <div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" 
data-props="{&quot;chapter&quot;:{&quot;title&quot;:&quot;X-CLIP&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;xclip&quot;,&quot;url&quot;:&quot;#xclip&quot;,&quot;sections&quot;:[{&quot;title&quot;:&quot;Overview&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;overview&quot;,&quot;url&quot;:&quot;#overview&quot;},{&quot;title&quot;:&quot;Resources&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;resources&quot;,&quot;url&quot;:&quot;#resources&quot;},{&quot;title&quot;:&quot;XCLIPProcessor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.XCLIPProcessor&quot;,&quot;url&quot;:&quot;#transformers.XCLIPProcessor&quot;},{&quot;title&quot;:&quot;XCLIPConfig&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.XCLIPConfig&quot;,&quot;url&quot;:&quot;#transformers.XCLIPConfig&quot;},{&quot;title&quot;:&quot;XCLIPTextConfig&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.XCLIPTextConfig&quot;,&quot;url&quot;:&quot;#transformers.XCLIPTextConfig&quot;},{&quot;title&quot;:&quot;XCLIPVisionConfig&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.XCLIPVisionConfig&quot;,&quot;url&quot;:&quot;#transformers.XCLIPVisionConfig&quot;},{&quot;title&quot;:&quot;XCLIPModel&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.XCLIPModel&quot;,&quot;url&quot;:&quot;#transformers.XCLIPModel&quot;},{&quot;title&quot;:&quot;XCLIPTextModel&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.XCLIPTextModel&quot;,&quot;url&quot;:&quot;#transformers.XCLIPTextModel&quot;},{&quot;title&quot;:&quot;XCLIPVisionModel&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.XCLIPVisionModel&quot;,&quot;url&quot;:&quot;#transformers.XCLIPVisionModel&quot;}]}}" data-target="SubSideMenu"><nav class="hidden h-screen w-[270px] flex-none flex-col space-y-3 overflow-y-auto break-words border-l pt-24 pl-6 pr-10 pb-16 text-sm lg:flex 2xl:w-[305px]"><a href="#xclip" class=" text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-xclip"><wbr>X-CLIP</a> <a href="#overview" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-overview"><wbr>Overview</a> <a href="#resources" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-resources"><wbr>Resources</a> <a href="#transformers.XCLIPProcessor" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.XCLIPProcessor">XCLIP<wbr>Processor</a> <a href="#transformers.XCLIPConfig" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.XCLIPConfig">XCLIP<wbr>Config</a> <a href="#transformers.XCLIPTextConfig" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.XCLIPTextConfig">XCLIP<wbr>Text<wbr>Config</a> <a href="#transformers.XCLIPVisionConfig" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.XCLIPVisionConfig">XCLIP<wbr>Vision<wbr>Config</a> <a href="#transformers.XCLIPModel" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.XCLIPModel">XCLIP<wbr>Model</a> <a href="#transformers.XCLIPTextModel" class="pl-4 text-gray-400 transform hover:translate-x-px 
hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.XCLIPTextModel">XCLIP<wbr>Text<wbr>Model</a> <a href="#transformers.XCLIPVisionModel" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.XCLIPVisionModel">XCLIP<wbr>Vision<wbr>Model</a> </nav></div></div></div> <div id="doc-footer"></div></main> </div> <script> import("/front/build/kube-b0520c1/index.js"); window.moonSha = "kube-b0520c1/"; window.hubConfig = JSON.parse(`{"features":{"signupDisabled":false},"sshGitUrl":"git@hf.co","moonHttpUrl":"https://huggingface.co","captchaApiKey":"bd5f2066-93dc-4bdd-a64b-a24646ca3859","stripePublicKey":"pk_live_x2tdjFXBCvXo2FFmMybezpeM00J6gPCAAc","environment":"production","userAgent":"HuggingFace (production)"}`); </script> <!-- Stripe --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://js.stripe.com/v3/"; script.async = true; document.head.appendChild(script); } </script> <!-- Google analytics v4 --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL"; script.async = true; document.head.appendChild(script); window.dataLayer = window.dataLayer || []; function gtag() { if (window.dataLayer !== undefined) { window.dataLayer.push(arguments); } } gtag("js", new Date()); gtag("config", "G-8Q63TH4CSL", { page_path: "/docs/transformers/v4.34.0/en/model_doc/xclip" }); /// ^ See https://developers.google.com/analytics/devguides/collection/gtagjs/pages gtag("consent", "default", { ad_storage: "denied", analytics_storage: "denied" }); /// ^ See https://developers.google.com/tag-platform/gtagjs/reference#consent /// TODO: ask the user for their consent and update this with gtag('consent', 'update') } </script> <!-- Google Analytics v3 (deprecated) --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { (function (i, s, o, g, r, a, m) { i["GoogleAnalyticsObject"] = r; (i[r] = i[r] || function () { (i[r].q = i[r].q || []).push(arguments); }), (i[r].l = 1 * new Date()); (a = s.createElement(o)), (m = s.getElementsByTagName(o)[0]); a.async = 1; a.src = g; m.parentNode.insertBefore(a, m); })(window, document, "script", "https://www.google-analytics.com/analytics.js", "ganalytics"); ganalytics("create", "UA-83738774-2", "auto"); ganalytics("send", "pageview", "/docs/transformers/v4.34.0/en/model_doc/xclip"); } </script> <iframe name="__privateStripeMetricsController3450" frameborder="0" allowtransparency="true" scrolling="no" role="presentation" allow="payment *" src="https://js.stripe.com/v3/m-outer-27c67c0d52761104439bb051c7856ab1.html#url=https%3A%2F%2Fhuggingface.co%2Fdocs%2Ftransformers%2Fv4.34.0%2Fen%2Fmodel_doc%2Fxclip&amp;title=X-CLIP&amp;referrer=&amp;muid=b15a8ef9-7618-4d98-9abd-1d7fdb18f47df4c702&amp;sid=0da2c795-975c-45a5-a090-0475ca1e345f07aeed&amp;version=6&amp;preview=false" aria-hidden="true" tabindex="-1" style="border: none !important; margin: 0px !important; padding: 0px !important; width: 1px !important; min-width: 100% !important; overflow: hidden !important; display: block !important; visibility: hidden !important; position: fixed !important; height: 1px !important; pointer-events: none !important; user-select: none !important;"></iframe></body></html>
2023-10-05T13:33:30.799Z
WavLM
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/wavlm
# WavLM

## Overview

The WavLM model was proposed in [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei.

The abstract from the paper is the following:

_Self-supervised learning (SSL) achieves great success in speech recognition, while limited exploration has been attempted for other speech processing tasks. As speech signal contains multi-faceted information including speaker identity, paralinguistics, spoken content, etc., learning universal representations for all speech tasks is challenging. In this paper, we propose a new pre-trained model, WavLM, to solve full-stack downstream speech tasks. WavLM is built based on the HuBERT framework, with an emphasis on both spoken content modeling and speaker identity preservation. We first equip the Transformer structure with gated relative position bias to improve its capability on recognition tasks. For better speaker discrimination, we propose an utterance mixing training strategy, where additional overlapped utterances are created unsupervisely and incorporated during model training. Lastly, we scale up the training dataset from 60k hours to 94k hours. WavLM Large achieves state-of-the-art performance on the SUPERB benchmark, and brings significant improvements for various speech processing tasks on their representative benchmarks._

Tips:

- WavLM is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. Please use [Wav2Vec2Processor](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor) for the feature extraction.
- The WavLM model can be fine-tuned using connectionist temporal classification (CTC), so the model output has to be decoded using [Wav2Vec2CTCTokenizer](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2CTCTokenizer) (see the short sketch at the end of this section).
- WavLM performs especially well on speaker verification, speaker identification, and speaker diarization tasks.

Relevant checkpoints can be found under [https://huggingface.co/models?other=wavlm](https://huggingface.co/models?other=wavlm).

This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The authors' code can be found [here](https://github.com/microsoft/unilm/tree/master/wavlm).
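The two tips above fit together as follows. This is a minimal sketch rather than part of the official documentation: it reuses the `patrickvonplaten/wavlm-libri-clean-100h-base-plus` checkpoint and the `hf-internal-testing/librispeech_asr_demo` dataset from the examples further down. `Wav2Vec2Processor` bundles the feature extractor with the `Wav2Vec2CTCTokenizer`, so its `batch_decode` performs the CTC decoding.

```
>>> import torch
>>> from datasets import load_dataset
>>> from transformers import Wav2Vec2Processor, WavLMForCTC

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> # the processor wraps the feature extractor (raw waveform -> input_values)
>>> # and the CTC tokenizer used to decode the model output
>>> processor = Wav2Vec2Processor.from_pretrained("patrickvonplaten/wavlm-libri-clean-100h-base-plus")
>>> model = WavLMForCTC.from_pretrained("patrickvonplaten/wavlm-libri-clean-100h-base-plus")

>>> # feature extraction: raw float waveform -> padded input_values
>>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> # greedy CTC decoding: argmax over the vocabulary, then collapse repeats and blanks
>>> predicted_ids = torch.argmax(logits, dim=-1)
>>> transcription = processor.batch_decode(predicted_ids)[0]
```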
## Documentation resources

- [Audio classification task guide](../tasks/audio_classification)
- [Automatic speech recognition task guide](../tasks/asr)

## WavLMConfig

### class transformers.WavLMConfig

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wavlm/configuration_wavlm.py#L32)

( vocab\_size = 32, hidden\_size = 768, num\_hidden\_layers = 12, num\_attention\_heads = 12, intermediate\_size = 3072, hidden\_act = 'gelu', hidden\_dropout = 0.1, activation\_dropout = 0.1, attention\_dropout = 0.1, feat\_proj\_dropout = 0.0, final\_dropout = 0.1, layerdrop = 0.1, initializer\_range = 0.02, layer\_norm\_eps = 1e-05, feat\_extract\_norm = 'group', feat\_extract\_activation = 'gelu', conv\_dim = (512, 512, 512, 512, 512, 512, 512), conv\_stride = (5, 2, 2, 2, 2, 2, 2), conv\_kernel = (10, 3, 3, 3, 3, 2, 2), conv\_bias = False, num\_conv\_pos\_embeddings = 128, num\_conv\_pos\_embedding\_groups = 16, num\_buckets = 320, max\_bucket\_distance = 800, do\_stable\_layer\_norm = False, apply\_spec\_augment = True, mask\_time\_prob = 0.05, mask\_time\_length = 10, mask\_time\_min\_masks = 2, mask\_feature\_prob = 0.0, mask\_feature\_length = 10, num\_codevectors\_per\_group = 320, num\_codevector\_groups = 2, contrastive\_logits\_temperature = 0.1, num\_negatives = 100, codevector\_dim = 256, proj\_codevector\_dim = 256, diversity\_loss\_weight = 0.1, ctc\_loss\_reduction = 'mean', ctc\_zero\_infinity = False, use\_weighted\_layer\_sum = False, classifier\_proj\_size = 256, tdnn\_dim = (512, 512, 512, 512, 1500), tdnn\_kernel = (5, 3, 3, 1, 1), tdnn\_dilation = (1, 2, 3, 1, 1), xvector\_output\_dim = 512, num\_ctc\_classes = 80, pad\_token\_id = 0, bos\_token\_id = 1, eos\_token\_id = 2, add\_adapter = False, adapter\_kernel\_size = 3, adapter\_stride = 2, num\_adapter\_layers = 3, output\_hidden\_size = None, \*\*kwargs )

This is the configuration class to store the configuration of a [WavLMModel](/docs/transformers/v4.34.0/en/model_doc/wavlm#transformers.WavLMModel). It is used to instantiate a WavLM model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the WavLM [microsoft/wavlm-base](https://huggingface.co/microsoft/wavlm-base) architecture.

Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information.

Example:

```
>>> from transformers import WavLMConfig, WavLMModel

>>> # Initializing a WavLM configuration (microsoft/wavlm-base style architecture)
>>> configuration = WavLMConfig()

>>> # Initializing a model (with random weights) from that configuration
>>> model = WavLMModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```

## WavLMModel

### class transformers.WavLMModel

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wavlm/modeling_wavlm.py#L1122)

( config: WavLMConfig )

Parameters

- **config** ([WavLMConfig](/docs/transformers/v4.34.0/en/model_doc/wavlm#transformers.WavLMConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

The bare WavLM Model transformer outputting raw hidden-states without any specific head on top.
WavLM was proposed in [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Xiangzhan Yu, Furu Wei.

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, etc.).

This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

#### forward

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wavlm/modeling_wavlm.py#L1208)

( input\_values: typing.Optional\[torch.Tensor\], attention\_mask: typing.Optional\[torch.Tensor\] = None, mask\_time\_indices: typing.Optional\[torch.FloatTensor\] = None, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.Wav2Vec2BaseModelOutput](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.modeling_outputs.Wav2Vec2BaseModelOutput) or `tuple(torch.FloatTensor)`

The [WavLMModel](/docs/transformers/v4.34.0/en/model_doc/wavlm#transformers.WavLMModel) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

```
>>> from transformers import AutoProcessor, WavLMModel
>>> import torch
>>> from datasets import load_dataset

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> processor = AutoProcessor.from_pretrained("patrickvonplaten/wavlm-libri-clean-100h-base-plus")
>>> model = WavLMModel.from_pretrained("patrickvonplaten/wavlm-libri-clean-100h-base-plus")

>>> # audio file is decoded on the fly
>>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> last_hidden_states = outputs.last_hidden_state
>>> list(last_hidden_states.shape)
[1, 292, 768]
```

## WavLMForCTC

### class transformers.WavLMForCTC

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wavlm/modeling_wavlm.py#L1274)

( config, target\_lang: typing.Optional\[str\] = None )

Parameters

- **config** ([WavLMConfig](/docs/transformers/v4.34.0/en/model_doc/wavlm#transformers.WavLMConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

WavLM Model with a `language modeling` head on top for Connectionist Temporal Classification (CTC).
WavLM was proposed in [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Xiangzhan Yu, Furu Wei.

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, etc.).

This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

#### forward

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wavlm/modeling_wavlm.py#L1346)

( input\_values: typing.Optional\[torch.Tensor\], attention\_mask: typing.Optional\[torch.Tensor\] = None, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None, labels: typing.Optional\[torch.Tensor\] = None ) → [transformers.modeling\_outputs.CausalLMOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.CausalLMOutput) or `tuple(torch.FloatTensor)`

The [WavLMForCTC](/docs/transformers/v4.34.0/en/model_doc/wavlm#transformers.WavLMForCTC) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

```
>>> from transformers import AutoProcessor, WavLMForCTC
>>> from datasets import load_dataset
>>> import torch

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> processor = AutoProcessor.from_pretrained("patrickvonplaten/wavlm-libri-clean-100h-base-plus")
>>> model = WavLMForCTC.from_pretrained("patrickvonplaten/wavlm-libri-clean-100h-base-plus")

>>> # audio file is decoded on the fly
>>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_ids = torch.argmax(logits, dim=-1)

>>> # transcribe speech
>>> transcription = processor.batch_decode(predicted_ids)
>>> transcription[0]
'mister quilter is the aposle of the middle classes and we are glad to welcome his gospel'

>>> inputs["labels"] = processor(text=dataset[0]["text"], return_tensors="pt").input_ids

>>> # compute loss
>>> loss = model(**inputs).loss
>>> round(loss.item(), 2)
12.51
```

## WavLMForSequenceClassification

### class transformers.WavLMForSequenceClassification

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wavlm/modeling_wavlm.py#L1433)

( config )

Parameters

- **config** ([WavLMConfig](/docs/transformers/v4.34.0/en/model_doc/wavlm#transformers.WavLMConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration.
Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

WavLM Model with a sequence classification head on top (a linear layer over the pooled output) for tasks like SUPERB Keyword Spotting.

WavLM was proposed in [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Xiangzhan Yu, Furu Wei.

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, etc.).

This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

#### forward

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wavlm/modeling_wavlm.py#L1481)

( input\_values: typing.Optional\[torch.Tensor\], attention\_mask: typing.Optional\[torch.Tensor\] = None, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None, labels: typing.Optional\[torch.Tensor\] = None ) → [transformers.modeling\_outputs.SequenceClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput) or `tuple(torch.FloatTensor)`

The [WavLMForSequenceClassification](/docs/transformers/v4.34.0/en/model_doc/wavlm#transformers.WavLMForSequenceClassification) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

```
>>> from transformers import AutoFeatureExtractor, WavLMForSequenceClassification
>>> from datasets import load_dataset
>>> import torch

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("patrickvonplaten/wavlm-libri-clean-100h-base-plus")
>>> model = WavLMForSequenceClassification.from_pretrained("patrickvonplaten/wavlm-libri-clean-100h-base-plus")

>>> # audio file is decoded on the fly
>>> inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_class_ids = torch.argmax(logits, dim=-1).item()
>>> predicted_label = model.config.id2label[predicted_class_ids]

>>> # compute loss for a (dummy) target label
>>> target_label = model.config.id2label[0]
>>> inputs["labels"] = torch.tensor([model.config.label2id[target_label]])
>>> loss = model(**inputs).loss
```

## WavLMForAudioFrameClassification

### class transformers.WavLMForAudioFrameClassification

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wavlm/modeling_wavlm.py#L1558)

( config )

Parameters

- **config** ([WavLMConfig](/docs/transformers/v4.34.0/en/model_doc/wavlm#transformers.WavLMConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

WavLM Model with a frame classification head on top for tasks like Speaker Diarization.

WavLM was proposed in [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Xiangzhan Yu, Furu Wei.

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, etc.).

This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

#### forward

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wavlm/modeling_wavlm.py#L1602)

( input\_values: typing.Optional\[torch.Tensor\], attention\_mask: typing.Optional\[torch.Tensor\] = None, labels: typing.Optional\[torch.Tensor\] = None, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.TokenClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.TokenClassifierOutput) or `tuple(torch.FloatTensor)`

The [WavLMForAudioFrameClassification](/docs/transformers/v4.34.0/en/model_doc/wavlm#transformers.WavLMForAudioFrameClassification) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:

```
>>> from transformers import AutoFeatureExtractor, WavLMForAudioFrameClassification
>>> from datasets import load_dataset
>>> import torch

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/wavlm-base-plus-sd")
>>> model = WavLMForAudioFrameClassification.from_pretrained("microsoft/wavlm-base-plus-sd")

>>> # audio file is decoded on the fly
>>> inputs = feature_extractor(dataset[0]["audio"]["array"], return_tensors="pt", sampling_rate=sampling_rate)
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> probabilities = torch.sigmoid(logits[0])

>>> # labels is a one-hot array of shape (num_frames, num_speakers)
>>> labels = (probabilities > 0.5).long()
>>> labels[0].tolist()
[0, 0]
```

## WavLMForXVector

### class transformers.WavLMForXVector

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wavlm/modeling_wavlm.py#L1722)

( config )

Parameters

- **config** ([WavLMConfig](/docs/transformers/v4.34.0/en/model_doc/wavlm#transformers.WavLMConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

WavLM Model with an XVector feature extraction head on top for tasks like Speaker Verification.

WavLM was proposed in [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Xiangzhan Yu, Furu Wei.

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, etc.).

This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

#### forward

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wavlm/modeling_wavlm.py#L1784)

( input\_values: typing.Optional\[torch.Tensor\], attention\_mask: typing.Optional\[torch.Tensor\] = None, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None, labels: typing.Optional\[torch.Tensor\] = None ) → [transformers.modeling\_outputs.XVectorOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.XVectorOutput) or `tuple(torch.FloatTensor)`

The [WavLMForXVector](/docs/transformers/v4.34.0/en/model_doc/wavlm#transformers.WavLMForXVector) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:

```
>>> from transformers import AutoFeatureExtractor, WavLMForXVector
>>> from datasets import load_dataset
>>> import torch

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/wavlm-base-plus-sv")
>>> model = WavLMForXVector.from_pretrained("microsoft/wavlm-base-plus-sv")

>>> # audio files are decoded on the fly
>>> inputs = feature_extractor(
...     [d["array"] for d in dataset[:2]["audio"]], sampling_rate=sampling_rate, return_tensors="pt", padding=True
... )
>>> with torch.no_grad():
...     embeddings = model(**inputs).embeddings

>>> embeddings = torch.nn.functional.normalize(embeddings, dim=-1).cpu()

>>> # the resulting embeddings can be used for cosine similarity-based speaker verification
>>> cosine_sim = torch.nn.CosineSimilarity(dim=-1)
>>> similarity = cosine_sim(embeddings[0], embeddings[1])
>>> threshold = 0.7  # the optimal threshold is dataset-dependent
>>> if similarity < threshold:
...     print("Speakers are not the same!")
>>> round(similarity.item(), 2)
0.97
```
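The comparison above can be wrapped into a small reusable helper. This is a minimal sketch rather than part of the library API: the `verify_speakers` function name and its default threshold are illustrative, and it reuses the `microsoft/wavlm-base-plus-sv` checkpoint loaded in the example.

```
import torch
from transformers import AutoFeatureExtractor, WavLMForXVector

# checkpoint from the example above; the helper itself is only illustrative
feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/wavlm-base-plus-sv")
model = WavLMForXVector.from_pretrained("microsoft/wavlm-base-plus-sv")


def verify_speakers(waveform_a, waveform_b, sampling_rate=16_000, threshold=0.7):
    """Return (cosine_similarity, same_speaker) for two raw 1-D float waveforms."""
    inputs = feature_extractor(
        [waveform_a, waveform_b], sampling_rate=sampling_rate, return_tensors="pt", padding=True
    )
    with torch.no_grad():
        embeddings = model(**inputs).embeddings
    # L2-normalize so the dot product equals the cosine similarity
    embeddings = torch.nn.functional.normalize(embeddings, dim=-1)
    similarity = torch.nn.CosineSimilarity(dim=-1)(embeddings[0], embeddings[1]).item()
    return similarity, similarity >= threshold
```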
mechanisms&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;attention&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/attention&quot;},{&quot;title&quot;:&quot;Padding and truncation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pad_truncation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pad_truncation&quot;},{&quot;title&quot;:&quot;BERTology&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;bertology&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/bertology&quot;},{&quot;title&quot;:&quot;Perplexity of fixed-length models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perplexity&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perplexity&quot;},{&quot;title&quot;:&quot;Pipelines for webserver inference&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pipeline_webserver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pipeline_webserver&quot;},{&quot;title&quot;:&quot;Model training anatomy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_memory_anatomy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_memory_anatomy&quot;}]},{&quot;title&quot;:&quot;API&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Main Classes&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Agents and Tools&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/agent&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/agent&quot;},{&quot;title&quot;:&quot;Auto Classes&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/auto&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/auto&quot;},{&quot;title&quot;:&quot;Callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/callback&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/callback&quot;},{&quot;title&quot;:&quot;Configuration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/configuration&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/configuration&quot;},{&quot;title&quot;:&quot;Data Collator&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/data_collator&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/data_collator&quot;},{&quot;title&quot;:&quot;Keras callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/keras_callbacks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/keras_callbacks&quot;},{&quot;title&quot;:&quot;Logging&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/logging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/logging&quot;},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/model&quot;},{&quot;title&quot;:&quot;Text 
Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/text_generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/text_generation&quot;},{&quot;title&quot;:&quot;ONNX&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/onnx&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/onnx&quot;},{&quot;title&quot;:&quot;Optimization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/optimizer_schedules&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/optimizer_schedules&quot;},{&quot;title&quot;:&quot;Model outputs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/output&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/output&quot;},{&quot;title&quot;:&quot;Pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/pipelines&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/pipelines&quot;},{&quot;title&quot;:&quot;Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/processors&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/processors&quot;},{&quot;title&quot;:&quot;Quantization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/quantization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/quantization&quot;},{&quot;title&quot;:&quot;Tokenizer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/tokenizer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/tokenizer&quot;},{&quot;title&quot;:&quot;Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/trainer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/trainer&quot;},{&quot;title&quot;:&quot;DeepSpeed Integration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/deepspeed&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/deepspeed&quot;},{&quot;title&quot;:&quot;Feature Extractor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/feature_extractor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/feature_extractor&quot;},{&quot;title&quot;:&quot;Image Processor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/image_processor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/image_processor&quot;}]},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Text 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALBERT&quot;,&quot;id&quot;:&quot;model_doc/albert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/albert&quot;},{&quot;title&quot;:&quot;BART&quot;,&quot;id&quot;:&quot;model_doc/bart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bart&quot;},{&quot;title&quot;:&quot;BARThez&quot;,&quot;id&quot;:&quot;model_doc/barthez&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/barthez&quot;},{&quot;title&quot;:&quot;BARTpho&quot;,&quot;id&quot;:&quot;model_doc/bartpho&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bartpho&quot;},{&quot;title&quot;:&quot;BERT&quot;,&quot;id&quot;:&quot;model_doc/bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert&quot;},{&quot;title&quot;:&quot;BertGeneration&quot;,&quot;id&quot;:&quot;model_doc/bert-generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-generation&quot;},{&quot;title&quot;:&quot;BertJapanese&quot;,&quot;id&quot;:&quot;model_doc/bert-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-japanese&quot;},{&quot;title&quot;:&quot;Bertweet&quot;,&quot;id&quot;:&quot;model_doc/bertweet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bertweet&quot;},{&quot;title&quot;:&quot;BigBird&quot;,&quot;id&quot;:&quot;model_doc/big_bird&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/big_bird&quot;},{&quot;title&quot;:&quot;BigBirdPegasus&quot;,&quot;id&quot;:&quot;model_doc/bigbird_pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bigbird_pegasus&quot;},{&quot;title&quot;:&quot;BioGpt&quot;,&quot;id&quot;:&quot;model_doc/biogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/biogpt&quot;},{&quot;title&quot;:&quot;Blenderbot&quot;,&quot;id&quot;:&quot;model_doc/blenderbot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot&quot;},{&quot;title&quot;:&quot;Blenderbot 
Small&quot;,&quot;id&quot;:&quot;model_doc/blenderbot-small&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot-small&quot;},{&quot;title&quot;:&quot;BLOOM&quot;,&quot;id&quot;:&quot;model_doc/bloom&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bloom&quot;},{&quot;title&quot;:&quot;BORT&quot;,&quot;id&quot;:&quot;model_doc/bort&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bort&quot;},{&quot;title&quot;:&quot;ByT5&quot;,&quot;id&quot;:&quot;model_doc/byt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/byt5&quot;},{&quot;title&quot;:&quot;CamemBERT&quot;,&quot;id&quot;:&quot;model_doc/camembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/camembert&quot;},{&quot;title&quot;:&quot;CANINE&quot;,&quot;id&quot;:&quot;model_doc/canine&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/canine&quot;},{&quot;title&quot;:&quot;CodeGen&quot;,&quot;id&quot;:&quot;model_doc/codegen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/codegen&quot;},{&quot;title&quot;:&quot;CodeLlama&quot;,&quot;id&quot;:&quot;model_doc/code_llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/code_llama&quot;},{&quot;title&quot;:&quot;ConvBERT&quot;,&quot;id&quot;:&quot;model_doc/convbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convbert&quot;},{&quot;title&quot;:&quot;CPM&quot;,&quot;id&quot;:&quot;model_doc/cpm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpm&quot;},{&quot;title&quot;:&quot;CPMANT&quot;,&quot;id&quot;:&quot;model_doc/cpmant&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpmant&quot;},{&quot;title&quot;:&quot;CTRL&quot;,&quot;id&quot;:&quot;model_doc/ctrl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ctrl&quot;},{&quot;title&quot;:&quot;DeBERTa&quot;,&quot;id&quot;:&quot;model_doc/deberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta&quot;},{&quot;title&quot;:&quot;DeBERTa-v2&quot;,&quot;id&quot;:&quot;model_doc/deberta-v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta-v2&quot;},{&quot;title&quot;:&quot;DialoGPT&quot;,&quot;id&quot;:&quot;model_doc/dialogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dialogpt&quot;},{&quot;title&quot;:&quot;DistilBERT&quot;,&quot;id&quot;:&quot;model_doc/distilbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/distilbert&quot;},{&quot;title&quot;:&quot;DPR&quot;,&quot;id&quot;:&quot;model_doc/dpr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpr&quot;},{&quot;title&quot;:&quot;ELECTRA&quot;,&quot;id&quot;:&quot;model_doc/electra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/electra&quot;},{&quot;title&quot;:&quot;Encoder Decoder 
Models&quot;,&quot;id&quot;:&quot;model_doc/encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encoder-decoder&quot;},{&quot;title&quot;:&quot;ERNIE&quot;,&quot;id&quot;:&quot;model_doc/ernie&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie&quot;},{&quot;title&quot;:&quot;ErnieM&quot;,&quot;id&quot;:&quot;model_doc/ernie_m&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie_m&quot;},{&quot;title&quot;:&quot;ESM&quot;,&quot;id&quot;:&quot;model_doc/esm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/esm&quot;},{&quot;title&quot;:&quot;Falcon&quot;,&quot;id&quot;:&quot;model_doc/falcon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/falcon&quot;},{&quot;title&quot;:&quot;FLAN-T5&quot;,&quot;id&quot;:&quot;model_doc/flan-t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-t5&quot;},{&quot;title&quot;:&quot;FLAN-UL2&quot;,&quot;id&quot;:&quot;model_doc/flan-ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-ul2&quot;},{&quot;title&quot;:&quot;FlauBERT&quot;,&quot;id&quot;:&quot;model_doc/flaubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flaubert&quot;},{&quot;title&quot;:&quot;FNet&quot;,&quot;id&quot;:&quot;model_doc/fnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fnet&quot;},{&quot;title&quot;:&quot;FSMT&quot;,&quot;id&quot;:&quot;model_doc/fsmt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fsmt&quot;},{&quot;title&quot;:&quot;Funnel Transformer&quot;,&quot;id&quot;:&quot;model_doc/funnel&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/funnel&quot;},{&quot;title&quot;:&quot;GPT&quot;,&quot;id&quot;:&quot;model_doc/openai-gpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/openai-gpt&quot;},{&quot;title&quot;:&quot;GPT Neo&quot;,&quot;id&quot;:&quot;model_doc/gpt_neo&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neo&quot;},{&quot;title&quot;:&quot;GPT NeoX&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox&quot;},{&quot;title&quot;:&quot;GPT NeoX Japanese&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox_japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese&quot;},{&quot;title&quot;:&quot;GPT-J&quot;,&quot;id&quot;:&quot;model_doc/gptj&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptj&quot;},{&quot;title&quot;:&quot;GPT2&quot;,&quot;id&quot;:&quot;model_doc/gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt2&quot;},{&quot;title&quot;:&quot;GPTBigCode&quot;,&quot;id&quot;:&quot;model_doc/gpt_bigcode&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode&quot;},{&quot;title&quot;:&quot;GPTSAN 
Japanese&quot;,&quot;id&quot;:&quot;model_doc/gptsan-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese&quot;},{&quot;title&quot;:&quot;GPTSw3&quot;,&quot;id&quot;:&quot;model_doc/gpt-sw3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt-sw3&quot;},{&quot;title&quot;:&quot;HerBERT&quot;,&quot;id&quot;:&quot;model_doc/herbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/herbert&quot;},{&quot;title&quot;:&quot;I-BERT&quot;,&quot;id&quot;:&quot;model_doc/ibert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ibert&quot;},{&quot;title&quot;:&quot;Jukebox&quot;,&quot;id&quot;:&quot;model_doc/jukebox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/jukebox&quot;},{&quot;title&quot;:&quot;LED&quot;,&quot;id&quot;:&quot;model_doc/led&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/led&quot;},{&quot;title&quot;:&quot;LLaMA&quot;,&quot;id&quot;:&quot;model_doc/llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama&quot;},{&quot;title&quot;:&quot;Llama2&quot;,&quot;id&quot;:&quot;model_doc/llama2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama2&quot;},{&quot;title&quot;:&quot;Longformer&quot;,&quot;id&quot;:&quot;model_doc/longformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longformer&quot;},{&quot;title&quot;:&quot;LongT5&quot;,&quot;id&quot;:&quot;model_doc/longt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longt5&quot;},{&quot;title&quot;:&quot;LUKE&quot;,&quot;id&quot;:&quot;model_doc/luke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/luke&quot;},{&quot;title&quot;:&quot;M2M100&quot;,&quot;id&quot;:&quot;model_doc/m2m_100&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/m2m_100&quot;},{&quot;title&quot;:&quot;MarianMT&quot;,&quot;id&quot;:&quot;model_doc/marian&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/marian&quot;},{&quot;title&quot;:&quot;MarkupLM&quot;,&quot;id&quot;:&quot;model_doc/markuplm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/markuplm&quot;},{&quot;title&quot;:&quot;MBart and 
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot;PLBart&quot;,&quot;id&quot;
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/do
cs/transformers/v4.34.0/en/model_doc/wavlm&quot;},{&quot;title&quot;:&quot;Whisper&quot;,&quot;id&quot;:&quot;model_doc/whisper&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/whisper&quot;},{&quot;title&quot;:&quot;XLS-R&quot;,&quot;id&quot;:&quot;model_doc/xls_r&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xls_r&quot;},{&quot;title&quot;:&quot;XLSR-Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/xlsr_wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2&quot;}]},{&quot;title&quot;:&quot;Multimodal models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALIGN&quot;,&quot;id&quot;:&quot;model_doc/align&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/align&quot;},{&quot;title&quot;:&quot;AltCLIP&quot;,&quot;id&quot;:&quot;model_doc/altclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/altclip&quot;},{&quot;title&quot;:&quot;BLIP&quot;,&quot;id&quot;:&quot;model_doc/blip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip&quot;},{&quot;title&quot;:&quot;BLIP-2&quot;,&quot;id&quot;:&quot;model_doc/blip-2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip-2&quot;},{&quot;title&quot;:&quot;BridgeTower&quot;,&quot;id&quot;:&quot;model_doc/bridgetower&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bridgetower&quot;},{&quot;title&quot;:&quot;BROS&quot;,&quot;id&quot;:&quot;model_doc/bros&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bros&quot;},{&quot;title&quot;:&quot;Chinese-CLIP&quot;,&quot;id&quot;:&quot;model_doc/chinese_clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/chinese_clip&quot;},{&quot;title&quot;:&quot;CLIP&quot;,&quot;id&quot;:&quot;model_doc/clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clip&quot;},{&quot;title&quot;:&quot;CLIPSeg&quot;,&quot;id&quot;:&quot;model_doc/clipseg&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clipseg&quot;},{&quot;title&quot;:&quot;Data2Vec&quot;,&quot;id&quot;:&quot;model_doc/data2vec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/data2vec&quot;},{&quot;title&quot;:&quot;DePlot&quot;,&quot;id&quot;:&quot;model_doc/deplot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deplot&quot;},{&quot;title&quot;:&quot;Donut&quot;,&quot;id&quot;:&quot;model_doc/donut&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/donut&quot;},{&quot;title&quot;:&quot;FLAVA&quot;,&quot;id&quot;:&quot;model_doc/flava&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flava&quot;},{&quot;title&quot;:&quot;GIT&quot;,&quot;id&quot;:&quot;model_doc/git&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/git&quot;},{&quot;title&quot;:&quot;GroupViT&quot;,&quot;id&quot;:&quot;model_doc/groupvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/groupvit&quot;},{&quot;title&quot;:&quot;IDEFICS&quot;,&quot;id&quot;:&quot;model_doc/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/idefics&quot;},{&quot;title&quot;:&quot;InstructBLIP&quot;,&quot;id&quot;:&quot;model_doc/instructblip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/instructblip&quot;},{&quot;title&quot;:&quot;LayoutLM&quot;,&quot;id&quot;:&quot;model_doc/layoutlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlm&quot;},{&quot;title&quot;:&qu
ot;LayoutLMV2&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv2&quot;},{&quot;title&quot;:&quot;LayoutLMV3&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv3&quot;},{&quot;title&quot;:&quot;LayoutXLM&quot;,&quot;id&quot;:&quot;model_doc/layoutxlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutxlm&quot;},{&quot;title&quot;:&quot;LiLT&quot;,&quot;id&quot;:&quot;model_doc/lilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lilt&quot;},{&quot;title&quot;:&quot;LXMERT&quot;,&quot;id&quot;:&quot;model_doc/lxmert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lxmert&quot;},{&quot;title&quot;:&quot;MatCha&quot;,&quot;id&quot;:&quot;model_doc/matcha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/matcha&quot;},{&quot;title&quot;:&quot;MGP-STR&quot;,&quot;id&quot;:&quot;model_doc/mgp-str&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mgp-str&quot;},{&quot;title&quot;:&quot;Nougat&quot;,&quot;id&quot;:&quot;model_doc/nougat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nougat&quot;},{&quot;title&quot;:&quot;OneFormer&quot;,&quot;id&quot;:&quot;model_doc/oneformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/oneformer&quot;},{&quot;title&quot;:&quot;OWL-ViT&quot;,&quot;id&quot;:&quot;model_doc/owlvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/owlvit&quot;},{&quot;title&quot;:&quot;Perceiver&quot;,&quot;id&quot;:&quot;model_doc/perceiver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/perceiver&quot;},{&quot;title&quot;:&quot;Pix2Struct&quot;,&quot;id&quot;:&quot;model_doc/pix2struct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pix2struct&quot;},{&quot;title&quot;:&quot;Segment Anything&quot;,&quot;id&quot;:&quot;model_doc/sam&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sam&quot;},{&quot;title&quot;:&quot;Speech Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/speech-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder&quot;},{&quot;title&quot;:&quot;TAPAS&quot;,&quot;id&quot;:&quot;model_doc/tapas&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapas&quot;},{&quot;title&quot;:&quot;TrOCR&quot;,&quot;id&quot;:&quot;model_doc/trocr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trocr&quot;},{&quot;title&quot;:&quot;TVLT&quot;,&quot;id&quot;:&quot;model_doc/tvlt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tvlt&quot;},{&quot;title&quot;:&quot;ViLT&quot;,&quot;id&quot;:&quot;model_doc/vilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vilt&quot;},{&quot;title&quot;:&quot;Vision Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/vision-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder&quot;},{&quot;title&quot;:&quot;Vision Text Dual 
Encoder&quot;,&quot;id&quot;:&quot;model_doc/vision-text-dual-encoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder&quot;},{&quot;title&quot;:&quot;VisualBERT&quot;,&quot;id&quot;:&quot;model_doc/visual_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/visual_bert&quot;},{&quot;title&quot;:&quot;X-CLIP&quot;,&quot;id&quot;:&quot;model_doc/xclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xclip&quot;}]},{&quot;title&quot;:&quot;Reinforcement learning models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Decision Transformer&quot;,&quot;id&quot;:&quot;model_doc/decision_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/decision_transformer&quot;},{&quot;title&quot;:&quot;Trajectory Transformer&quot;,&quot;id&quot;:&quot;model_doc/trajectory_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer&quot;}]},{&quot;title&quot;:&quot;Time series models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Autoformer&quot;,&quot;id&quot;:&quot;model_doc/autoformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/autoformer&quot;},{&quot;title&quot;:&quot;Informer&quot;,&quot;id&quot;:&quot;model_doc/informer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/informer&quot;},{&quot;title&quot;:&quot;Time Series Transformer&quot;,&quot;id&quot;:&quot;model_doc/time_series_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/time_series_transformer&quot;}]},{&quot;title&quot;:&quot;Graph models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Graphormer&quot;,&quot;id&quot;:&quot;model_doc/graphormer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/graphormer&quot;}]}]},{&quot;title&quot;:&quot;Internal Helpers&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Custom Layers and Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/modeling_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/modeling_utils&quot;},{&quot;title&quot;:&quot;Utilities for pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/pipelines_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/pipelines_utils&quot;},{&quot;title&quot;:&quot;Utilities for Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/tokenization_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/tokenization_utils&quot;},{&quot;title&quot;:&quot;Utilities for Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/trainer_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/trainer_utils&quot;},{&quot;title&quot;:&quot;Utilities for Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/generation_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/generation_utils&quot;},{&quot;title&quot;:&quot;Utilities for Image Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/image_processing_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/image_processing_utils&quot;},{&quot;title&quot;:&quot;Utilities for Audio 
processing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/audio_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/audio_utils&quot;},{&quot;title&quot;:&quot;General Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/file_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/file_utils&quot;},{&quot;title&quot;:&quot;Utilities for Time Series&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/time_series_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/time_series_utils&quot;}]}]}],&quot;chapterId&quot;:&quot;model_doc/wavlm&quot;,&quot;docType&quot;:&quot;docs&quot;,&quot;isLoggedIn&quot;:false,&quot;lang&quot;:&quot;en&quot;,&quot;langs&quot;:[&quot;de&quot;,&quot;en&quot;,&quot;es&quot;,&quot;fr&quot;,&quot;it&quot;,&quot;ko&quot;,&quot;pt&quot;,&quot;zh&quot;],&quot;library&quot;:&quot;transformers&quot;,&quot;theme&quot;:&quot;light&quot;,&quot;version&quot;:&quot;v4.34.0&quot;,&quot;versions&quot;:[{&quot;version&quot;:&quot;main&quot;},{&quot;version&quot;:&quot;v4.34.0&quot;},{&quot;version&quot;:&quot;v4.33.3&quot;},{&quot;version&quot;:&quot;v4.33.2&quot;},{&quot;version&quot;:&quot;v4.33.0&quot;},{&quot;version&quot;:&quot;v4.32.1&quot;},{&quot;version&quot;:&quot;v4.32.0&quot;},{&quot;version&quot;:&quot;v4.31.0&quot;},{&quot;version&quot;:&quot;v4.30.0&quot;},{&quot;version&quot;:&quot;v4.29.1&quot;},{&quot;version&quot;:&quot;v4.29.0&quot;},{&quot;version&quot;:&quot;v4.28.1&quot;},{&quot;version&quot;:&quot;v4.28.0&quot;},{&quot;version&quot;:&quot;v4.27.2&quot;},{&quot;version&quot;:&quot;v4.27.1&quot;},{&quot;version&quot;:&quot;v4.27.0&quot;},{&quot;version&quot;:&quot;v4.26.1&quot;},{&quot;version&quot;:&quot;v4.26.0&quot;},{&quot;version&quot;:&quot;v4.25.1&quot;},{&quot;version&quot;:&quot;v4.24.0&quot;},{&quot;version&quot;:&quot;v4.23.1&quot;},{&quot;version&quot;:&quot;v4.23.0&quot;},{&quot;version&quot;:&quot;v4.22.2&quot;},{&quot;version&quot;:&quot;v4.22.1&quot;},{&quot;version&quot;:&quot;v4.22.0&quot;},{&quot;version&quot;:&quot;v4.21.3&quot;},{&quot;version&quot;:&quot;v4.21.2&quot;},{&quot;version&quot;:&quot;v4.21.1&quot;},{&quot;version&quot;:&quot;v4.21.0&quot;},{&quot;version&quot;:&quot;v4.20.1&quot;},{&quot;version&quot;:&quot;v4.20.0&quot;},{&quot;version&quot;:&quot;v4.19.4&quot;},{&quot;version&quot;:&quot;v4.19.3&quot;},{&quot;version&quot;:&quot;v4.19.2&quot;},{&quot;version&quot;:&quot;v4.19.0&quot;},{&quot;version&quot;:&quot;v4.18.0&quot;},{&quot;version&quot;:&quot;v4.17.0&quot;},{&quot;version&quot;:&quot;v4.16.2&quot;},{&quot;version&quot;:&quot;v4.16.1&quot;},{&quot;version&quot;:&quot;v4.16.0&quot;},{&quot;version&quot;:&quot;v4.15.0&quot;},{&quot;version&quot;:&quot;v4.14.1&quot;},{&quot;version&quot;:&quot;v4.13.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.5&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.4&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.10.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1
fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Switch between documentation themes </div></div></div> <div class="flex items-center space-x-2.5"><a href="/join"><button class="rounded-lg bg-white bg-gradient-to-br from-gray-100/20 to-gray-200/60 py-1.5 px-5 font-semibold text-gray-700 shadow-sm ring-1 ring-gray-300/60 hover:to-gray-100/70 hover:ring-gray-300/30 active:shadow-inner">Sign Up</button></a> <p class="text-gray-500 dark:text-gray-300">to get started</p></div></div></div> <div class="prose-doc prose relative mx-auto max-w-4xl break-words"> <p></p> <h1 class="relative group"><a id="wavlm" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#wavlm"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-juqje6">WavLM</span></h1> <h2 class="relative group"><a id="overview" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#overview"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1jsw1pg">Overview</span></h2> <p data-svelte-h="svelte-912u8u">The WavLM model was proposed in <a href="https://arxiv.org/abs/2110.13900" rel="nofollow">WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing</a> by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei.</p> <p data-svelte-h="svelte-vfdo9a">The abstract from the paper is the following:</p> <p data-svelte-h="svelte-7d4l1v"><em>Self-supervised learning (SSL) achieves great success in speech recognition, while limited exploration has been attempted for other speech processing tasks. 
As speech signal contains multi-faceted information including speaker identity, paralinguistics, spoken content, etc., learning universal representations for all speech tasks is challenging. In this paper, we propose a new pre-trained model, WavLM, to solve full-stack downstream speech tasks. WavLM is built based on the HuBERT framework, with an emphasis on both spoken content modeling and speaker identity preservation. We first equip the Transformer structure with gated relative position bias to improve its capability on recognition tasks. For better speaker discrimination, we propose an utterance mixing training strategy, where additional overlapped utterances are created unsupervisely and incorporated during model training. Lastly, we scale up the training dataset from 60k hours to 94k hours. WavLM Large achieves state-of-the-art performance on the SUPERB benchmark, and brings significant improvements for various speech processing tasks on their representative benchmarks.</em></p> <p data-svelte-h="svelte-axv494">Tips:</p> <ul data-svelte-h="svelte-10h9vfg"><li>WavLM is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. Please use <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor">Wav2Vec2Processor</a> for the feature extraction.</li> <li>WavLM model can be fine-tuned using connectionist temporal classification (CTC) so the model output has to be decoded using <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2CTCTokenizer">Wav2Vec2CTCTokenizer</a>.</li> <li>WavLM performs especially well on speaker verification, speaker identification, and speaker diarization tasks.</li></ul> <p data-svelte-h="svelte-1dne9eg">Relevant checkpoints can be found under <a href="https://huggingface.co/models?other=wavlm" rel="nofollow">https://huggingface.co/models?other=wavlm</a>.</p> <p data-svelte-h="svelte-1tn43xw">This model was contributed by <a href="https://huggingface.co/patrickvonplaten" rel="nofollow">patrickvonplaten</a>. 
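The first two tips describe the standard CTC recognition workflow. The snippet below is a minimal sketch of that flow; the CTC fine-tuned checkpoint and the small LibriSpeech dummy dataset used here are illustrative choices rather than requirements of the API:

```python
# Minimal ASR sketch with a CTC fine-tuned WavLM checkpoint.
# The checkpoint name below is an assumed example; any WavLM model fine-tuned for CTC works.
import torch
from datasets import load_dataset
from transformers import Wav2Vec2Processor, WavLMForCTC

checkpoint = "patrickvonplaten/wavlm-libri-clean-100h-base-plus"
processor = Wav2Vec2Processor.from_pretrained(checkpoint)  # feature extractor + CTC tokenizer
model = WavLMForCTC.from_pretrained(checkpoint)

# Load a short 16 kHz speech sample; any raw waveform passed as a float array works.
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio = dataset[0]["audio"]

inputs = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding: pick the most likely token per frame, then collapse repeats and blanks.
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
print(transcription[0])
```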
## Documentation resources

- [Audio classification task guide](../tasks/audio_classification)
- [Automatic speech recognition task guide](../tasks/asr)

## WavLMConfig
### class transformers.WavLMConfig

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wavlm/configuration_wavlm.py#L32)

( vocab_size = 32, hidden_size = 768, num_hidden_layers = 12, num_attention_heads = 12, intermediate_size = 3072, hidden_act = 'gelu', hidden_dropout = 0.1, activation_dropout = 0.1, attention_dropout = 0.1, feat_proj_dropout = 0.0, final_dropout = 0.1, layerdrop = 0.1, initializer_range = 0.02, layer_norm_eps = 1e-05, feat_extract_norm = 'group', feat_extract_activation = 'gelu', conv_dim = (512, 512, 512, 512, 512, 512, 512), conv_stride = (5, 2, 2, 2, 2, 2, 2), conv_kernel = (10, 3, 3, 3, 3, 2, 2), conv_bias = False, num_conv_pos_embeddings = 128, num_conv_pos_embedding_groups = 16, num_buckets = 320, max_bucket_distance = 800, do_stable_layer_norm = False, apply_spec_augment = True, mask_time_prob = 0.05, mask_time_length = 10, mask_time_min_masks = 2, mask_feature_prob = 0.0, mask_feature_length = 10, num_codevectors_per_group = 320, num_codevector_groups = 2, contrastive_logits_temperature = 0.1, num_negatives = 100, codevector_dim = 256, proj_codevector_dim = 256, diversity_loss_weight = 0.1, ctc_loss_reduction = 'mean', ctc_zero_infinity = False, use_weighted_layer_sum = False, classifier_proj_size = 256, tdnn_dim = (512, 512, 512, 512, 1500), tdnn_kernel = (5, 3, 3, 1, 1), tdnn_dilation = (1, 2, 3, 1, 1), xvector_output_dim = 512, num_ctc_classes = 80, pad_token_id = 0, bos_token_id = 1, eos_token_id = 2, add_adapter = False, adapter_kernel_size = 3, adapter_stride = 2, num_adapter_layers = 3, output_hidden_size = None, `**kwargs` )
Parameters

- **vocab_size** (`int`, *optional*, defaults to 32) — Vocabulary size of the WavLM model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [WavLMModel](/docs/transformers/v4.34.0/en/model_doc/wavlm#transformers.WavLMModel).
- **hidden_size** (`int`, *optional*, defaults to 768) — Dimensionality of the encoder layers and the pooler layer.
- **num_hidden_layers** (`int`, *optional*, defaults to 12) — Number of hidden layers in the Transformer encoder.
- **num_attention_heads** (`int`, *optional*, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder.
- **intermediate_size** (`int`, *optional*, defaults to 3072) — Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
- **hidden_act** (`str` or `function`, *optional*, defaults to `"gelu"`) — The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported.
- **hidden_dropout** (`float`, *optional*, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
- **activation_dropout** (`float`, *optional*, defaults to 0.1) — The dropout ratio for activations inside the fully connected layer.
- **attention_dropout** (`float`, *optional*, defaults to 0.1) — The dropout ratio for the attention probabilities.
- **final_dropout** (`float`, *optional*, defaults to 0.1) — The dropout probability for the final projection layer of [WavLMForCTC](/docs/transformers/v4.34.0/en/model_doc/wavlm#transformers.WavLMForCTC).
- **layerdrop** (`float`, *optional*, defaults to 0.1) — The LayerDrop probability. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more details.
- **initializer_range** (`float`, *optional*, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- **layer_norm_eps** (`float`, *optional*, defaults to 1e-05) — The epsilon used by the layer normalization layers.
- **feat_extract_norm** (`str`, *optional*, defaults to `"group"`) — The norm to be applied to 1D convolutional layers in the feature encoder. One of `"group"` for group normalization of only the first 1D convolutional layer or `"layer"` for layer normalization of all 1D convolutional layers.
- **feat_proj_dropout** (`float`, *optional*, defaults to 0.0) — The dropout probability for the output of the feature encoder.
- **feat_extract_activation** (`str`, *optional*, defaults to `"gelu"`) — The non-linear activation function (function or string) in the 1D convolutional layers of the feature extractor. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported.
- **conv_dim** (`Tuple[int]` or `List[int]`, *optional*, defaults to `(512, 512, 512, 512, 512, 512, 512)`) — A tuple of integers defining the number of input and output channels of each 1D convolutional layer in the feature encoder. The length of *conv_dim* defines the number of 1D convolutional layers.
- **conv_stride** (`Tuple[int]` or `List[int]`, *optional*, defaults to `(5, 2, 2, 2, 2, 2, 2)`) — A tuple of integers defining the stride of each 1D convolutional layer in the feature encoder. The length of *conv_stride* defines the number of convolutional layers and has to match the length of *conv_dim*.
- **conv_kernel** (`Tuple[int]` or `List[int]`, *optional*, defaults to `(10, 3, 3, 3, 3, 2, 2)`) — A tuple of integers defining the kernel size of each 1D convolutional layer in the feature encoder. The length of *conv_kernel* defines the number of convolutional layers and has to match the length of *conv_dim*.
- **conv_bias** (`bool`, *optional*, defaults to `False`) — Whether the 1D convolutional layers have a bias.
viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>num_conv_pos_embeddings</strong> (<code>int</code>, <em>optional</em>, defaults to 128) — Number of convolutional positional embeddings. Defines the kernel size of 1D convolutional positional embeddings layer.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WavLMConfig.num_conv_pos_embedding_groups" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WavLMConfig.num_conv_pos_embedding_groups"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>num_conv_pos_embedding_groups</strong> (<code>int</code>, <em>optional</em>, defaults to 16) — Number of groups of 1D convolutional positional embeddings layer.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WavLMConfig.do_stable_layer_norm" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WavLMConfig.do_stable_layer_norm"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>do_stable_layer_norm</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether to apply <em>stable</em> layer norm architecture of the Transformer encoder. 
<code>do_stable_layer_norm is True</code> corresponds to applying layer norm before the attention layer, whereas <code>do_stable_layer_norm is False</code> corresponds to applying layer norm after the attention layer.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WavLMConfig.apply_spec_augment" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WavLMConfig.apply_spec_augment"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>apply_spec_augment</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether to apply <em>SpecAugment</em> data augmentation to the outputs of the feature encoder. For reference see <a href="https://arxiv.org/abs/1904.08779" rel="nofollow">SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition</a>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WavLMConfig.mask_time_prob" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WavLMConfig.mask_time_prob"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>mask_time_prob</strong> (<code>float</code>, <em>optional</em>, defaults to 0.05) — Propability of each feature vector along the time axis to be chosen as the start of the vector span to be masked. Approximately <code>mask_time_prob * sequence_length // mask_time_length</code> feature vectors will be masked along the time axis. 
This is only relevant if `apply_spec_augment is True`.
- **mask_time_length** (`int`, *optional*, defaults to 10) — Length of vector span along the time axis.
- **mask_time_min_masks** (`int`, *optional*, defaults to 2) — The minimum number of masks of length `mask_time_length` generated along the time axis, each time step, irrespective of `mask_time_prob`. Only relevant if `mask_time_prob * len(time_axis) / mask_time_length < mask_time_min_masks`.
- **mask_feature_prob** (`float`, *optional*, defaults to 0.0) — Probability of each feature vector along the feature axis to be chosen as the start of the vector span to be masked. Approximately `mask_feature_prob * hidden_size // mask_feature_length` feature vectors will be masked along the feature axis. This is only relevant if `apply_spec_augment is True`.
- **mask_feature_length** (`int`, *optional*, defaults to 10) — Length of vector span along the feature axis.
- **num_codevectors_per_group** (`int`, *optional*, defaults to 320) — Number of entries in each quantization codebook (group).
- **num_codevector_groups** (`int`, *optional*, defaults to 2) — Number of codevector groups for product codevector quantization.
- **contrastive_logits_temperature** (`float`, *optional*, defaults to 0.1) — The temperature *kappa* in the contrastive loss.
- **num_negatives** (`int`, *optional*, defaults to 100) — Number of negative samples for the contrastive loss.
- **codevector_dim** (`int`, *optional*, defaults to 256) — Dimensionality of the quantized feature vectors.
- **proj_codevector_dim** (`int`, *optional*, defaults to 256) — Dimensionality of the final projection of both the quantized and the transformer features.
- **diversity_loss_weight** (`float`, *optional*, defaults to 0.1) — The weight of the codebook diversity loss component.
- **ctc_loss_reduction** (`str`, *optional*, defaults to `"mean"`) — Specifies the reduction to apply to the output of `torch.nn.CTCLoss`. Only relevant when training an instance of [WavLMForCTC](/docs/transformers/v4.34.0/en/model_doc/wavlm#transformers.WavLMForCTC).
- **ctc_zero_infinity** (`bool`, *optional*, defaults to `False`) — Whether to zero infinite losses and the associated gradients of `torch.nn.CTCLoss`. Infinite losses mainly occur when the inputs are too short to be aligned to the targets. Only relevant when training an instance of [WavLMForCTC](/docs/transformers/v4.34.0/en/model_doc/wavlm#transformers.WavLMForCTC).
- **use_weighted_layer_sum** (`bool`, *optional*, defaults to `False`) — Whether to use a weighted average of layer outputs with learned weights. Only relevant when using an instance of [WavLMForSequenceClassification](/docs/transformers/v4.34.0/en/model_doc/wavlm#transformers.WavLMForSequenceClassification).
- **classifier_proj_size** (`int`, *optional*, defaults to 256) — Dimensionality of the projection before token mean-pooling for classification.
- **tdnn_dim** (`Tuple[int]` or `List[int]`, *optional*, defaults to `(512, 512, 512, 512, 1500)`) — A tuple of integers defining the number of output channels of each 1D convolutional layer in the *TDNN* module of the *XVector* model. The length of *tdnn_dim* defines the number of *TDNN* layers.
- **tdnn_kernel** (`Tuple[int]` or `List[int]`, *optional*, defaults to `(5, 3, 3, 1, 1)`) — A tuple of integers defining the kernel size of each 1D convolutional layer in the *TDNN* module of the *XVector* model. The length of *tdnn_kernel* has to match the length of *tdnn_dim*.
- **tdnn_dilation** (`Tuple[int]` or `List[int]`, *optional*, defaults to `(1, 2, 3, 1, 1)`) — A tuple of integers defining the dilation factor of each 1D convolutional layer in the *TDNN* module of the *XVector* model. The length of *tdnn_dilation* has to match the length of *tdnn_dim*.
- **xvector_output_dim** (`int`, *optional*, defaults to 512) — Dimensionality of the *XVector* embedding vectors.
- **add_adapter** (`bool`, *optional*, defaults to `False`) — Whether a convolutional network should be stacked on top of the Wav2Vec2 encoder. Can be very useful for warm-starting Wav2Vec2 for SpeechEncoderDecoder models.
- **adapter_kernel_size** (`int`, *optional*, defaults to 3) — Kernel size of the convolutional layers in the adapter network. Only relevant if `add_adapter is True`.
- **adapter_stride** (`int`, *optional*, defaults to 2) — Stride of the convolutional layers in the adapter network. Only relevant if `add_adapter is True`.
- **num_adapter_layers** (`int`, *optional*, defaults to 3) — Number of convolutional layers that should be used in the adapter network. Only relevant if `add_adapter is True`.
- **output_hidden_size** (`int`, *optional*) — Dimensionality of the encoder output layer. If not defined, this defaults to *hidden_size*. Only relevant if `add_adapter is True`.
This is the configuration class to store the configuration of a [WavLMModel](/docs/transformers/v4.34.0/en/model_doc/wavlm#transformers.WavLMModel). It is used to instantiate a WavLM model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the WavLM [microsoft/wavlm-base](https://huggingface.co/microsoft/wavlm-base) architecture.

Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information.
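Because the class inherits from PretrainedConfig, it can also be saved to disk and re-loaded like any other configuration object. The snippet below is only an illustrative sketch (the directory name is arbitrary and not part of the reference):

```python
from transformers import WavLMConfig

config = WavLMConfig()                     # default WavLM configuration
config.save_pretrained("./wavlm-config")   # writes config.json to this (arbitrary) directory

# Reload it later, or fetch the configuration of a hosted checkpoint by name instead.
reloaded = WavLMConfig.from_pretrained("./wavlm-config")
print(reloaded.mask_time_length)  # 10, the default documented above
```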
Example:

```python
>>> from transformers import WavLMConfig, WavLMModel

>>> # Initializing a WavLM facebook/wavlm-base-960h style configuration
>>> configuration = WavLMConfig()

>>> # Initializing a model (with random weights) from the facebook/wavlm-base-960h style configuration
>>> model = WavLMModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
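Beyond the defaults, any of the arguments documented above can be overridden when constructing the configuration. The following sketch is illustrative only and the specific values are hypothetical, chosen to show the SpecAugment masking and adapter options from the parameter table:

```python
from transformers import WavLMConfig, WavLMModel

# Hypothetical settings for illustration; the defaults are listed in the parameter table above.
configuration = WavLMConfig(
    apply_spec_augment=True,
    mask_time_length=10,       # SpecAugment span length along the time axis
    mask_feature_prob=0.05,    # enable masking along the feature axis (default 0.0)
    mask_feature_length=10,
    add_adapter=True,          # stack a small convolutional adapter on top of the encoder
    num_adapter_layers=3,
    adapter_kernel_size=3,
    adapter_stride=2,
)

# As in the example above, this builds a randomly initialized model from the configuration.
model = WavLMModel(configuration)
```

Note that the adapter-related arguments only take effect here because `add_adapter=True`; with the default `add_adapter=False` they are ignored.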
## WavLMModel

### class transformers.WavLMModel

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wavlm/modeling_wavlm.py#L1122)

`( config: WavLMConfig )`

Parameters:
- **config** ([WavLMConfig](/docs/transformers/v4.34.0/en/model_doc/wavlm#transformers.WavLMConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

The bare WavLM Model transformer outputting raw hidden-states without any specific head on top. WavLM was proposed in [WavLM: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Xiangzhan Yu, Furu Wei.

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, etc.).

This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
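As a minimal sketch of the points above (assuming `torch` and `transformers` are installed, and using the [microsoft/wavlm-base](https://huggingface.co/microsoft/wavlm-base) checkpoint referenced earlier), the pretrained bare model can be loaded with `from_pretrained()` and then handled like any other PyTorch module:

```python
import torch
from transformers import WavLMModel

# Load pretrained weights; WavLMModel(config) alone would only build a randomly initialized model.
model = WavLMModel.from_pretrained("microsoft/wavlm-base")

# It is a regular torch.nn.Module, so the usual PyTorch idioms apply.
model.eval()
num_params = sum(p.numel() for p in model.parameters())
print(f"{num_params / 1e6:.1f}M parameters")
```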
#### forward

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wavlm/modeling_wavlm.py#L1208)

`( input_values: typing.Optional[torch.Tensor], attention_mask: typing.Optional[torch.Tensor] = None, mask_time_indices: typing.Optional[torch.FloatTensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None )` → [transformers.modeling_outputs.Wav2Vec2BaseModelOutput](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.modeling_outputs.Wav2Vec2BaseModelOutput) or `tuple(torch.FloatTensor)`

Parameters:

- **input_values** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`) — Float values of input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip install soundfile`). To prepare the array into `input_values`, the [AutoProcessor](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoProcessor) should be used for padding and conversion into a tensor of type `torch.FloatTensor`. See [Wav2Vec2Processor.\_\_call\_\_()](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor.__call__) for details.
- **attention_mask** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.

  [What are attention masks?](../glossary#attention-mask)

  `attention_mask` should only be passed if the corresponding processor has `config.return_attention_mask == True`. For all models whose processor has `config.return_attention_mask == False`, `attention_mask` should **not** be passed to avoid degraded performance when doing batched inference. For such models, `input_values` should simply be padded with 0 and passed without `attention_mask`. Be aware that these models also yield slightly different results depending on whether `input_values` is padded or not.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.
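As a hedged sketch of how these inputs are typically prepared, the snippet below batches two dummy waveforms through a feature extractor and the model; the random audio and the choice of the `microsoft/wavlm-base` feature extractor are assumptions for illustration, not part of the reference:

```python
import numpy as np
import torch
from transformers import AutoFeatureExtractor, WavLMModel

feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/wavlm-base")
model = WavLMModel.from_pretrained("microsoft/wavlm-base")

# Two dummy mono waveforms of different lengths at 16 kHz, standing in for arrays
# loaded from .flac/.wav files (e.g. via soundfile).
speech = [np.random.randn(16000).astype(np.float32), np.random.randn(24000).astype(np.float32)]

# Padding the batch produces `input_values`; an `attention_mask` is only returned
# (and should only be passed) when the feature extractor is configured with
# `return_attention_mask=True`, as explained in the note above.
inputs = feature_extractor(speech, sampling_rate=16000, padding=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
print(len(outputs.hidden_states))       # embeddings output + one entry per layer
```

The returned fields are described below.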
returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-su8bwa">The <a href="/docs/transformers/v4.34.0/en/model_doc/wavlm#transformers.WavLMModel">WavLMModel</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.WavLMModel.forward.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WavLMModel.forward.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> 
transformers <span class="hljs-keyword">import</span> AutoProcessor, WavLMModel <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> datasets <span class="hljs-keyword">import</span> load_dataset <span class="hljs-meta">&gt;&gt;&gt; </span>dataset = load_dataset(<span class="hljs-string">"hf-internal-testing/librispeech_asr_demo"</span>, <span class="hljs-string">"clean"</span>, split=<span class="hljs-string">"validation"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>dataset = dataset.sort(<span class="hljs-string">"id"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>sampling_rate = dataset.features[<span class="hljs-string">"audio"</span>].sampling_rate <span class="hljs-meta">&gt;&gt;&gt; </span>processor = AutoProcessor.from_pretrained(<span class="hljs-string">"patrickvonplaten/wavlm-libri-clean-100h-base-plus"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = WavLMModel.from_pretrained(<span class="hljs-string">"patrickvonplaten/wavlm-libri-clean-100h-base-plus"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># audio file is decoded on the fly</span> <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = processor(dataset[<span class="hljs-number">0</span>][<span class="hljs-string">"audio"</span>][<span class="hljs-string">"array"</span>], sampling_rate=sampling_rate, return_tensors=<span class="hljs-string">"pt"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">with</span> torch.no_grad(): <span class="hljs-meta">... </span> outputs = model(**inputs) <span class="hljs-meta">&gt;&gt;&gt; </span>last_hidden_states = outputs.last_hidden_state <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-built_in">list</span>(last_hidden_states.shape) [<span class="hljs-number">1</span>, <span class="hljs-number">292</span>, <span class="hljs-number">768</span>]</pre></div></div></div></div> <h2 class="relative group"><a id="transformers.WavLMForCTC" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WavLMForCTC"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1d0n9af">WavLMForCTC</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.WavLMForCTC"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 
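The per-layer `hidden_states` and `attentions` described in the return section above can also be requested explicitly. A minimal sketch, reusing the `model` and `inputs` from the example above (the number of returned tensors depends on the checkpoint's configuration):

```python
>>> # request per-layer hidden states and attention weights in addition
>>> # to the last hidden state (reusing `model` and `inputs` from above)
>>> with torch.no_grad():
...     outputs = model(**inputs, output_hidden_states=True, output_attentions=True)

>>> # tuple with one tensor for the embedding output plus one per encoder layer,
>>> # each of shape (batch_size, sequence_length, hidden_size)
>>> all_hidden_states = outputs.hidden_states
>>> # tuple with one attention tensor per layer, each of shape
>>> # (batch_size, num_heads, sequence_length, sequence_length)
>>> all_attentions = outputs.attentions
```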
## WavLMForCTC

**class transformers.WavLMForCTC**

`( config, target_lang: typing.Optional[str] = None )`

Parameters:

- **config** ([WavLMConfig](/docs/transformers/v4.34.0/en/model_doc/wavlm#transformers.WavLMConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

WavLM Model with a `language modeling` head on top for Connectionist Temporal Classification (CTC). WavLM was proposed in [WavLM: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Xiangzhan Yu, Furu Wei.

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).

This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

**forward**

`( input_values: typing.Optional[torch.Tensor], attention_mask: typing.Optional[torch.Tensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None, labels: typing.Optional[torch.Tensor] = None )` → [transformers.modeling_outputs.CausalLMOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.CausalLMOutput) or `tuple(torch.FloatTensor)`

Parameters:

- **input_values** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`) — Float values of the input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip install soundfile`). To prepare the array into `input_values`, the [AutoProcessor](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoProcessor) should be used for padding and conversion into a tensor of type `torch.FloatTensor`. See [Wav2Vec2Processor.__call__()](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor.__call__) for details.
- **attention_mask** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) `attention_mask` should only be passed if the corresponding processor has `config.return_attention_mask == True`. For all models whose processor has `config.return_attention_mask == False`, `attention_mask` should **not** be passed to avoid degraded performance when doing batched inference. For such models, `input_values` should simply be padded with 0 and passed without `attention_mask`. Be aware that these models also yield slightly different results depending on whether `input_values` is padded or not.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.
- **labels** (`torch.LongTensor` of shape `(batch_size, target_length)`, *optional*) — Labels for connectionist temporal classification. Note that `target_length` has to be smaller or equal to the sequence length of the output logits. Indices are selected in `[-100, 0, ..., config.vocab_size - 1]`. All labels set to `-100` are ignored (masked); the loss is only computed for labels in `[0, ..., config.vocab_size - 1]`.

Returns: [transformers.modeling_outputs.CausalLMOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.CausalLMOutput) or `tuple(torch.FloatTensor)`

A [transformers.modeling_outputs.CausalLMOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.CausalLMOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([WavLMConfig](/docs/transformers/v4.34.0/en/model_doc/wavlm#transformers.WavLMConfig)) and inputs.

- **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) — Language modeling loss (for next-token prediction).
- **logits** (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The [WavLMForCTC](/docs/transformers/v4.34.0/en/model_doc/wavlm#transformers.WavLMForCTC) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:

```python
>>> from transformers import AutoProcessor, WavLMForCTC
>>> from datasets import load_dataset
>>> import torch

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> processor = AutoProcessor.from_pretrained("patrickvonplaten/wavlm-libri-clean-100h-base-plus")
>>> model = WavLMForCTC.from_pretrained("patrickvonplaten/wavlm-libri-clean-100h-base-plus")

>>> # audio file is decoded on the fly
>>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_ids = torch.argmax(logits, dim=-1)

>>> # transcribe speech
>>> transcription = processor.batch_decode(predicted_ids)
>>> transcription[0]
'mister quilter is the aposle of the middle classes and we are glad to welcome his gospel'

>>> inputs["labels"] = processor(text=dataset[0]["text"], return_tensors="pt").input_ids

>>> # compute loss
>>> loss = model(**inputs).loss
>>> round(loss.item(), 2)
12.51
```
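For batched inference, the guidance on `attention_mask` above can be followed by letting the processor handle the padding. A minimal sketch, reusing the `processor`, `model`, `dataset`, and `sampling_rate` from the example above; whether an `attention_mask` is produced depends on the checkpoint's feature-extractor configuration:

```python
>>> # pad a batch of two waveforms to the same length via the processor;
>>> # it only returns an attention_mask when the underlying feature
>>> # extractor is configured with return_attention_mask=True
>>> batch = [dataset[0]["audio"]["array"], dataset[1]["audio"]["array"]]
>>> inputs = processor(batch, sampling_rate=sampling_rate, return_tensors="pt", padding=True)

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_ids = torch.argmax(logits, dim=-1)
>>> transcriptions = processor.batch_decode(predicted_ids)
```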
0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">WavLMForSequenceClassification</span></span></h3> <a id="transformers.WavLMForSequenceClassification" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.WavLMForSequenceClassification"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wavlm/modeling_wavlm.py#L1433" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WavLMForSequenceClassification.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WavLMForSequenceClassification.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 
1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/wavlm#transformers.WavLMConfig">WavLMConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-fuh48f">WavLM Model with a sequence classification head on top (a linear layer over the pooled output) for tasks like SUPERB Keyword Spotting.</p> <p data-svelte-h="svelte-k80xuh">WavLM was proposed in <a href="https://arxiv.org/abs/2110.13900" rel="nofollow">WavLM: Unified Speech Representation Learning with Labeled and Unlabeled Data</a> by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Xiangzhan Yu, Furu Wei.</p> <p data-svelte-h="svelte-1e6yl4y">This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel">PreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving etc.).</p> <p data-svelte-h="svelte-68lg8f">This model is a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.WavLMForSequenceClassification.forward"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>forward</span></h4> <a 
id="transformers.WavLMForSequenceClassification.forward" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.WavLMForSequenceClassification.forward"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wavlm/modeling_wavlm.py#L1481" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_values<span class="opacity-60">: typing.Optional[torch.Tensor]</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">labels<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput">transformers.modeling_outputs.SequenceClassifierOutput</a> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 
hover:ring-black hover:ring-2">Expand 6 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WavLMForSequenceClassification.forward.input_values" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WavLMForSequenceClassification.forward.input_values"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_values</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>) — Float values of input raw speech waveform. Values can be obtained by loading a <code>.flac</code> or <code>.wav</code> audio file into an array of type <code>List[float]</code> or a <code>numpy.ndarray</code>, <em>e.g.</em> via the soundfile library (<code>pip install soundfile</code>). To prepare the array into <code>input_values</code>, the <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoProcessor">AutoProcessor</a> should be used for padding and conversion into a tensor of type <code>torch.FloatTensor</code>. 
See <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor.__call__">Wav2Vec2Processor.<strong>call</strong>()</a> for details.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WavLMForSequenceClassification.forward.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WavLMForSequenceClassification.forward.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a></p> <div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400"> <p><code>attention_mask</code> should only be passed if the corresponding processor has <code>config.return_attention_mask == True</code>. For all models whose processor has <code>config.return_attention_mask == False</code>, <code>attention_mask</code> should <strong>not</strong> be passed to avoid degraded performance when doing batched inference. For such models <code>input_values</code> should simply be padded with 0 and passed without <code>attention_mask</code>. 
Be aware that these models also yield slightly different results depending on whether <code>input_values</code> is padded or not.</p> </div></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WavLMForSequenceClassification.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WavLMForSequenceClassification.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. See <code>attentions</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WavLMForSequenceClassification.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WavLMForSequenceClassification.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. 
See <code>hidden_states</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WavLMForSequenceClassification.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WavLMForSequenceClassification.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WavLMForSequenceClassification.forward.labels" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WavLMForSequenceClassification.forward.labels"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>labels</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size,)</code>, <em>optional</em>) — Labels for computing the sequence classification/regression loss. Indices should be in <code>[0, ..., config.num_labels - 1]</code>. 
If <code>config.num_labels == 1</code> a regression loss is computed (Mean-Square loss), If <code>config.num_labels &gt; 1</code> a classification loss is computed (Cross-Entropy).</span></span> </li></ul> <div id="transformers.WavLMForSequenceClassification.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput">transformers.modeling_outputs.SequenceClassifierOutput</a> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput">transformers.modeling_outputs.SequenceClassifierOutput</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/wavlm#transformers.WavLMConfig">WavLMConfig</a>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<code>torch.FloatTensor</code> of shape <code>(1,)</code>, <em>optional</em>, returned when <code>labels</code> is provided) — Classification (or regression if config.num_labels==1) loss.</p> </li> <li> <p><strong>logits</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, config.num_labels)</code>) — Classification (or regression if config.num_labels==1) scores (before SoftMax).</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-1nc540">The <a href="/docs/transformers/v4.34.0/en/model_doc/wavlm#transformers.WavLMForSequenceClassification">WavLMForSequenceClassification</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group 
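Before the full example below, here is a minimal sketch of how the boolean flags documented above affect the returned output. It uses a randomly initialised model built from the default `WavLMConfig` purely for illustration (an assumption, not part of the library's own example); real usage would load a checkpoint with `from_pretrained` as shown next.

```python
import torch
from transformers import WavLMConfig, WavLMForSequenceClassification

# Randomly initialised model, used only to illustrate the flags above (assumption).
model = WavLMForSequenceClassification(WavLMConfig(num_labels=2))
model.eval()

dummy_waveform = torch.randn(1, 16000)  # 1 second of fake 16 kHz audio

with torch.no_grad():
    outputs = model(
        dummy_waveform,
        output_hidden_states=True,  # populate outputs.hidden_states
        output_attentions=True,     # populate outputs.attentions
        return_dict=True,           # return a SequenceClassifierOutput instead of a tuple
    )

print(outputs.logits.shape)        # (1, 2): one score per label
print(len(outputs.hidden_states))  # num_hidden_layers + 1
print(len(outputs.attentions))     # num_hidden_layers
```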
rounded-md"><a id="transformers.WavLMForSequenceClassification.forward.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WavLMForSequenceClassification.forward.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoFeatureExtractor, WavLMForSequenceClassification <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> datasets <span class="hljs-keyword">import</span> load_dataset <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span>dataset = load_dataset(<span class="hljs-string">"hf-internal-testing/librispeech_asr_demo"</span>, <span class="hljs-string">"clean"</span>, split=<span class="hljs-string">"validation"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>dataset = dataset.sort(<span class="hljs-string">"id"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>sampling_rate = dataset.features[<span class="hljs-string">"audio"</span>].sampling_rate <span class="hljs-meta">&gt;&gt;&gt; </span>feature_extractor = AutoFeatureExtractor.from_pretrained(<span class="hljs-string">"patrickvonplaten/wavlm-libri-clean-100h-base-plus"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = WavLMForSequenceClassification.from_pretrained(<span 
class="hljs-string">"patrickvonplaten/wavlm-libri-clean-100h-base-plus"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># audio file is decoded on the fly</span> <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = feature_extractor(dataset[<span class="hljs-number">0</span>][<span class="hljs-string">"audio"</span>][<span class="hljs-string">"array"</span>], sampling_rate=sampling_rate, return_tensors=<span class="hljs-string">"pt"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">with</span> torch.no_grad(): <span class="hljs-meta">... </span> logits = model(**inputs).logits <span class="hljs-meta">&gt;&gt;&gt; </span>predicted_class_ids = torch.argmax(logits, dim=-<span class="hljs-number">1</span>).item() <span class="hljs-meta">&gt;&gt;&gt; </span>predicted_label = model.config.id2label[predicted_class_ids] <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># compute loss - target_label is e.g. "down"</span> <span class="hljs-meta">&gt;&gt;&gt; </span>target_label = model.config.id2label[<span class="hljs-number">0</span>] <span class="hljs-meta">&gt;&gt;&gt; </span>inputs[<span class="hljs-string">"labels"</span>] = torch.tensor([model.config.label2id[target_label]]) <span class="hljs-meta">&gt;&gt;&gt; </span>loss = model(**inputs).loss</pre></div></div></div></div> <h2 class="relative group"><a id="transformers.WavLMForAudioFrameClassification" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WavLMForAudioFrameClassification"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-4dc1ko">WavLMForAudioFrameClassification</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.WavLMForAudioFrameClassification"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 
## WavLMForAudioFrameClassification

### class transformers.WavLMForAudioFrameClassification

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wavlm/modeling_wavlm.py#L1558)

( config )

Parameters

- **config** ([WavLMConfig](/docs/transformers/v4.34.0/en/model_doc/wavlm#transformers.WavLMConfig)): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

WavLM Model with a frame classification head on top for tasks like Speaker Diarization.

WavLM was proposed in [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Xiangzhan Yu, Furu Wei.

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).

This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
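The size of the frame-classification head follows `config.num_labels` (one logit per frame for each class, e.g. each speaker to detect). Below is a minimal sketch of sizing the head when starting from a backbone-only checkpoint; the checkpoint name and the choice of two speakers are illustrative assumptions, and the newly initialised head would need fine-tuning. The example further below instead uses a checkpoint that already ships with a diarization head.

```python
from transformers import WavLMForAudioFrameClassification

# Backbone-only checkpoint with a freshly initialised frame-classification head.
# "microsoft/wavlm-base-plus" and two speakers are illustrative choices, not from the docs.
model = WavLMForAudioFrameClassification.from_pretrained(
    "microsoft/wavlm-base-plus",
    num_labels=2,  # one output per speaker, per frame
)

print(model.classifier.out_features)  # 2: the head emits two logits for every output frame
```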
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wavlm/modeling_wavlm.py#L1602" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_values<span class="opacity-60">: typing.Optional[torch.Tensor]</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">labels<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.TokenClassifierOutput">transformers.modeling_outputs.TokenClassifierOutput</a> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 6 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 
dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WavLMForAudioFrameClassification.forward.input_values" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WavLMForAudioFrameClassification.forward.input_values"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_values</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>) — Float values of input raw speech waveform. Values can be obtained by loading a <code>.flac</code> or <code>.wav</code> audio file into an array of type <code>List[float]</code> or a <code>numpy.ndarray</code>, <em>e.g.</em> via the soundfile library (<code>pip install soundfile</code>). To prepare the array into <code>input_values</code>, the <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoProcessor">AutoProcessor</a> should be used for padding and conversion into a tensor of type <code>torch.FloatTensor</code>. See <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor.__call__">Wav2Vec2Processor.<strong>call</strong>()</a> for details.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WavLMForAudioFrameClassification.forward.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WavLMForAudioFrameClassification.forward.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing convolution and attention on padding token indices. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a></p> <div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400"> <p><code>attention_mask</code> should only be passed if the corresponding processor has <code>config.return_attention_mask == True</code>. For all models whose processor has <code>config.return_attention_mask == False</code>, <code>attention_mask</code> should <strong>not</strong> be passed to avoid degraded performance when doing batched inference. For such models <code>input_values</code> should simply be padded with 0 and passed without <code>attention_mask</code>. Be aware that these models also yield slightly different results depending on whether <code>input_values</code> is padded or not.</p> </div></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WavLMForAudioFrameClassification.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WavLMForAudioFrameClassification.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. 
See <code>attentions</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WavLMForAudioFrameClassification.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WavLMForAudioFrameClassification.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. See <code>hidden_states</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WavLMForAudioFrameClassification.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WavLMForAudioFrameClassification.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WavLMForAudioFrameClassification.forward.labels" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WavLMForAudioFrameClassification.forward.labels"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" 
preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>labels</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size,)</code>, <em>optional</em>) — Labels for computing the sequence classification/regression loss. Indices should be in <code>[0, ..., config.num_labels - 1]</code>. If <code>config.num_labels == 1</code> a regression loss is computed (Mean-Square loss), If <code>config.num_labels &gt; 1</code> a classification loss is computed (Cross-Entropy).</span></span> </li></ul> <div id="transformers.WavLMForAudioFrameClassification.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.TokenClassifierOutput">transformers.modeling_outputs.TokenClassifierOutput</a> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.TokenClassifierOutput">transformers.modeling_outputs.TokenClassifierOutput</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/wavlm#transformers.WavLMConfig">WavLMConfig</a>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<code>torch.FloatTensor</code> of shape <code>(1,)</code>, <em>optional</em>, returned when <code>labels</code> is provided) — Classification loss.</p> </li> <li> <p><strong>logits</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, config.num_labels)</code>) — Classification scores (before SoftMax).</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p 
data-svelte-h="svelte-1dqwkj0">The <a href="/docs/transformers/v4.34.0/en/model_doc/wavlm#transformers.WavLMForAudioFrameClassification">WavLMForAudioFrameClassification</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.WavLMForAudioFrameClassification.forward.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WavLMForAudioFrameClassification.forward.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoFeatureExtractor, WavLMForAudioFrameClassification <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> datasets <span class="hljs-keyword">import</span> load_dataset <span class="hljs-meta">&gt;&gt;&gt; </span><span 
class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span>dataset = load_dataset(<span class="hljs-string">"hf-internal-testing/librispeech_asr_demo"</span>, <span class="hljs-string">"clean"</span>, split=<span class="hljs-string">"validation"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>dataset = dataset.sort(<span class="hljs-string">"id"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>sampling_rate = dataset.features[<span class="hljs-string">"audio"</span>].sampling_rate <span class="hljs-meta">&gt;&gt;&gt; </span>feature_extractor = AutoFeatureExtractor.from_pretrained(<span class="hljs-string">"microsoft/wavlm-base-plus-sd"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = WavLMForAudioFrameClassification.from_pretrained(<span class="hljs-string">"microsoft/wavlm-base-plus-sd"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># audio file is decoded on the fly</span> <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = feature_extractor(dataset[<span class="hljs-number">0</span>][<span class="hljs-string">"audio"</span>][<span class="hljs-string">"array"</span>], return_tensors=<span class="hljs-string">"pt"</span>, sampling_rate=sampling_rate) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">with</span> torch.no_grad(): <span class="hljs-meta">... </span> logits = model(**inputs).logits <span class="hljs-meta">&gt;&gt;&gt; </span>probabilities = torch.sigmoid(logits[<span class="hljs-number">0</span>]) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># labels is a one-hot array of shape (num_frames, num_speakers)</span> <span class="hljs-meta">&gt;&gt;&gt; </span>labels = (probabilities &gt; <span class="hljs-number">0.5</span>).long() <span class="hljs-meta">&gt;&gt;&gt; </span>labels[<span class="hljs-number">0</span>].tolist() [<span class="hljs-number">0</span>, <span class="hljs-number">0</span>]</pre></div></div></div></div> <h2 class="relative group"><a id="transformers.WavLMForXVector" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WavLMForXVector"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-e9bswg">WavLMForXVector</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.WavLMForXVector"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 
text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">WavLMForXVector</span></span></h3> <a id="transformers.WavLMForXVector" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.WavLMForXVector"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wavlm/modeling_wavlm.py#L1722" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WavLMForXVector.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WavLMForXVector.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 
1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/wavlm#transformers.WavLMConfig">WavLMConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1t2j53f">WavLM Model with an XVector feature extraction head on top for tasks like Speaker Verification.</p> <p data-svelte-h="svelte-k80xuh">WavLM was proposed in <a href="https://arxiv.org/abs/2110.13900" rel="nofollow">WavLM: Unified Speech Representation Learning with Labeled and Unlabeled Data</a> by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Xiangzhan Yu, Furu Wei.</p> <p data-svelte-h="svelte-1e6yl4y">This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel">PreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving etc.).</p> <p data-svelte-h="svelte-68lg8f">This model is a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> sub-class. 
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.WavLMForXVector.forward"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>forward</span></h4> <a id="transformers.WavLMForXVector.forward" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.WavLMForXVector.forward"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/wavlm/modeling_wavlm.py#L1784" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_values<span class="opacity-60">: typing.Optional[torch.Tensor]</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span 
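The XVector head pools the encoder outputs into a fixed-size speaker embedding, and speaker verification is usually done by comparing two such embeddings with cosine similarity. Below is a minimal sketch of that workflow; the `microsoft/wavlm-base-plus-sv` checkpoint is an assumption (a commonly used speaker-verification checkpoint), the waveforms are dummies, and the decision threshold has to be tuned on your own data.

```python
import numpy as np
import torch
from transformers import AutoFeatureExtractor, WavLMForXVector

checkpoint = "microsoft/wavlm-base-plus-sv"  # assumption: a speaker-verification checkpoint
feature_extractor = AutoFeatureExtractor.from_pretrained(checkpoint)
model = WavLMForXVector.from_pretrained(checkpoint)

# Two dummy 16 kHz utterances; replace with real recordings of the speakers to compare.
waveforms = [np.zeros(16000, dtype=np.float32), np.zeros(24000, dtype=np.float32)]
inputs = feature_extractor(waveforms, sampling_rate=16000, padding=True, return_tensors="pt")

with torch.no_grad():
    embeddings = model(**inputs).embeddings  # shape (2, xvector_output_dim)

embeddings = torch.nn.functional.normalize(embeddings, dim=-1)
similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=-1)
print(float(similarity))  # higher values suggest the two utterances come from the same speaker
```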
class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">labels<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.XVectorOutput">transformers.modeling_outputs.XVectorOutput</a> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 6 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WavLMForXVector.forward.input_values" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WavLMForXVector.forward.input_values"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_values</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>) — Float values of input raw speech waveform. Values can be obtained by loading a <code>.flac</code> or <code>.wav</code> audio file into an array of type <code>List[float]</code> or a <code>numpy.ndarray</code>, <em>e.g.</em> via the soundfile library (<code>pip install soundfile</code>). 
To prepare the array into <code>input_values</code>, the <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoProcessor">AutoProcessor</a> should be used for padding and conversion into a tensor of type <code>torch.FloatTensor</code>. See <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor.__call__">Wav2Vec2Processor.<strong>call</strong>()</a> for details.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WavLMForXVector.forward.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WavLMForXVector.forward.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a></p> <div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400"> <p><code>attention_mask</code> should only be passed if the corresponding processor has <code>config.return_attention_mask == True</code>. For all models whose processor has <code>config.return_attention_mask == False</code>, <code>attention_mask</code> should <strong>not</strong> be passed to avoid degraded performance when doing batched inference. For such models <code>input_values</code> should simply be padded with 0 and passed without <code>attention_mask</code>. 
Be aware that these models also yield slightly different results depending on whether <code>input_values</code> is padded or not.</p> </div></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WavLMForXVector.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WavLMForXVector.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. See <code>attentions</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WavLMForXVector.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WavLMForXVector.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. 
See <code>hidden_states</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WavLMForXVector.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WavLMForXVector.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WavLMForXVector.forward.labels" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WavLMForXVector.forward.labels"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>labels</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size,)</code>, <em>optional</em>) — Labels for computing the sequence classification/regression loss. Indices should be in <code>[0, ..., config.num_labels - 1]</code>. 
If <code>config.num_labels == 1</code> a regression loss is computed (Mean-Square loss), If <code>config.num_labels &gt; 1</code> a classification loss is computed (Cross-Entropy).</span></span> </li></ul> <div id="transformers.WavLMForXVector.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.XVectorOutput">transformers.modeling_outputs.XVectorOutput</a> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.XVectorOutput">transformers.modeling_outputs.XVectorOutput</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/wavlm#transformers.WavLMConfig">WavLMConfig</a>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<code>torch.FloatTensor</code> of shape <code>(1,)</code>, <em>optional</em>, returned when <code>labels</code> is provided) — Classification loss.</p> </li> <li> <p><strong>logits</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, config.xvector_output_dim)</code>) — Classification hidden states before AMSoftmax.</p> </li> <li> <p><strong>embeddings</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, config.xvector_output_dim)</code>) — Utterance embeddings used for vector similarity-based retrieval.</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-iojtuu">The <a href="/docs/transformers/v4.34.0/en/model_doc/wavlm#transformers.WavLMForXVector">WavLMForXVector</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group 
rounded-md"><a id="transformers.WavLMForXVector.forward.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WavLMForXVector.forward.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoFeatureExtractor, WavLMForXVector <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> datasets <span class="hljs-keyword">import</span> load_dataset <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span>dataset = load_dataset(<span class="hljs-string">"hf-internal-testing/librispeech_asr_demo"</span>, <span class="hljs-string">"clean"</span>, split=<span class="hljs-string">"validation"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>dataset = dataset.sort(<span class="hljs-string">"id"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>sampling_rate = dataset.features[<span class="hljs-string">"audio"</span>].sampling_rate <span class="hljs-meta">&gt;&gt;&gt; </span>feature_extractor = AutoFeatureExtractor.from_pretrained(<span class="hljs-string">"microsoft/wavlm-base-plus-sv"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = WavLMForXVector.from_pretrained(<span class="hljs-string">"microsoft/wavlm-base-plus-sv"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span 
class="hljs-comment"># audio file is decoded on the fly</span> <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = feature_extractor( <span class="hljs-meta">... </span> [d[<span class="hljs-string">"array"</span>] <span class="hljs-keyword">for</span> d <span class="hljs-keyword">in</span> dataset[:<span class="hljs-number">2</span>][<span class="hljs-string">"audio"</span>]], sampling_rate=sampling_rate, return_tensors=<span class="hljs-string">"pt"</span>, padding=<span class="hljs-literal">True</span> <span class="hljs-meta">... </span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">with</span> torch.no_grad(): <span class="hljs-meta">... </span> embeddings = model(**inputs).embeddings <span class="hljs-meta">&gt;&gt;&gt; </span>embeddings = torch.nn.functional.normalize(embeddings, dim=-<span class="hljs-number">1</span>).cpu() <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># the resulting embeddings can be used for cosine similarity-based retrieval</span> <span class="hljs-meta">&gt;&gt;&gt; </span>cosine_sim = torch.nn.CosineSimilarity(dim=-<span class="hljs-number">1</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>similarity = cosine_sim(embeddings[<span class="hljs-number">0</span>], embeddings[<span class="hljs-number">1</span>]) <span class="hljs-meta">&gt;&gt;&gt; </span>threshold = <span class="hljs-number">0.7</span> <span class="hljs-comment"># the optimal threshold is dataset-dependent</span> <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">if</span> similarity &lt; threshold: <span class="hljs-meta">... </span> <span class="hljs-built_in">print</span>(<span class="hljs-string">"Speakers are not the same!"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-built_in">round</span>(similarity.item(), <span class="hljs-number">2</span>) <span class="hljs-number">0.97</span></pre></div></div></div></div> <p></p> <div id="svelte-announcer" aria-live="assertive" aria-atomic="true" style="position: absolute; left: 0px; top: 0px; clip: rect(0px, 0px, 0px, 0px); clip-path: inset(50%); overflow: hidden; white-space: nowrap; width: 1px; height: 1px;"></div></div> <div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>Wav2Vec2Phoneme</a> <a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/whisper" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">Whisper<span class="ml-2 translate-y-px">→</span></a></div></div></div> <div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" data-props="{&quot;chapter&quot;:{&quot;title&quot;:&quot;WavLM&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;wavlm&quot;,&quot;url&quot;:&quot;#wavlm&quot;,&quot;sections&quot;:[{&quot;title&quot;:&quot;Overview&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;overview&quot;,&quot;url&quot;:&quot;#overview&quot;},{&quot;title&quot;:&quot;Documentation 
2023-10-05T13:33:31.118Z
Whisper
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/whisper
# Whisper ## Overview The Whisper model was proposed in [Robust Speech Recognition via Large-Scale Weak Supervision](https://cdn.openai.com/papers/whisper.pdf) by Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever. The abstract from the paper is the following: _We study the capabilities of speech processing systems trained simply to predict large amounts of transcripts of audio on the internet. When scaled to 680,000 hours of multilingual and multitask supervision, the resulting models generalize well to standard benchmarks and are often competitive with prior fully supervised results but in a zeroshot transfer setting without the need for any finetuning. When compared to humans, the models approach their accuracy and robustness. We are releasing models and inference code to serve as a foundation for further work on robust speech processing._ Tips: - The model usually performs well without requiring any finetuning. - The architecture follows a classic encoder-decoder architecture, which means that it relies on the [generate()](/docs/transformers/v4.34.0/en/main_classes/text_generation#transformers.GenerationMixin.generate) function for inference. - Inference is currently only implemented for short-form, i.e. audio is pre-segmented into <=30s segments. Long-form (including timestamps) will be implemented in a future release. - One can use [WhisperProcessor](/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperProcessor) to prepare audio for the model, and decode the predicted IDs back into text. This model was contributed by [Arthur Zucker](https://huggingface.co/ArthurZ). The TensorFlow version of this model was contributed by [amyeroberts](https://huggingface.co/amyeroberts). The original code can be found [here](https://github.com/openai/whisper). ## WhisperConfig ### class transformers.WhisperConfig [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/configuration_whisper.py#L62) ( vocab\_size = 51865num\_mel\_bins = 80encoder\_layers = 6encoder\_attention\_heads = 4decoder\_layers = 6decoder\_attention\_heads = 4decoder\_ffn\_dim = 1536encoder\_ffn\_dim = 1536encoder\_layerdrop = 0.0decoder\_layerdrop = 0.0decoder\_start\_token\_id = 50257use\_cache = Trueis\_encoder\_decoder = Trueactivation\_function = 'gelu'd\_model = 256dropout = 0.0attention\_dropout = 0.0activation\_dropout = 0.0init\_std = 0.02scale\_embedding = Falsemax\_source\_positions = 1500max\_target\_positions = 448pad\_token\_id = 50256bos\_token\_id = 50256eos\_token\_id = 50256suppress\_tokens = Nonebegin\_suppress\_tokens = \[220, 50256\]use\_weighted\_layer\_sum = Falseclassifier\_proj\_size = 256apply\_spec\_augment = Falsemask\_time\_prob = 0.05mask\_time\_length = 10mask\_time\_min\_masks = 2mask\_feature\_prob = 0.0mask\_feature\_length = 10mask\_feature\_min\_masks = 0median\_filter\_width = 7\*\*kwargs ) This is the configuration class to store the configuration of a [WhisperModel](/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperModel). It is used to instantiate a Whisper model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Whisper [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) architecture.
Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information. Example:

```
>>> from transformers import WhisperConfig, WhisperModel

>>> # Initializing a Whisper tiny style configuration
>>> configuration = WhisperConfig()

>>> # Initializing a model (with random weights) from the tiny style configuration
>>> model = WhisperModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```

## WhisperTokenizer ### class transformers.WhisperTokenizer [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/tokenization_whisper.py#L215) ( vocab\_filemerges\_filenormalizer\_file = Noneerrors = 'replace'unk\_token = '<|endoftext|>'bos\_token = '<|endoftext|>'eos\_token = '<|endoftext|>'pad\_token = Noneadd\_prefix\_space = Falselanguage = Nonetask = Nonepredict\_timestamps = False\*\*kwargs ) Construct a Whisper tokenizer. This tokenizer inherits from [PreTrainedTokenizer](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer) which contains some of the main methods. Users should refer to the superclass for more information regarding such methods. #### set\_prefix\_tokens [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/tokenization_whisper.py#L385) ( language: str = Nonetask: str = Nonepredict\_timestamps: bool = None ) Parameters - **language** (`str`, _optional_, defaults to `None`) — The language of the transcription text. - **task** (`str`, _optional_, defaults to `None`) — Task identifier to append at the start of the sequence (if any). - **predict\_timestamps** (`bool`, _optional_, defaults to `None`) — Whether to omit the `<|notimestamps|>` token at the start of the sequence. Override the prefix tokens appended to the start of the label sequence. This method can be used standalone to update the prefix tokens as required when fine-tuning. Example:

```
>>> from transformers import WhisperTokenizer

>>> # instantiate the tokenizer and set the prefix token to Spanish
>>> tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-tiny", language="spanish")
>>> # now switch the prefix token from Spanish to French
>>> tokenizer.set_prefix_tokens(language="french")
```

#### build\_inputs\_with\_special\_tokens [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/tokenization_whisper.py#L444) ( token\_ids\_0token\_ids\_1 = None ) Build model inputs from a sequence by appending eos\_token\_id. #### get\_special\_tokens\_mask [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/tokenization_whisper.py#L452) ( token\_ids\_0: typing.List\[int\]token\_ids\_1: typing.Optional\[typing.List\[int\]\] = Nonealready\_has\_special\_tokens: bool = False ) → `List[int]` Parameters - **token\_ids\_0** (`List[int]`) — List of IDs. - **token\_ids\_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs. - **already\_has\_special\_tokens** (`bool`, _optional_, defaults to `False`) — Whether or not the token list is already formatted with special tokens for the model. A list of integers in the range \[0, 1\]: 1 for a special token, 0 for a sequence token. Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer `prepare_for_model` method.
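As a minimal sketch of how the two methods above behave (the checkpoint name and example string are illustrative only, not part of the reference documentation):

```
>>> from transformers import WhisperTokenizer

>>> tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-tiny", language="english", task="transcribe")
>>> ids = tokenizer("hello world", add_special_tokens=False).input_ids

>>> # build_inputs_with_special_tokens wraps the raw ids with the special tokens, ending with eos_token_id
>>> with_special = tokenizer.build_inputs_with_special_tokens(ids)
>>> with_special[-1] == tokenizer.eos_token_id
True

>>> # get_special_tokens_mask returns one 0/1 flag per position: 1 for a special token, 0 for a sequence token
>>> mask = tokenizer.get_special_tokens_mask(with_special, already_has_special_tokens=True)
>>> len(mask) == len(with_special)
True
```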
#### create\_token\_type\_ids\_from\_sequences [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/tokenization_utils_base.py#L3305) ( token\_ids\_0: typing.List\[int\]token\_ids\_1: typing.Optional\[typing.List\[int\]\] = None ) → `List[int]` Parameters - **token\_ids\_0** (`List[int]`) — The first tokenized sequence. - **token\_ids\_1** (`List[int]`, _optional_) — The second tokenized sequence. The token type ids. Create the token type IDs corresponding to the sequences passed. [What are token type IDs?](../glossary#token-type-ids) Should be overridden in a subclass if the model has a special way of building those. #### save\_vocabulary [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/tokenization_whisper.py#L718) ( save\_directory: strfilename\_prefix: typing.Optional\[str\] = None ) ## WhisperTokenizerFast ### class transformers.WhisperTokenizerFast [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/tokenization_whisper_fast.py#L90) ( vocab\_file = Nonemerges\_file = Nonenormalizer\_file = Nonetokenizer\_file = Noneunk\_token = '<|endoftext|>'bos\_token = '<|endoftext|>'eos\_token = '<|endoftext|>'add\_prefix\_space = Falselanguage = Nonetask = Nonepredict\_timestamps = False\*\*kwargs ) Construct a “fast” Whisper tokenizer (backed by HuggingFace’s _tokenizers_ library). This tokenizer inherits from [PreTrainedTokenizerFast](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast) which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. #### set\_prefix\_tokens [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/tokenization_whisper_fast.py#L421) ( language: str = Nonetask: str = Nonepredict\_timestamps: bool = None ) Parameters - **language** (`str`, _optional_, defaults to `None`) — The language of the transcription text. - **task** (`str`, _optional_, defaults to `None`) — Task identifier to append at the start of the sequence (if any). - **predict\_timestamps** (`bool`, _optional_, defaults to `None`) — Whether to omit the `<|notimestamps|>` token at the start of the sequence. Override the prefix tokens appended to the start of the label sequence. This method can be used standalone to update the prefix tokens as required when fine-tuning. Example:

```
>>> from transformers import WhisperTokenizerFast

>>> # instantiate the tokenizer and set the prefix token to Spanish
>>> tokenizer = WhisperTokenizerFast.from_pretrained("openai/whisper-tiny", language="spanish")
>>> # now switch the prefix token from Spanish to French
>>> tokenizer.set_prefix_tokens(language="french")
```

#### build\_inputs\_with\_special\_tokens [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/tokenization_whisper_fast.py#L495) ( token\_ids\_0token\_ids\_1 = None ) Build model inputs from a sequence by appending eos\_token\_id. #### get\_special\_tokens\_mask [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/tokenization_whisper_fast.py#L503) ( token\_ids\_0: typing.List\[int\]token\_ids\_1: typing.Optional\[typing.List\[int\]\] = Nonealready\_has\_special\_tokens: bool = False ) → `List[int]` Parameters - **token\_ids\_0** (`List[int]`) — List of IDs. - **token\_ids\_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs.
- **already\_has\_special\_tokens** (`bool`, _optional_, defaults to `False`) — Whether or not the token list is already formatted with special tokens for the model. A list of integers in the range \[0, 1\]: 1 for a special token, 0 for a sequence token. Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer `prepare_for_model` method. #### create\_token\_type\_ids\_from\_sequences [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/tokenization_utils_base.py#L3305) ( token\_ids\_0: typing.List\[int\]token\_ids\_1: typing.Optional\[typing.List\[int\]\] = None ) → `List[int]` Parameters - **token\_ids\_0** (`List[int]`) — The first tokenized sequence. - **token\_ids\_1** (`List[int]`, _optional_) — The second tokenized sequence. The token type ids. Create the token type IDs corresponding to the sequences passed. [What are token type IDs?](../glossary#token-type-ids) Should be overridden in a subclass if the model has a special way of building those. #### save\_vocabulary [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/tokenization_whisper_fast.py#L406) ( save\_directory: strfilename\_prefix: typing.Optional\[str\] = None ) ## WhisperFeatureExtractor ### class transformers.WhisperFeatureExtractor [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/feature_extraction_whisper.py) ( feature\_size = 80sampling\_rate = 16000hop\_length = 160chunk\_length = 30n\_fft = 400padding\_value = 0.0return\_attention\_mask = False\*\*kwargs ) Parameters - **feature\_size** (`int`, defaults to 80) — The feature dimension of the extracted features. - **sampling\_rate** (`int`, defaults to 16000) — The sampling rate at which the audio files should be digitized, expressed in hertz (Hz). - **hop\_length** (`int`, defaults to 160) — Length of the overlapping windows for the STFT used to obtain the Mel Frequency coefficients. - **chunk\_length** (`int`, defaults to 30) — The maximum number of chunks of `sampling_rate` samples used to trim and pad longer or shorter audio sequences. - **n\_fft** (`int`, defaults to 400) — Size of the Fourier transform. - **padding\_value** (`float`, _optional_, defaults to 0.0) — Padding value used to pad the audio. Should correspond to silences. Constructs a Whisper feature extractor. This feature extractor inherits from [SequenceFeatureExtractor](/docs/transformers/v4.34.0/en/main_classes/feature_extractor#transformers.SequenceFeatureExtractor) which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. This class extracts mel-filter bank features from raw speech using a custom numpy implementation of the `Short Time Fourier Transform` which should match PyTorch’s `torch.stft` equivalent. #### \_\_call\_\_ ( raw\_speech: typing.Union\[numpy.ndarray, typing.List\[float\], typing.List\[numpy.ndarray\], typing.List\[typing.List\[float\]\]\]truncation: bool = Truepad\_to\_multiple\_of: typing.Optional\[int\] = Nonereturn\_tensors: typing.Union\[str, transformers.utils.generic.TensorType, NoneType\] = Nonereturn\_attention\_mask: typing.Optional\[bool\] = Nonepadding: typing.Optional\[str\] = 'max\_length'max\_length: typing.Optional\[int\] = Nonesampling\_rate: typing.Optional\[int\] = Nonedo\_normalize: typing.Optional\[bool\] = None\*\*kwargs ) Main method to featurize and prepare for the model one or several sequence(s).
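As a rough usage sketch of the feature extractor (the checkpoint and dataset names are illustrative only): the call pads or trims the raw waveform to `chunk_length` seconds and returns a log-Mel spectrogram with `feature_size` mel bins.

```
>>> from transformers import WhisperFeatureExtractor
>>> from datasets import load_dataset

>>> feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-tiny")
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")

>>> # featurize one 16 kHz waveform; shorter audio is padded up to the 30s chunk length
>>> inputs = feature_extractor(ds[0]["audio"]["array"], sampling_rate=16000, return_tensors="pt")
>>> list(inputs.input_features.shape)  # (batch, feature_size, chunk_length * sampling_rate / hop_length)
[1, 80, 3000]
```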
## WhisperProcessor ### class transformers.WhisperProcessor [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/processing_whisper.py#L23) ( feature\_extractortokenizer ) Parameters - **feature\_extractor** (`WhisperFeatureExtractor`) — An instance of [WhisperFeatureExtractor](/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperFeatureExtractor). The feature extractor is a required input. - **tokenizer** (`WhisperTokenizer`) — An instance of [WhisperTokenizer](/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperTokenizer). The tokenizer is a required input. Constructs a Whisper processor which wraps a Whisper feature extractor and a Whisper tokenizer into a single processor. [WhisperProcessor](/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperProcessor) offers all the functionalities of [WhisperFeatureExtractor](/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperFeatureExtractor) and [WhisperTokenizer](/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperTokenizer). See the [**call**()](/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperProcessor.__call__) and [decode()](/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperProcessor.decode) for more information. #### \_\_call\_\_ Forwards the `audio` argument to WhisperFeatureExtractor’s [**call**()](/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperFeatureExtractor.__call__) and the `text` argument to WhisperTokenizer’s `__call__()`. Please refer to the docstring of the above two methods for more information. #### from\_pretrained [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/processing_utils.py#L167) ( pretrained\_model\_name\_or\_path: typing.Union\[str, os.PathLike\]cache\_dir: typing.Union\[str, os.PathLike, NoneType\] = Noneforce\_download: bool = Falselocal\_files\_only: bool = Falsetoken: typing.Union\[bool, str, NoneType\] = Nonerevision: str = 'main'\*\*kwargs ) Parameters - **pretrained\_model\_name\_or\_path** (`str` or `os.PathLike`) — This can be either: - a string, the _model id_ of a pretrained feature\_extractor hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like `bert-base-uncased`, or namespaced under a user or organization name, like `dbmdz/bert-base-german-cased`. - a path to a _directory_ containing a feature extractor file saved using the [save\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/feature_extractor#transformers.FeatureExtractionMixin.save_pretrained) method, e.g., `./my_model_directory/`. - a path or url to a saved feature extractor JSON _file_, e.g., `./my_model_directory/preprocessor_config.json`. \*\*kwargs — Additional keyword arguments passed along to both [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/feature_extractor#transformers.FeatureExtractionMixin.from_pretrained) and `~tokenization_utils_base.PreTrainedTokenizer.from_pretrained`. Instantiate a processor associated with a pretrained model.
This class method is simply calling the feature extractor [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/feature_extractor#transformers.FeatureExtractionMixin.from_pretrained), image processor [ImageProcessingMixin](/docs/transformers/v4.34.0/en/main_classes/image_processor#transformers.ImageProcessingMixin) and the tokenizer `~tokenization_utils_base.PreTrainedTokenizer.from_pretrained` methods. Please refer to the docstrings of the methods above for more information. #### save\_pretrained [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/processing_utils.py#L93) ( save\_directorypush\_to\_hub: bool = False\*\*kwargs ) Parameters - **save\_directory** (`str` or `os.PathLike`) — Directory where the feature extractor JSON file and the tokenizer files will be saved (directory will be created if it does not exist). - **push\_to\_hub** (`bool`, _optional_, defaults to `False`) — Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the repository you want to push to with `repo_id` (will default to the name of `save_directory` in your namespace). - **kwargs** (`Dict[str, Any]`, _optional_) — Additional keyword arguments passed along to the [push\_to\_hub()](/docs/transformers/v4.34.0/en/main_classes/processors#transformers.ProcessorMixin.push_to_hub) method. Saves the attributes of this processor (feature extractor, tokenizer…) in the specified directory so that it can be reloaded using the [from\_pretrained()](/docs/transformers/v4.34.0/en/model_doc/nougat#transformers.NougatProcessor.from_pretrained) method. This class method is simply calling [save\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/feature_extractor#transformers.FeatureExtractionMixin.save_pretrained) and [save\_pretrained()](/docs/transformers/v4.34.0/en/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.save_pretrained). Please refer to the docstrings of the methods above for more information. #### batch\_decode This method forwards all its arguments to WhisperTokenizer’s [batch\_decode()](/docs/transformers/v4.34.0/en/model_doc/speecht5#transformers.SpeechT5Tokenizer.batch_decode). Please refer to the docstring of this method for more information. #### decode This method forwards all its arguments to WhisperTokenizer’s [decode()](/docs/transformers/v4.34.0/en/model_doc/speecht5#transformers.SpeechT5Tokenizer.decode). Please refer to the docstring of this method for more information. ## WhisperModel ### class transformers.WhisperModel [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/modeling_whisper.py#L1227) ( config: WhisperConfig ) Parameters - **config** ([WhisperConfig](/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. The bare Whisper Model outputting raw hidden-states without any specific head on top. This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/modeling_whisper.py#L1298) ( input\_features: typing.Optional\[torch.FloatTensor\] = Noneattention\_mask: typing.Optional\[torch.LongTensor\] = Nonedecoder\_input\_ids: typing.Optional\[torch.LongTensor\] = Nonedecoder\_attention\_mask: typing.Optional\[torch.LongTensor\] = Nonehead\_mask: typing.Optional\[torch.Tensor\] = Nonedecoder\_head\_mask: typing.Optional\[torch.Tensor\] = Nonecross\_attn\_head\_mask: typing.Optional\[torch.Tensor\] = Noneencoder\_outputs: typing.Optional\[typing.Tuple\[typing.Tuple\[torch.FloatTensor\]\]\] = Nonepast\_key\_values: typing.Optional\[typing.Tuple\[typing.Tuple\[torch.FloatTensor\]\]\] = Nonedecoder\_inputs\_embeds: typing.Optional\[typing.Tuple\[torch.FloatTensor\]\] = Noneuse\_cache: typing.Optional\[bool\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.Seq2SeqModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.Seq2SeqModelOutput) or `tuple(torch.FloatTensor)` The [WhisperModel](/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperModel) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: ``` >>> import torch >>> from transformers import AutoFeatureExtractor, WhisperModel >>> from datasets import load_dataset >>> model = WhisperModel.from_pretrained("openai/whisper-base") >>> feature_extractor = AutoFeatureExtractor.from_pretrained("openai/whisper-base") >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> inputs = feature_extractor(ds[0]["audio"]["array"], return_tensors="pt") >>> input_features = inputs.input_features >>> decoder_input_ids = torch.tensor([[1, 1]]) * model.config.decoder_start_token_id >>> last_hidden_state = model(input_features, decoder_input_ids=decoder_input_ids).last_hidden_state >>> list(last_hidden_state.shape) [1, 2, 512] ``` #### \_mask\_input\_features [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/modeling_whisper.py#L1255) ( input\_features: FloatTensorattention\_mask: typing.Optional\[torch.LongTensor\] = None ) Masks extracted features along time axis and/or along feature axis according to [SpecAugment](https://arxiv.org/abs/1904.08779). ## WhisperForConditionalGeneration ### class transformers.WhisperForConditionalGeneration [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/modeling_whisper.py#L1395) ( config: WhisperConfig ) Parameters - **config** ([WhisperConfig](/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. 
Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. The Whisper Model with a language modeling head. Can be used for automatic speech recognition. This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/modeling_whisper.py#L1429) ( input\_features: typing.Optional\[torch.FloatTensor\] = Noneattention\_mask: typing.Optional\[torch.LongTensor\] = Nonedecoder\_input\_ids: typing.Optional\[torch.LongTensor\] = Nonedecoder\_attention\_mask: typing.Optional\[torch.LongTensor\] = Nonehead\_mask: typing.Optional\[torch.Tensor\] = Nonedecoder\_head\_mask: typing.Optional\[torch.Tensor\] = Nonecross\_attn\_head\_mask: typing.Optional\[torch.Tensor\] = Noneencoder\_outputs: typing.Optional\[typing.Tuple\[typing.Tuple\[torch.FloatTensor\]\]\] = Nonepast\_key\_values: typing.Optional\[typing.Tuple\[typing.Tuple\[torch.FloatTensor\]\]\] = Nonedecoder\_inputs\_embeds: typing.Optional\[typing.Tuple\[torch.FloatTensor\]\] = Nonelabels: typing.Optional\[torch.LongTensor\] = Noneuse\_cache: typing.Optional\[bool\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.Seq2SeqLMOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.Seq2SeqLMOutput) or `tuple(torch.FloatTensor)` The [WhisperForConditionalGeneration](/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperForConditionalGeneration) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: ``` >>> import torch >>> from transformers import AutoProcessor, WhisperForConditionalGeneration >>> from datasets import load_dataset >>> processor = AutoProcessor.from_pretrained("openai/whisper-tiny.en") >>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en") >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> inputs = processor(ds[0]["audio"]["array"], return_tensors="pt") >>> input_features = inputs.input_features >>> generated_ids = model.generate(inputs=input_features) >>> transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)[0] >>> transcription ' Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.' 
``` ## WhisperForAudioClassification ### class transformers.WhisperForAudioClassification [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/modeling_whisper.py#L1866) ( config ) Whisper Encoder Model with a sequence classification head on top (a linear layer over the pooled output) for tasks like SUPERB Keyword Spotting. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/modeling_whisper.py#L1893) ( input\_features: typing.Optional\[torch.LongTensor\] = Nonehead\_mask: typing.Optional\[torch.Tensor\] = Noneencoder\_outputs: typing.Optional\[typing.Tuple\[typing.Tuple\[torch.FloatTensor\]\]\] = Nonelabels: typing.Optional\[torch.LongTensor\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.SequenceClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput) or `tuple(torch.FloatTensor)` The [WhisperForAudioClassification](/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperForAudioClassification) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: ``` >>> import torch >>> from transformers import AutoFeatureExtractor, WhisperForAudioClassification >>> from datasets import load_dataset >>> feature_extractor = AutoFeatureExtractor.from_pretrained("sanchit-gandhi/whisper-medium-fleurs-lang-id") >>> model = WhisperForAudioClassification.from_pretrained("sanchit-gandhi/whisper-medium-fleurs-lang-id") >>> ds = load_dataset("google/fleurs", "all", split="validation", streaming=True) >>> sample = next(iter(ds)) >>> inputs = feature_extractor( ... sample["audio"]["array"], sampling_rate=sample["audio"]["sampling_rate"], return_tensors="pt" ... ) >>> input_features = inputs.input_features >>> with torch.no_grad(): ... logits = model(input_features).logits >>> predicted_class_ids = torch.argmax(logits).item() >>> predicted_label = model.config.id2label[predicted_class_ids] >>> predicted_label 'Afrikaans' ``` ## TFWhisperModel ### class transformers.TFWhisperModel [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/modeling_tf_whisper.py#L1093) ( \*args\*\*kwargs ) Parameters - **config** ([WhisperConfig](/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel.from_pretrained) method to load the model weights. The bare Whisper Model outputting raw hidden-states without any specific head on top. This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) 
This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. #### call [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/modeling_tf_whisper.py#L1117) ( input\_features: TFModelInputType | None = Nonedecoder\_input\_ids: np.ndarray | tf.Tensor | None = Nonedecoder\_attention\_mask: np.ndarray | tf.Tensor | None = Nonedecoder\_position\_ids: np.ndarray | tf.Tensor | None = Nonehead\_mask: np.ndarray | tf.Tensor | None = Nonedecoder\_head\_mask: np.ndarray | tf.Tensor | None = Nonecross\_attn\_head\_mask: np.ndarray | tf.Tensor | None = Noneencoder\_outputs: Optional\[Tuple\[Tuple\[Union\[np.ndarray, tf.Tensor\]\]\]\] = Nonepast\_key\_values: Optional\[Tuple\[Tuple\[Union\[np.ndarray, tf.Tensor\]\]\]\] = Nonedecoder\_inputs\_embeds: Optional\[Tuple\[Union\[np.ndarray, tf.Tensor\]\]\] = Noneuse\_cache: Optional\[bool\] = Noneoutput\_attentions: Optional\[bool\] = Noneoutput\_hidden\_states: Optional\[bool\] = Nonereturn\_dict: Optional\[bool\] = Nonetraining: bool = False ) → [transformers.modeling\_tf\_outputs.TFSeq2SeqModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFSeq2SeqModelOutput) or `tuple(tf.Tensor)` The [TFWhisperModel](/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.TFWhisperModel) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: ``` >>> import tensorflow as tf >>> from transformers import TFWhisperModel, AutoFeatureExtractor >>> from datasets import load_dataset >>> model = TFWhisperModel.from_pretrained("openai/whisper-base") >>> feature_extractor = AutoFeatureExtractor.from_pretrained("openai/whisper-base") >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> inputs = feature_extractor(ds[0]["audio"]["array"], return_tensors="tf") >>> input_features = inputs.input_features >>> decoder_input_ids = tf.convert_to_tensor([[1, 1]]) * model.config.decoder_start_token_id >>> last_hidden_state = model(input_features, decoder_input_ids=decoder_input_ids).last_hidden_state >>> list(last_hidden_state.shape) [1, 2, 512] ``` ## TFWhisperForConditionalGeneration ### class transformers.TFWhisperForConditionalGeneration [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/modeling_tf_whisper.py#L1201) ( \*args\*\*kwargs ) Parameters - **config** ([WhisperConfig](/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel.from_pretrained) method to load the model weights. The Whisper Model with a language modeling head. Can be used for automatic speech recognition. This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). 
Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)

This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

#### call

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/modeling_tf_whisper.py#L1232)

( input\_features: TFModelInputType | None = None, decoder\_input\_ids: np.ndarray | tf.Tensor | None = None, decoder\_attention\_mask: np.ndarray | tf.Tensor | None = None, decoder\_position\_ids: np.ndarray | tf.Tensor | None = None, head\_mask: np.ndarray | tf.Tensor | None = None, decoder\_head\_mask: np.ndarray | tf.Tensor | None = None, cross\_attn\_head\_mask: np.ndarray | tf.Tensor | None = None, encoder\_outputs: Optional\[Tuple\[Tuple\[Union\[np.ndarray, tf.Tensor\]\]\]\] = None, past\_key\_values: Optional\[Tuple\[Tuple\[Union\[np.ndarray, tf.Tensor\]\]\]\] = None, decoder\_inputs\_embeds: Optional\[Tuple\[Union\[np.ndarray, tf.Tensor\]\]\] = None, labels: np.ndarray | tf.Tensor | None = None, use\_cache: Optional\[bool\] = None, output\_attentions: Optional\[bool\] = None, output\_hidden\_states: Optional\[bool\] = None, return\_dict: Optional\[bool\] = None, training: bool = False ) → [transformers.modeling\_tf\_outputs.TFSeq2SeqLMOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFSeq2SeqLMOutput) or `tuple(tf.Tensor)`

The [TFWhisperForConditionalGeneration](/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.TFWhisperForConditionalGeneration) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

```
>>> import tensorflow as tf
>>> from transformers import AutoProcessor, TFWhisperForConditionalGeneration
>>> from datasets import load_dataset

>>> processor = AutoProcessor.from_pretrained("openai/whisper-tiny.en")
>>> model = TFWhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en")

>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")

>>> inputs = processor(ds[0]["audio"]["array"], return_tensors="tf")
>>> input_features = inputs.input_features

>>> generated_ids = model.generate(input_features=input_features)

>>> transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
>>> transcription
' Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.'
```

## FlaxWhisperModel

### class transformers.FlaxWhisperModel

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/modeling_flax_whisper.py#L1165)

( config: WhisperConfig, input\_shape: typing.Tuple\[int\] = (1, 80, 3000), seed: int = 0, dtype: dtype = <class 'jax.numpy.float32'>, \_do\_init: bool = True, gradient\_checkpointing: bool = False, \*\*kwargs )

Parameters

- **config** ([WhisperConfig](/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration.
Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained) method to load the model weights.
- **dtype** (`jax.numpy.dtype`, _optional_, defaults to `jax.numpy.float32`) — The data type of the computation. Can be one of `jax.numpy.float32`, `jax.numpy.float16` (on GPUs) and `jax.numpy.bfloat16` (on TPUs). This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified, all the computation will be performed with the given `dtype`. **Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.** If you wish to change the dtype of the model parameters, see [to\_fp16()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_fp16) and [to\_bf16()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_bf16).

The bare Whisper Model transformer outputting raw hidden-states without any specific head on top.

This model inherits from [FlaxPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)

This model is also a Flax Linen [flax.nn.Module](https://flax.readthedocs.io/en/latest/_autosummary/flax.nn.module.html) subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.

Finally, this model supports inherent JAX features such as:

- [Just-In-Time (JIT) compilation](https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit)
- [Automatic Differentiation](https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation)
- [Vectorization](https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap)
- [Parallelization](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap)

#### \_\_call\_\_

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/modeling_flax_whisper.py#L1110)

( input\_features: Array, decoder\_input\_ids: Array, attention\_mask: typing.Optional\[jax.Array\] = None, decoder\_attention\_mask: typing.Optional\[jax.Array\] = None, position\_ids: typing.Optional\[jax.Array\] = None, decoder\_position\_ids: typing.Optional\[jax.Array\] = None, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None, train: bool = False, params: dict = None, dropout\_rng: PRNGKey = None ) → [transformers.modeling\_flax\_outputs.FlaxSeq2SeqModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxSeq2SeqModelOutput) or `tuple(torch.FloatTensor)`

The `FlaxWhisperPreTrainedModel` forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

```
>>> import jax.numpy as jnp
>>> from transformers import AutoFeatureExtractor, FlaxWhisperModel
>>> from datasets import load_dataset

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("openai/whisper-tiny")
>>> model = FlaxWhisperModel.from_pretrained("openai/whisper-tiny", from_pt=True)

>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> inputs = feature_extractor(ds[0]["audio"]["array"], return_tensors="np")
>>> input_features = inputs.input_features

>>> decoder_input_ids = jnp.array([[1, 1]]) * model.config.decoder_start_token_id
>>> outputs = model(input_features, decoder_input_ids=decoder_input_ids)
>>> last_hidden_states = outputs.last_hidden_state
```

## FlaxWhisperForConditionalGeneration

### class transformers.FlaxWhisperForConditionalGeneration

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/modeling_flax_whisper.py#L1244)

( config: WhisperConfig, input\_shape: typing.Tuple\[int\] = (1, 80, 3000), seed: int = 0, dtype: dtype = <class 'jax.numpy.float32'>, \_do\_init: bool = True, gradient\_checkpointing: bool = False, \*\*kwargs )

Parameters

- **config** ([WhisperConfig](/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained) method to load the model weights.
- **dtype** (`jax.numpy.dtype`, _optional_, defaults to `jax.numpy.float32`) — The data type of the computation. Can be one of `jax.numpy.float32`, `jax.numpy.float16` (on GPUs) and `jax.numpy.bfloat16` (on TPUs). This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified, all the computation will be performed with the given `dtype`. **Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.** If you wish to change the dtype of the model parameters, see [to\_fp16()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_fp16) and [to\_bf16()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_bf16).

The Whisper Model with a language modeling head.

This model inherits from [FlaxPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)

This model is also a Flax Linen [flax.nn.Module](https://flax.readthedocs.io/en/latest/_autosummary/flax.nn.module.html) subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.

Finally, this model supports inherent JAX features such as:

- [Just-In-Time (JIT) compilation](https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit)
- [Automatic Differentiation](https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation)
- [Vectorization](https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap)
- [Parallelization](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap)

#### \_\_call\_\_

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/modeling_flax_whisper.py#L1110)

( input\_features: Array, decoder\_input\_ids: Array, attention\_mask: typing.Optional\[jax.Array\] = None, decoder\_attention\_mask: typing.Optional\[jax.Array\] = None, position\_ids: typing.Optional\[jax.Array\] = None, decoder\_position\_ids: typing.Optional\[jax.Array\] = None, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None, train: bool = False, params: dict = None, dropout\_rng: PRNGKey = None ) → [transformers.modeling\_flax\_outputs.FlaxSeq2SeqLMOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput) or `tuple(torch.FloatTensor)`

The `FlaxWhisperPreTrainedModel` forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Transcription example:

```
>>> from transformers import WhisperProcessor, FlaxWhisperForConditionalGeneration
>>> from datasets import load_dataset

>>> processor = WhisperProcessor.from_pretrained("openai/whisper-tiny.en")
>>> model = FlaxWhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en", from_pt=True)

>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> inputs = processor(ds[0]["audio"]["array"], return_tensors="np")
>>> input_features = inputs.input_features

>>> generated_ids = model.generate(input_features=input_features)

>>> transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
>>> transcription
' Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.'
```

## FlaxWhisperForAudioClassification

### class transformers.FlaxWhisperForAudioClassification

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/modeling_flax_whisper.py#L1574)

( config: WhisperConfig, input\_shape: typing.Tuple\[int\] = (1, 80, 3000), seed: int = 0, dtype: dtype = <class 'jax.numpy.float32'>, \_do\_init: bool = True, gradient\_checkpointing: bool = False, \*\*kwargs )

Parameters

- **config** ([WhisperConfig](/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained) method to load the model weights.
- **dtype** (`jax.numpy.dtype`, _optional_, defaults to `jax.numpy.float32`) — The data type of the computation. Can be one of `jax.numpy.float32`, `jax.numpy.float16` (on GPUs) and `jax.numpy.bfloat16` (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified, all the computation will be performed with the given `dtype`. **Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.** If you wish to change the dtype of the model parameters, see [to\_fp16()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_fp16) and [to\_bf16()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_bf16); a short sketch at the end of this page illustrates the difference.

The Whisper Model with an audio classification head on top.

This model inherits from [FlaxPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)

This model is also a Flax Linen [flax.nn.Module](https://flax.readthedocs.io/en/latest/_autosummary/flax.nn.module.html) subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.

Finally, this model supports inherent JAX features such as:

- [Just-In-Time (JIT) compilation](https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit)
- [Automatic Differentiation](https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation)
- [Vectorization](https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap)
- [Parallelization](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap)

#### \_\_call\_\_

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/modeling_flax_whisper.py#L1601)

( input\_features: Array, attention\_mask: typing.Optional\[jax.Array\] = None, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None, train: bool = False, params: dict = None, dropout\_rng: PRNGKey = None, \*\*kwargs ) → [transformers.modeling\_flax\_outputs.FlaxSequenceClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput) or `tuple(torch.FloatTensor)`

The [FlaxWhisperForAudioClassification](/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.FlaxWhisperForAudioClassification) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

```
>>> import jax.numpy as jnp
>>> from transformers import AutoFeatureExtractor, FlaxWhisperForAudioClassification
>>> from datasets import load_dataset

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("sanchit-gandhi/whisper-medium-fleurs-lang-id")
>>> model = FlaxWhisperForAudioClassification.from_pretrained(
...     "sanchit-gandhi/whisper-medium-fleurs-lang-id", from_pt=True
... )

>>> ds = load_dataset("google/fleurs", "all", split="validation", streaming=True)
>>> sample = next(iter(ds))

>>> inputs = feature_extractor(
...     sample["audio"]["array"], sampling_rate=sample["audio"]["sampling_rate"], return_tensors="np"
... )
>>> input_features = inputs.input_features

>>> logits = model(input_features).logits

>>> predicted_class_ids = jnp.argmax(logits).item()
>>> predicted_label = model.config.id2label[predicted_class_ids]
>>> predicted_label
'af_za'
```
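
The `dtype` argument described for the Flax classes above only changes the precision of the computation; the weights themselves stay in `float32` until they are converted explicitly with `to_fp16()` or `to_bf16()`. A minimal sketch of both steps, assuming the `openai/whisper-tiny` checkpoint (any Whisper checkpoint works the same way) and `from_pt=True` to convert the PyTorch weights on the fly:

```
>>> import jax.numpy as jnp
>>> from transformers import FlaxWhisperForConditionalGeneration

>>> # dtype only controls the precision used during the forward pass
>>> model = FlaxWhisperForConditionalGeneration.from_pretrained(
...     "openai/whisper-tiny", from_pt=True, dtype=jnp.float16
... )

>>> # the parameters are still float32 at this point; convert them explicitly if needed
>>> model.params = model.to_fp16(model.params)
```

Keeping the parameters in `float32` while computing in half precision is generally the safer choice for training; converting the parameters as well mainly reduces memory usage at inference time.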
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot;PLBart&quot;,&quot;id&quot;
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/m
odel_doc/wavlm&quot;},{&quot;title&quot;:&quot;Whisper&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/whisper&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/whisper&quot;},{&quot;title&quot;:&quot;XLS-R&quot;,&quot;id&quot;:&quot;model_doc/xls_r&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xls_r&quot;},{&quot;title&quot;:&quot;XLSR-Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/xlsr_wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2&quot;}]},{&quot;title&quot;:&quot;Multimodal models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALIGN&quot;,&quot;id&quot;:&quot;model_doc/align&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/align&quot;},{&quot;title&quot;:&quot;AltCLIP&quot;,&quot;id&quot;:&quot;model_doc/altclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/altclip&quot;},{&quot;title&quot;:&quot;BLIP&quot;,&quot;id&quot;:&quot;model_doc/blip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip&quot;},{&quot;title&quot;:&quot;BLIP-2&quot;,&quot;id&quot;:&quot;model_doc/blip-2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip-2&quot;},{&quot;title&quot;:&quot;BridgeTower&quot;,&quot;id&quot;:&quot;model_doc/bridgetower&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bridgetower&quot;},{&quot;title&quot;:&quot;BROS&quot;,&quot;id&quot;:&quot;model_doc/bros&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bros&quot;},{&quot;title&quot;:&quot;Chinese-CLIP&quot;,&quot;id&quot;:&quot;model_doc/chinese_clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/chinese_clip&quot;},{&quot;title&quot;:&quot;CLIP&quot;,&quot;id&quot;:&quot;model_doc/clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clip&quot;},{&quot;title&quot;:&quot;CLIPSeg&quot;,&quot;id&quot;:&quot;model_doc/clipseg&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clipseg&quot;},{&quot;title&quot;:&quot;Data2Vec&quot;,&quot;id&quot;:&quot;model_doc/data2vec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/data2vec&quot;},{&quot;title&quot;:&quot;DePlot&quot;,&quot;id&quot;:&quot;model_doc/deplot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deplot&quot;},{&quot;title&quot;:&quot;Donut&quot;,&quot;id&quot;:&quot;model_doc/donut&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/donut&quot;},{&quot;title&quot;:&quot;FLAVA&quot;,&quot;id&quot;:&quot;model_doc/flava&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flava&quot;},{&quot;title&quot;:&quot;GIT&quot;,&quot;id&quot;:&quot;model_doc/git&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/git&quot;},{&quot;title&quot;:&quot;GroupViT&quot;,&quot;id&quot;:&quot;model_doc/groupvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/groupvit&quot;},{&quot;title&quot;:&quot;IDEFICS&quot;,&quot;id&quot;:&quot;model_doc/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/idefics&quot;},{&quot;title&quot;:&quot;InstructBLIP&quot;,&quot;id&quot;:&quot;model_doc/instructblip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/instructblip&quot;},{&quot;title&quot;:&quot;LayoutLM&quot;,&quot;id&quot;:&quot;model_doc/layoutlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlm&quot;},{&quot;title&quot;:&qu
ot;LayoutLMV2&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv2&quot;},{&quot;title&quot;:&quot;LayoutLMV3&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv3&quot;},{&quot;title&quot;:&quot;LayoutXLM&quot;,&quot;id&quot;:&quot;model_doc/layoutxlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutxlm&quot;},{&quot;title&quot;:&quot;LiLT&quot;,&quot;id&quot;:&quot;model_doc/lilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lilt&quot;},{&quot;title&quot;:&quot;LXMERT&quot;,&quot;id&quot;:&quot;model_doc/lxmert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lxmert&quot;},{&quot;title&quot;:&quot;MatCha&quot;,&quot;id&quot;:&quot;model_doc/matcha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/matcha&quot;},{&quot;title&quot;:&quot;MGP-STR&quot;,&quot;id&quot;:&quot;model_doc/mgp-str&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mgp-str&quot;},{&quot;title&quot;:&quot;Nougat&quot;,&quot;id&quot;:&quot;model_doc/nougat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nougat&quot;},{&quot;title&quot;:&quot;OneFormer&quot;,&quot;id&quot;:&quot;model_doc/oneformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/oneformer&quot;},{&quot;title&quot;:&quot;OWL-ViT&quot;,&quot;id&quot;:&quot;model_doc/owlvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/owlvit&quot;},{&quot;title&quot;:&quot;Perceiver&quot;,&quot;id&quot;:&quot;model_doc/perceiver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/perceiver&quot;},{&quot;title&quot;:&quot;Pix2Struct&quot;,&quot;id&quot;:&quot;model_doc/pix2struct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pix2struct&quot;},{&quot;title&quot;:&quot;Segment Anything&quot;,&quot;id&quot;:&quot;model_doc/sam&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sam&quot;},{&quot;title&quot;:&quot;Speech Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/speech-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder&quot;},{&quot;title&quot;:&quot;TAPAS&quot;,&quot;id&quot;:&quot;model_doc/tapas&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapas&quot;},{&quot;title&quot;:&quot;TrOCR&quot;,&quot;id&quot;:&quot;model_doc/trocr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trocr&quot;},{&quot;title&quot;:&quot;TVLT&quot;,&quot;id&quot;:&quot;model_doc/tvlt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tvlt&quot;},{&quot;title&quot;:&quot;ViLT&quot;,&quot;id&quot;:&quot;model_doc/vilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vilt&quot;},{&quot;title&quot;:&quot;Vision Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/vision-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder&quot;},{&quot;title&quot;:&quot;Vision Text Dual 
Encoder&quot;,&quot;id&quot;:&quot;model_doc/vision-text-dual-encoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder&quot;},{&quot;title&quot;:&quot;VisualBERT&quot;,&quot;id&quot;:&quot;model_doc/visual_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/visual_bert&quot;},{&quot;title&quot;:&quot;X-CLIP&quot;,&quot;id&quot;:&quot;model_doc/xclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xclip&quot;}]},{&quot;title&quot;:&quot;Reinforcement learning models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Decision Transformer&quot;,&quot;id&quot;:&quot;model_doc/decision_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/decision_transformer&quot;},{&quot;title&quot;:&quot;Trajectory Transformer&quot;,&quot;id&quot;:&quot;model_doc/trajectory_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer&quot;}]},{&quot;title&quot;:&quot;Time series models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Autoformer&quot;,&quot;id&quot;:&quot;model_doc/autoformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/autoformer&quot;},{&quot;title&quot;:&quot;Informer&quot;,&quot;id&quot;:&quot;model_doc/informer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/informer&quot;},{&quot;title&quot;:&quot;Time Series Transformer&quot;,&quot;id&quot;:&quot;model_doc/time_series_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/time_series_transformer&quot;}]},{&quot;title&quot;:&quot;Graph models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Graphormer&quot;,&quot;id&quot;:&quot;model_doc/graphormer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/graphormer&quot;}]}]},{&quot;title&quot;:&quot;Internal Helpers&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Custom Layers and Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/modeling_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/modeling_utils&quot;},{&quot;title&quot;:&quot;Utilities for pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/pipelines_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/pipelines_utils&quot;},{&quot;title&quot;:&quot;Utilities for Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/tokenization_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/tokenization_utils&quot;},{&quot;title&quot;:&quot;Utilities for Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/trainer_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/trainer_utils&quot;},{&quot;title&quot;:&quot;Utilities for Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/generation_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/generation_utils&quot;},{&quot;title&quot;:&quot;Utilities for Image Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/image_processing_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/image_processing_utils&quot;},{&quot;title&quot;:&quot;Utilities for Audio 
processing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/audio_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/audio_utils&quot;},{&quot;title&quot;:&quot;General Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/file_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/file_utils&quot;},{&quot;title&quot;:&quot;Utilities for Time Series&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/time_series_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/time_series_utils&quot;}]}]}],&quot;chapterId&quot;:&quot;model_doc/whisper&quot;,&quot;docType&quot;:&quot;docs&quot;,&quot;isLoggedIn&quot;:false,&quot;lang&quot;:&quot;en&quot;,&quot;langs&quot;:[&quot;de&quot;,&quot;en&quot;,&quot;es&quot;,&quot;fr&quot;,&quot;it&quot;,&quot;ko&quot;,&quot;pt&quot;,&quot;zh&quot;],&quot;library&quot;:&quot;transformers&quot;,&quot;theme&quot;:&quot;light&quot;,&quot;version&quot;:&quot;v4.34.0&quot;,&quot;versions&quot;:[{&quot;version&quot;:&quot;main&quot;},{&quot;version&quot;:&quot;v4.34.0&quot;},{&quot;version&quot;:&quot;v4.33.3&quot;},{&quot;version&quot;:&quot;v4.33.2&quot;},{&quot;version&quot;:&quot;v4.33.0&quot;},{&quot;version&quot;:&quot;v4.32.1&quot;},{&quot;version&quot;:&quot;v4.32.0&quot;},{&quot;version&quot;:&quot;v4.31.0&quot;},{&quot;version&quot;:&quot;v4.30.0&quot;},{&quot;version&quot;:&quot;v4.29.1&quot;},{&quot;version&quot;:&quot;v4.29.0&quot;},{&quot;version&quot;:&quot;v4.28.1&quot;},{&quot;version&quot;:&quot;v4.28.0&quot;},{&quot;version&quot;:&quot;v4.27.2&quot;},{&quot;version&quot;:&quot;v4.27.1&quot;},{&quot;version&quot;:&quot;v4.27.0&quot;},{&quot;version&quot;:&quot;v4.26.1&quot;},{&quot;version&quot;:&quot;v4.26.0&quot;},{&quot;version&quot;:&quot;v4.25.1&quot;},{&quot;version&quot;:&quot;v4.24.0&quot;},{&quot;version&quot;:&quot;v4.23.1&quot;},{&quot;version&quot;:&quot;v4.23.0&quot;},{&quot;version&quot;:&quot;v4.22.2&quot;},{&quot;version&quot;:&quot;v4.22.1&quot;},{&quot;version&quot;:&quot;v4.22.0&quot;},{&quot;version&quot;:&quot;v4.21.3&quot;},{&quot;version&quot;:&quot;v4.21.2&quot;},{&quot;version&quot;:&quot;v4.21.1&quot;},{&quot;version&quot;:&quot;v4.21.0&quot;},{&quot;version&quot;:&quot;v4.20.1&quot;},{&quot;version&quot;:&quot;v4.20.0&quot;},{&quot;version&quot;:&quot;v4.19.4&quot;},{&quot;version&quot;:&quot;v4.19.3&quot;},{&quot;version&quot;:&quot;v4.19.2&quot;},{&quot;version&quot;:&quot;v4.19.0&quot;},{&quot;version&quot;:&quot;v4.18.0&quot;},{&quot;version&quot;:&quot;v4.17.0&quot;},{&quot;version&quot;:&quot;v4.16.2&quot;},{&quot;version&quot;:&quot;v4.16.1&quot;},{&quot;version&quot;:&quot;v4.16.0&quot;},{&quot;version&quot;:&quot;v4.15.0&quot;},{&quot;version&quot;:&quot;v4.14.1&quot;},{&quot;version&quot;:&quot;v4.13.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.5&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.4&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.10.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4
.10.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.10.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.0.0&quot;},{&quot;version&quot;:&quot;doc
-builder-html&quot;}],&quot;title&quot;:&quot;Whisper&quot;}" data-target="SideMenu"> <div class="z-2 w-full flex-none lg:block lg:h-screen lg:w-[270px] 2xl:w-[300px] false"><div class="shadow-alternate flex h-16 w-full items-center rounded-b-xl border-b bg-white text-lg leading-tight lg:hidden"> <div class="flex flex-1 cursor-pointer flex-col justify-center self-stretch pl-6"><p class="text-sm text-gray-400 first-letter:capitalize">Transformers documentation </p> <div class="flex items-center"><p class="font-semibold">Whisper</p> <svg class="text-xl false" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></div></div> <button class="hover:shadow-alternate group ml-auto mr-6 inline-flex flex-none cursor-pointer rounded-xl border p-2"><svg class="text-gray-500 group-hover:text-gray-700" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg></button></div> <div class="hidden h-32 flex-col justify-between border-r border-b bg-white bg-gradient-to-r p-4 lg:flex from-orange-50 to-white dark:from-gray-900 dark:to-gray-950"><div class="relative "> <button class=" " type="button"> <h1 class="flex items-center text-lg font-bold leading-tight first-letter:capitalize"><div class="mr-1.5 h-1.5 w-1.5 rounded-full bg-orange-500 flex-none"></div> Transformers <span><svg class="opacity-70 " xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></span></h1> </button> </div> <button class="shadow-alternate flex w-full items-center rounded-full border bg-white px-2 py-1 text-left text-sm text-gray-400 ring-indigo-200 hover:bg-indigo-50 hover:ring-2 dark:border-gray-700 dark:ring-yellow-600 dark:hover:bg-gray-900 dark:hover:text-yellow-500"><svg class="flex-none mr-1.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg> <div>Search documentation</div> </button> <div class="flex items-center"> <select class="form-input mr-1 !mt-0 !w-20 rounded !border border-gray-200 p-1 text-xs uppercase dark:!text-gray-400"><option value="0">main</option><option value="1" selected="">v4.34.0</option><option value="2">v4.33.3</option><option value="3">v4.32.1</option><option value="4">v4.31.0</option><option value="5">v4.30.0</option><option value="6">v4.29.1</option><option value="7">v4.28.1</option><option value="8">v4.27.2</option><option value="9">v4.26.1</option><option value="10">v4.25.1</option><option value="11">v4.24.0</option><option value="12">v4.23.1</option><option value="13">v4.22.2</option><option value="14">v4.21.3</option><option 
value="15">v4.20.1</option><option value="16">v4.19.4</option><option value="17">v4.18.0</option><option value="18">v4.17.0</option><option value="19">v4.16.2</option><option value="20">v4.15.0</option><option value="21">v4.14.1</option><option value="22">v4.13.0</option><option value="23">v4.12.5</option><option value="24">v4.11.3</option><option value="25">v4.10.1</option><option value="26">v4.9.2</option><option value="27">v4.8.2</option><option value="28">v4.7.0</option><option value="29">v4.6.0</option><option value="30">v4.5.1</option><option value="31">v4.4.2</option><option value="32">v4.3.3</option><option value="33">v4.2.2</option><option value="34">v4.1.1</option><option value="35">v4.0.1</option><option value="36">v3.5.1</option><option value="37">v3.4.0</option><option value="38">v3.3.1</option><option value="39">v3.2.0</option><option value="40">v3.1.0</option><option value="41">v3.0.2</option><option value="42">v2.11.0</option><option value="43">v2.10.0</option><option value="44">v2.9.1</option><option value="45">v2.8.0</option><option value="46">v2.7.0</option><option value="47">v2.6.0</option><option value="48">v2.5.1</option><option value="49">v2.4.1</option><option value="50">v2.3.0</option><option value="51">v2.2.2</option><option value="52">v2.1.1</option><option value="53">v2.0.0</option><option value="54">v1.2.0</option><option value="55">v1.1.0</option><option value="56">v1.0.0</option><option value="57">doc-builder-html</option></select> <select class="form-input mr-1 rounded border-gray-200 p-1 text-xs dark:!text-gray-400 !w-12 !mt-0 !border"><option value="de">DE</option><option value="en" selected="">EN</option><option value="es">ES</option><option value="fr">FR</option><option value="it">IT</option><option value="ko">KO</option><option value="pt">PT</option><option value="zh">ZH</option></select> <div class="relative inline-block"> <button class="rounded-full border border-gray-100 py-1 pl-2 pr-0.5 flex items-center text-sm text-gray-500 bg-white hover:bg-yellow-50 hover:border-yellow-200 dark:hover:bg-gray-800 dark:hover:border-gray-950 " type="button"> <svg class="mr-1.5 text-yellow-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24" fill="currentColor"><path d="M6.05 4.14l-.39-.39a.993.993 0 0 0-1.4 0l-.01.01a.984.984 0 0 0 0 1.4l.39.39c.39.39 1.01.39 1.4 0l.01-.01a.984.984 0 0 0 0-1.4zM3.01 10.5H1.99c-.55 0-.99.44-.99.99v.01c0 .55.44.99.99.99H3c.56.01 1-.43 1-.98v-.01c0-.56-.44-1-.99-1zm9-9.95H12c-.56 0-1 .44-1 .99v.96c0 .55.44.99.99.99H12c.56.01 1-.43 1-.98v-.97c0-.55-.44-.99-.99-.99zm7.74 3.21c-.39-.39-1.02-.39-1.41-.01l-.39.39a.984.984 0 0 0 0 1.4l.01.01c.39.39 1.02.39 1.4 0l.39-.39a.984.984 0 0 0 0-1.4zm-1.81 15.1l.39.39a.996.996 0 1 0 1.41-1.41l-.39-.39a.993.993 0 0 0-1.4 0c-.4.4-.4 1.02-.01 1.41zM20 11.49v.01c0 .55.44.99.99.99H22c.55 0 .99-.44.99-.99v-.01c0-.55-.44-.99-.99-.99h-1.01c-.55 0-.99.44-.99.99zM12 5.5c-3.31 0-6 2.69-6 6s2.69 6 6 6s6-2.69 6-6s-2.69-6-6-6zm-.01 16.95H12c.55 0 .99-.44.99-.99v-.96c0-.55-.44-.99-.99-.99h-.01c-.55 0-.99.44-.99.99v.96c0 .55.44.99.99.99zm-7.74-3.21c.39.39 1.02.39 1.41 0l.39-.39a.993.993 0 0 0 0-1.4l-.01-.01a.996.996 0 0 0-1.41 0l-.39.39c-.38.4-.38 1.02.01 1.41z"></path></svg> </button> </div> <a href="https://github.com/huggingface/transformers" class="group ml-auto text-xs text-gray-500 hover:text-gray-700 hover:underline dark:hover:text-gray-300"><svg 
class="inline-block text-gray-500 group-hover:text-gray-700 dark:group-hover:text-gray-300 mr-1.5 -mt-1 w-4 h-4" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1.03em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 250"><path d="M128.001 0C57.317 0 0 57.307 0 128.001c0 56.554 36.676 104.535 87.535 121.46c6.397 1.185 8.746-2.777 8.746-6.158c0-3.052-.12-13.135-.174-23.83c-35.61 7.742-43.124-15.103-43.124-15.103c-5.823-14.795-14.213-18.73-14.213-18.73c-11.613-7.944.876-7.78.876-7.78c12.853.902 19.621 13.19 19.621 13.19c11.417 19.568 29.945 13.911 37.249 10.64c1.149-8.272 4.466-13.92 8.127-17.116c-28.431-3.236-58.318-14.212-58.318-63.258c0-13.975 5-25.394 13.188-34.358c-1.329-3.224-5.71-16.242 1.24-33.874c0 0 10.749-3.44 35.21 13.121c10.21-2.836 21.16-4.258 32.038-4.307c10.878.049 21.837 1.47 32.066 4.307c24.431-16.56 35.165-13.12 35.165-13.12c6.967 17.63 2.584 30.65 1.255 33.873c8.207 8.964 13.173 20.383 13.173 34.358c0 49.163-29.944 59.988-58.447 63.157c4.591 3.972 8.682 11.762 8.682 23.704c0 17.126-.148 30.91-.148 35.126c0 3.407 2.304 7.398 8.792 6.14C219.37 232.5 256 184.537 256 128.002C256 57.307 198.691 0 128.001 0zm-80.06 182.34c-.282.636-1.283.827-2.194.39c-.929-.417-1.45-1.284-1.15-1.922c.276-.655 1.279-.838 2.205-.399c.93.418 1.46 1.293 1.139 1.931zm6.296 5.618c-.61.566-1.804.303-2.614-.591c-.837-.892-.994-2.086-.375-2.66c.63-.566 1.787-.301 2.626.591c.838.903 1 2.088.363 2.66zm4.32 7.188c-.785.545-2.067.034-2.86-1.104c-.784-1.138-.784-2.503.017-3.05c.795-.547 2.058-.055 2.861 1.075c.782 1.157.782 2.522-.019 3.08zm7.304 8.325c-.701.774-2.196.566-3.29-.49c-1.119-1.032-1.43-2.496-.726-3.27c.71-.776 2.213-.558 3.315.49c1.11 1.03 1.45 2.505.701 3.27zm9.442 2.81c-.31 1.003-1.75 1.459-3.199 1.033c-1.448-.439-2.395-1.613-2.103-2.626c.301-1.01 1.747-1.484 3.207-1.028c1.446.436 2.396 1.602 2.095 2.622zm10.744 1.193c.036 1.055-1.193 1.93-2.715 1.95c-1.53.034-2.769-.82-2.786-1.86c0-1.065 1.202-1.932 2.733-1.958c1.522-.03 2.768.818 2.768 1.868zm10.555-.405c.182 1.03-.875 2.088-2.387 2.37c-1.485.271-2.861-.365-3.05-1.386c-.184-1.056.893-2.114 2.376-2.387c1.514-.263 2.868.356 3.061 1.403z" fill="currentColor"></path></svg> </a></div></div> <nav class="top-32 hidden lg:flex absolute bottom-0 left-0 w-full flex-col overflow-y-auto border-r px-4 pt-3 pb-16 text-[0.95rem] lg:w-[270px] 2xl:w-[300px]"> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Get started<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/index"><!-- HTML_TAG_START -->🤗 Transformers<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/quicktour"><!-- HTML_TAG_START -->Quick tour<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px 
fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Faster examples with accelerated inference </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-gray-500/10 to-gray-500/5"><svg class="text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M14.9804 3C14.9217 3.0002 14.8631 3.00555 14.8054 3.016C11.622 3.58252 8.76073 5.30669 6.77248 7.85653C4.78422 10.4064 3.80955 13.6016 4.03612 16.8271C4.26268 20.0525 5.67447 23.0801 7.99967 25.327C10.3249 27.5738 13.3991 28.8811 16.6304 28.997C16.7944 29.003 16.9584 28.997 17.1204 28.997C19.2193 28.9984 21.2877 28.4943 23.1507 27.5274C25.0137 26.5605 26.6164 25.1592 27.8234 23.442C27.9212 23.294 27.9783 23.1229 27.9889 22.9458C27.9995 22.7687 27.9633 22.592 27.884 22.4333C27.8046 22.2747 27.6848 22.1397 27.5367 22.0421C27.3887 21.9444 27.2175 21.8875 27.0404 21.877C25.0426 21.7017 23.112 21.0693 21.3976 20.0288C19.6832 18.9884 18.231 17.5676 17.1533 15.8764C16.0756 14.1852 15.4011 12.2688 15.1822 10.2754C14.9632 8.28193 15.2055 6.26484 15.8904 4.38C15.9486 4.22913 15.97 4.06652 15.9527 3.90572C15.9354 3.74492 15.8799 3.59059 15.7909 3.45557C15.7019 3.32055 15.5819 3.20877 15.4409 3.12952C15.2999 3.05028 15.142 3.00587 14.9804 3Z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Switch between documentation themes </div></div></div> <div class="flex items-center space-x-2.5"><a href="/join"><button class="rounded-lg bg-white bg-gradient-to-br from-gray-100/20 to-gray-200/60 py-1.5 px-5 font-semibold text-gray-700 shadow-sm ring-1 ring-gray-300/60 hover:to-gray-100/70 hover:ring-gray-300/30 active:shadow-inner">Sign Up</button></a> <p class="text-gray-500 dark:text-gray-300">to get started</p></div></div></div> <div class="prose-doc prose relative mx-auto max-w-4xl break-words"> <p></p> <h1 class="relative group"><a id="whisper" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#whisper"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1rmkvbp">Whisper</span></h1> <h2 class="relative group"><a id="overview" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#overview"><span><svg class="" xmlns="http://www.w3.org/2000/svg" 
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1jsw1pg">Overview</span></h2> <p data-svelte-h="svelte-951n9o">The Whisper model was proposed in <a href="https://cdn.openai.com/papers/whisper.pdf" rel="nofollow">Robust Speech Recognition via Large-Scale Weak Supervision</a> by Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever.</p> <p data-svelte-h="svelte-vfdo9a">The abstract from the paper is the following:</p> <p data-svelte-h="svelte-17t2w30"><em>We study the capabilities of speech processing systems trained simply to predict large amounts of transcripts of audio on the internet. When scaled to 680,000 hours of multilingual and multitask supervision, the resulting models generalize well to standard benchmarks and are often competitive with prior fully supervised results but in a zeroshot transfer setting without the need for any finetuning. When compared to humans, the models approach their accuracy and robustness. We are releasing models and inference code to serve as a foundation for further work on robust speech processing.</em></p> <p data-svelte-h="svelte-axv494">Tips:</p> <ul data-svelte-h="svelte-19841jq"><li>The model usually performs well without requiring any finetuning.</li> <li>The architecture follows a classic encoder-decoder architecture, which means that it relies on the <a href="/docs/transformers/v4.34.0/en/main_classes/text_generation#transformers.GenerationMixin.generate">generate()</a> function for inference.</li> <li>Inference is currently only implemented for short-form i.e. audio is pre-segmented into &lt;=30s segments. Long-form (including timestamps) will be implemented in a future release.</li> <li>One can use <a href="/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperProcessor">WhisperProcessor</a> to prepare audio for the model, and decode the predicted ID’s back into text.</li></ul> <p data-svelte-h="svelte-e1kukz">This model was contributed by <a href="https://huggingface.co/ArthurZ" rel="nofollow">Arthur Zucker</a>. The Tensorflow version of this model was contributed by <a href="https://huggingface.co/amyeroberts" rel="nofollow">amyeroberts</a>. 
This model was contributed by [Arthur Zucker](https://huggingface.co/ArthurZ). The TensorFlow version of this model was contributed by [amyeroberts](https://huggingface.co/amyeroberts). The original code can be found [here](https://github.com/openai/whisper).

## WhisperConfig

### class transformers.WhisperConfig

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/configuration_whisper.py#L62)

( vocab_size = 51865, num_mel_bins = 80, encoder_layers = 6, encoder_attention_heads = 4, decoder_layers = 6, decoder_attention_heads = 4, decoder_ffn_dim = 1536, encoder_ffn_dim = 1536, encoder_layerdrop = 0.0, decoder_layerdrop = 0.0, decoder_start_token_id = 50257, use_cache = True, is_encoder_decoder = True, activation_function = 'gelu', d_model = 256, dropout = 0.0, attention_dropout = 0.0, activation_dropout = 0.0, init_std = 0.02, scale_embedding = False, max_source_positions = 1500, max_target_positions = 448, pad_token_id = 50256, bos_token_id = 50256, eos_token_id = 50256, suppress_tokens = None, begin_suppress_tokens = [220, 50256], use_weighted_layer_sum = False, classifier_proj_size = 256, apply_spec_augment = False, mask_time_prob = 0.05, mask_time_length = 10, mask_time_min_masks = 2, mask_feature_prob = 0.0, mask_feature_length = 10, mask_feature_min_masks = 0, median_filter_width = 7, **kwargs )
Defines the number of different tokens that can be represented by the <code>decoder_input_ids</code> passed when calling <a href="/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperModel">WhisperModel</a></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperConfig.num_mel_bins" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperConfig.num_mel_bins"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>num_mel_bins</strong> (<code>int</code>, <em>optional</em>, defaults to 80) — Number of mel features used per input features. Should correspond to the value used in the <code>WhisperProcessor</code> class.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperConfig.encoder_layers" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperConfig.encoder_layers"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>encoder_layers</strong> (<code>int</code>, <em>optional</em>, defaults to 6) — Number of encoder layers.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperConfig.decoder_layers" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperConfig.decoder_layers"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 
1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>decoder_layers</strong> (<code>int</code>, <em>optional</em>, defaults to 6) — Number of decoder layers.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperConfig.encoder_attention_heads" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperConfig.encoder_attention_heads"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>encoder_attention_heads</strong> (<code>int</code>, <em>optional</em>, defaults to 4) — Number of attention heads for each attention layer in the Transformer encoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperConfig.decoder_attention_heads" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperConfig.decoder_attention_heads"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>decoder_attention_heads</strong> (<code>int</code>, <em>optional</em>, defaults to 4) — Number of attention heads for each attention layer in the Transformer decoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperConfig.encoder_ffn_dim" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute 
with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperConfig.encoder_ffn_dim"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>encoder_ffn_dim</strong> (<code>int</code>, <em>optional</em>, defaults to 1536) — Dimensionality of the “intermediate” (often named feed-forward) layer in encoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperConfig.decoder_ffn_dim" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperConfig.decoder_ffn_dim"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>decoder_ffn_dim</strong> (<code>int</code>, <em>optional</em>, defaults to 1536) — Dimensionality of the “intermediate” (often named feed-forward) layer in decoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperConfig.encoder_layerdrop" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperConfig.encoder_layerdrop"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 
0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>encoder_layerdrop</strong> (<code>float</code>, <em>optional</em>, defaults to 0.0) — The LayerDrop probability for the encoder. See the [LayerDrop paper](see <a href="https://arxiv.org/abs/1909.11556" rel="nofollow">https://arxiv.org/abs/1909.11556</a>) for more details.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperConfig.decoder_layerdrop" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperConfig.decoder_layerdrop"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>decoder_layerdrop</strong> (<code>float</code>, <em>optional</em>, defaults to 0.0) — The LayerDrop probability for the decoder. See the [LayerDrop paper](see <a href="https://arxiv.org/abs/1909.11556" rel="nofollow">https://arxiv.org/abs/1909.11556</a>) for more details.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperConfig.decoder_start_token_id" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperConfig.decoder_start_token_id"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>decoder_start_token_id</strong> (<code>int</code>, <em>optional</em>, defaults to 50257) — Corresponds to the ”&lt;|startoftranscript|&gt;” token, which is automatically used when no <code>decoder_input_ids</code> are provided to the <code>generate</code> function. 
It is used to guide the model`s generation process depending on the task.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperConfig.use_cache" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperConfig.use_cache"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>use_cache</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether or not the model should return the last key/values attentions (not used by all models).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperConfig.is_encoder_decoder" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperConfig.is_encoder_decoder"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>is_encoder_decoder</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether the model is used as an encoder/decoder or not.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperConfig.activation_function" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperConfig.activation_function"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 
1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>activation_function</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"gelu"</code>) — The non-linear activation function (function or string) in the encoder and pooler. If string, <code>"gelu"</code>, <code>"relu"</code>, <code>"silu"</code> and <code>"gelu_new"</code> are supported.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperConfig.d_model" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperConfig.d_model"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>d_model</strong> (<code>int</code>, <em>optional</em>, defaults to 256) — Dimensionality of the layers.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperConfig.dropout" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperConfig.dropout"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>dropout</strong> (<code>float</code>, <em>optional</em>, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperConfig.attention_dropout" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 
with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperConfig.attention_dropout"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_dropout</strong> (<code>float</code>, <em>optional</em>, defaults to 0.0) — The dropout ratio for the attention probabilities.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperConfig.activation_dropout" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperConfig.activation_dropout"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>activation_dropout</strong> (<code>float</code>, <em>optional</em>, defaults to 0.0) — The dropout ratio for activations inside the fully connected layer.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperConfig.init_std" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperConfig.init_std"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> 
<span><strong>init_std</strong> (<code>float</code>, <em>optional</em>, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperConfig.scale_embedding" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperConfig.scale_embedding"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>scale_embedding</strong> (<code>bool</code>, <em>optional</em>, defaults to False) — Scale embeddings by diving by sqrt(d_model).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperConfig.max_source_positions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperConfig.max_source_positions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>max_source_positions</strong> (<code>int</code>, <em>optional</em>, defaults to 1500) — The maximum sequence length of log-mel filter-bank features that this model might ever be used with.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperConfig.max_target_positions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperConfig.max_target_positions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 
1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>max_target_positions</strong> (<code>int</code>, <em>optional</em>, defaults to 448) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperConfig.pad_token_id" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperConfig.pad_token_id"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>pad_token_id</strong> (<code>int</code>, <em>optional</em>, defaults to 50256) — Padding token id.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperConfig.bos_token_id" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperConfig.bos_token_id"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>bos_token_id</strong> (<code>int</code>, <em>optional</em>, defaults to 50256) — Begin of stream token id.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperConfig.eos_token_id" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 
with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperConfig.eos_token_id"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>eos_token_id</strong> (<code>int</code>, <em>optional</em>, defaults to 50256) — End of stream token id.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperConfig.suppress_tokens" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperConfig.suppress_tokens"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>suppress_tokens</strong> (<code>List[int]</code>, <em>optional</em>) — A list containing the non-speech tokens that will be used by the logit processor in the <code>generate</code> function. 
NON_SPEECH_TOKENS and NON_SPEECH_TOKENS_MULTI each correspond to the <code>english-only</code> and the <code>multilingual</code> model.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperConfig.begin_suppress_tokens" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperConfig.begin_suppress_tokens"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>begin_suppress_tokens</strong> (<code>List[int]</code>, <em>optional</em>, defaults to <code>[220,50256]</code>) — A list containing tokens that will be supressed at the beginning of the sampling process. Initialized as the token for <code>" "</code> (<code>blank_token_id</code>) and the <code>eos_token_id</code></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperConfig.use_weighted_layer_sum" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperConfig.use_weighted_layer_sum"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>use_weighted_layer_sum</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether to use a weighted average of layer outputs with learned weights. 
Only relevant when using an instance of <a href="/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperForAudioClassification">WhisperForAudioClassification</a>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperConfig.classifier_proj_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperConfig.classifier_proj_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>classifier_proj_size</strong> (<code>int</code>, <em>optional</em>, defaults to 256) — Dimensionality of the projection before token mean-pooling for classification. Only relevant when using an instance of <a href="/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperForAudioClassification">WhisperForAudioClassification</a>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperConfig.apply_spec_augment" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperConfig.apply_spec_augment"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>apply_spec_augment</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether to apply <em>SpecAugment</em> data augmentation to the outputs of the feature encoder. 
For reference see <a href="https://arxiv.org/abs/1904.08779" rel="nofollow">SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition</a>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperConfig.mask_time_prob" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperConfig.mask_time_prob"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>mask_time_prob</strong> (<code>float</code>, <em>optional</em>, defaults to 0.05) — Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking procecure generates <code>mask_time_prob*len(time_axis)/mask_time_length</code> independent masks over the axis. If reasoning from the propability of each feature vector to be chosen as the start of the vector span to be masked, <em>mask_time_prob</em> should be <code>prob_vector_start*mask_time_length</code>. Note that overlap may decrease the actual percentage of masked vectors. 
This is only relevant if <code>apply_spec_augment == True</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperConfig.mask_time_length" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperConfig.mask_time_length"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>mask_time_length</strong> (<code>int</code>, <em>optional</em>, defaults to 10) — Length of vector span along the time axis.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperConfig.mask_time_min_masks" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperConfig.mask_time_min_masks"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>mask_time_min_masks</strong> (<code>int</code>, <em>optional</em>, defaults to 2), — The minimum number of masks of length <code>mask_feature_length</code> generated along the time axis, each time step, irrespectively of <code>mask_feature_prob</code>. 
Only relevant if ”mask_time_prob*len(time_axis)/mask_time_length &lt; mask_time_min_masks”</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperConfig.mask_feature_prob" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperConfig.mask_feature_prob"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>mask_feature_prob</strong> (<code>float</code>, <em>optional</em>, defaults to 0.0) — Percentage (between 0 and 1) of all feature vectors along the feature axis which will be masked. The masking procecure generates <code>mask_feature_prob*len(feature_axis)/mask_time_length</code> independent masks over the axis. If reasoning from the propability of each feature vector to be chosen as the start of the vector span to be masked, <em>mask_feature_prob</em> should be <code>prob_vector_start*mask_feature_length</code>. Note that overlap may decrease the actual percentage of masked vectors. 
This is only relevant if <code>apply_spec_augment is True</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperConfig.mask_feature_length" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperConfig.mask_feature_length"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>mask_feature_length</strong> (<code>int</code>, <em>optional</em>, defaults to 10) — Length of vector span along the feature axis.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperConfig.mask_feature_min_masks" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperConfig.mask_feature_min_masks"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>mask_feature_min_masks</strong> (<code>int</code>, <em>optional</em>, defaults to 0), — The minimum number of masks of length <code>mask_feature_length</code> generated along the feature axis, each time step, irrespectively of <code>mask_feature_prob</code>. 
Only relevant if <code>mask_feature_prob*len(feature_axis)/mask_feature_length &lt; mask_feature_min_masks</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperConfig.median_filter_width" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperConfig.median_filter_width"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>median_filter_width</strong> (<code>int</code>, <em>optional</em>, defaults to 7) — Width of the median filter used to smoothen to cross-attention outputs when computing token timestamps. Should be an odd number.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-k245lz">This is the configuration class to store the configuration of a <a href="/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperModel">WhisperModel</a>. It is used to instantiate a Whisper model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Whisper <a href="https://huggingface.co/openai/whisper-tiny" rel="nofollow">openai/whisper-tiny</a> architecture.</p> <p data-svelte-h="svelte-10kqkkl">Configuration objects inherit from <a href="/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig">PretrainedConfig</a> and can be used to control the model outputs. 
Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information.

Example:

```python
>>> from transformers import WhisperConfig, WhisperModel

>>> # Initializing a Whisper tiny style configuration
>>> configuration = WhisperConfig()

>>> # Initializing a model (with random weights) from the tiny style configuration
>>> model = WhisperModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
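
The same constructor also accepts any of the parameters documented above. The sketch below uses illustrative, non-default values (chosen for demonstration only, not recommended settings) to build a custom configuration with SpecAugment enabled, and checks the approximate number of time masks implied by `mask_time_prob`, `max_source_positions` and `mask_time_length`:

```python
>>> from transformers import WhisperConfig, WhisperModel

>>> # Illustrative overrides of a few documented parameters (assumed values, not recommendations)
>>> config = WhisperConfig(
...     d_model=384,               # custom layer dimensionality
...     encoder_layers=4,          # fewer encoder layers than the default 6
...     decoder_layers=4,          # fewer decoder layers than the default 6
...     apply_spec_augment=True,   # enable SpecAugment on the encoder features
...     mask_time_prob=0.1,        # fraction of time steps considered as mask starts
...     mask_time_length=10,       # length of each time-axis mask span
... )

>>> # Randomly initialized model following the custom architecture
>>> model = WhisperModel(config)

>>> # Approximate number of independent time masks per example, using the
>>> # mask_time_prob * len(time_axis) / mask_time_length relation described above
>>> int(config.mask_time_prob * config.max_source_positions / config.mask_time_length)
15
```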
href="#transformers.WhisperTokenizer"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1nv5nqw">WhisperTokenizer</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.WhisperTokenizer"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">WhisperTokenizer</span></span></h3> <a id="transformers.WhisperTokenizer" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.WhisperTokenizer"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/tokenization_whisper.py#L215" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span 
data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">vocab_file<span class="opacity-60"></span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">merges_file<span class="opacity-60"></span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">normalizer_file<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">errors<span class="opacity-60"> = 'replace'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">unk_token<span class="opacity-60"> = '&lt;|endoftext|&gt;'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">bos_token<span class="opacity-60"> = '&lt;|endoftext|&gt;'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">eos_token<span class="opacity-60"> = '&lt;|endoftext|&gt;'</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">pad_token<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">add_prefix_space<span class="opacity-60"> = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">language<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">task<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">predict_timestamps<span class="opacity-60"> = False</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 11 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperTokenizer.vocab_file" 
class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperTokenizer.vocab_file"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>vocab_file</strong> (<code>str</code>) — Path to the vocabulary file.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperTokenizer.merges_file" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperTokenizer.merges_file"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>merges_file</strong> (<code>str</code>) — Path to the merges file.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperTokenizer.normalizer_file" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperTokenizer.normalizer_file"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>normalizer_file</strong> (<code>str</code>, 
<em>optional</em>, defaults to <code>None</code>) — Path to the normalizer_file file.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperTokenizer.errors" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperTokenizer.errors"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>errors</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"replace"</code>) — Paradigm to follow when decoding bytes to UTF-8. See <a href="https://docs.python.org/3/library/stdtypes.html#bytes.decode" rel="nofollow">bytes.decode</a> for more information.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperTokenizer.unk_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperTokenizer.unk_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>unk_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;|endoftext|&gt;"</code>) — The unknown token. 
A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperTokenizer.bos_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperTokenizer.bos_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>bos_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;|endoftext|&gt;"</code>) — The beginning of sequence token. The <code>decoder_start_token_id</code> is used to set the first token as <code>"&lt;|startoftranscript|&gt;"</code> when generating.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperTokenizer.eos_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperTokenizer.eos_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>eos_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;|endoftext|&gt;"</code>) — The end of sequence token.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperTokenizer.add_prefix_space" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperTokenizer.add_prefix_space"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 
1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>add_prefix_space</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether or not to add an initial space to the input. This allows to treat the leading word just as any other word.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperTokenizer.language" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperTokenizer.language"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>language</strong> (<code>str</code>, <em>optional</em>) — The language of the transcription text. The corresponding language id token is appended to the start of the sequence for multilingual speech recognition and speech translation tasks, e.g. for Spanish the token <code>"&lt;|es|&gt;"</code> is appended to the start of sequence. This should be used for multilingual fine-tuning only.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperTokenizer.task" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperTokenizer.task"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>task</strong> (<code>str</code>, <em>optional</em>) — Task identifier to append at the start of sequence (if any). 
This should be used for mulitlingual fine-tuning, with <code>"transcribe"</code> for speech recognition and <code>"translate"</code> for speech translation.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperTokenizer.predict_timestamps" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperTokenizer.predict_timestamps"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>predict_timestamps</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether to omit the <code>&lt;|notimestamps|&gt;</code> token at the start of the sequence.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1996rkv">Construct a Whisper tokenizer.</p> <p data-svelte-h="svelte-1ery4iu">This tokenizer inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer">PreTrainedTokenizer</a> which contains some of the main methods. 
Users should refer to the superclass for more information regarding such methods.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.WhisperTokenizer.set_prefix_tokens"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>set_prefix_tokens</span></h4> <a id="transformers.WhisperTokenizer.set_prefix_tokens" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.WhisperTokenizer.set_prefix_tokens"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/tokenization_whisper.py#L385" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">language<span class="opacity-60">: str = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">task<span class="opacity-60">: str = 
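As a quick orientation before the individual methods, the snippet below is a minimal usage sketch: it loads the tokenizer from the `openai/whisper-tiny` checkpoint used elsewhere in these docs and round-trips an illustrative transcription string.

```python
>>> from transformers import WhisperTokenizer

>>> # load the tokenizer and set the prefix tokens for English transcription
>>> tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-tiny", language="english", task="transcribe")

>>> # encode a transcription, then decode it back while stripping the special prefix tokens
>>> ids = tokenizer("Hello world").input_ids
>>> text = tokenizer.decode(ids, skip_special_tokens=True)
```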
#### set_prefix_tokens

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/tokenization_whisper.py#L385)

`( language: str = None, task: str = None, predict_timestamps: bool = None )`

**Parameters**

- **language** (`str`, *optional*, defaults to `None`) — The language of the transcription text.
- **task** (`str`, *optional*, defaults to `None`) — Task identifier to append at the start of the sequence (if any).
- **predict_timestamps** (`bool`, *optional*, defaults to `None`) — Whether to omit the `<|notimestamps|>` token at the start of the sequence.

Override the prefix tokens appended to the start of the label sequence. This method can be used standalone to update the prefix tokens as required when fine-tuning.
Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># instantiate the tokenizer and set the prefix token to Spanish</span> <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = WhisperTokenizer.from_pretrained(<span class="hljs-string">"openai/whisper-tiny"</span>, language=<span class="hljs-string">"spanish"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># now switch the prefix token from Spanish to French</span> <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer.set_prefix_tokens(language=<span class="hljs-string">"french"</span>)</pre></div></div></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.WhisperTokenizer.build_inputs_with_special_tokens"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>build_inputs_with_special_tokens</span></h4> <a id="transformers.WhisperTokenizer.build_inputs_with_special_tokens" 
class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.WhisperTokenizer.build_inputs_with_special_tokens"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/tokenization_whisper.py#L444" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_0<span class="opacity-60"></span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_1<span class="opacity-60"> = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> </div></div> <p data-svelte-h="svelte-wv4s2m">Build model inputs from a sequence by appending eos_token_id.</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.WhisperTokenizer.get_special_tokens_mask"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 
#### get_special_tokens_mask

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/tokenization_whisper.py#L452)

`( token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False )` → `List[int]`

**Parameters**

- **token_ids_0** (`List[int]`) — List of IDs.
- **token_ids_1** (`List[int]`, *optional*) — Optional second list of IDs for sequence pairs.
- **already_has_special_tokens** (`bool`, *optional*, defaults to `False`) — Whether or not the token list is already formatted with special tokens for the model.

**Returns**: `List[int]` — A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.

Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer `prepare_for_model` method.
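For illustration, a hedged sketch showing how the returned mask lines up with the encoded ids (same assumptions as above: the `openai/whisper-tiny` checkpoint and an arbitrary input string).

```python
>>> from transformers import WhisperTokenizer

>>> tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-tiny")

>>> # ids produced with special tokens already inserted
>>> ids = tokenizer("Hello world").input_ids

>>> # the mask marks special tokens with 1 and ordinary sequence tokens with 0
>>> mask = tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True)
>>> len(mask) == len(ids)
True
```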
data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_0<span class="opacity-60">: typing.List[int]</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_1<span class="opacity-60">: typing.Optional[typing.List[int]] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><code>List[int]</code></span></span></p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperTokenizer.create_token_type_ids_from_sequences.token_ids_0" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperTokenizer.create_token_type_ids_from_sequences.token_ids_0"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_ids_0</strong> (<code>List[int]</code>) — The first tokenized sequence.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperTokenizer.create_token_type_ids_from_sequences.token_ids_1" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperTokenizer.create_token_type_ids_from_sequences.token_ids_1"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_ids_1</strong> (<code>List[int]</code>, 
<em>optional</em>) — The second tokenized sequence.</span></span> </li></ul> <div id="transformers.WhisperTokenizer.create_token_type_ids_from_sequences.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><code>List[int]</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>The token type ids.</p> </p> </div></div> <p data-svelte-h="svelte-zj1vf1">Create the token type IDs corresponding to the sequences passed. <a href="../glossary#token-type-ids">What are token type IDs?</a></p> <p data-svelte-h="svelte-9vptpw">Should be overridden in a subclass if the model has a special way of building those.</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.WhisperTokenizer.save_vocabulary"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>save_vocabulary</span></h4> <a id="transformers.WhisperTokenizer.save_vocabulary" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.WhisperTokenizer.save_vocabulary"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/tokenization_whisper.py#L718" target="_blank"><span 
data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">save_directory<span class="opacity-60">: str</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">filename_prefix<span class="opacity-60">: typing.Optional[str] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> </div></div></div></div> <h2 class="relative group"><a id="transformers.WhisperTokenizerFast" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperTokenizerFast"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1m586ni">WhisperTokenizerFast</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.WhisperTokenizerFast"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">WhisperTokenizerFast</span></span></h3> <a id="transformers.WhisperTokenizerFast" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.WhisperTokenizerFast"><svg 
class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/tokenization_whisper_fast.py#L90" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">vocab_file<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">merges_file<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">normalizer_file<span class="opacity-60"> = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">tokenizer_file<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">unk_token<span class="opacity-60"> = '&lt;|endoftext|&gt;'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">bos_token<span class="opacity-60"> = '&lt;|endoftext|&gt;'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">eos_token<span class="opacity-60"> = '&lt;|endoftext|&gt;'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">add_prefix_space<span class="opacity-60"> = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">language<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">task<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">predict_timestamps<span class="opacity-60"> = False</span></span></span><span class="comma cursor-default"><span class="rounded 
hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 12 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperTokenizerFast.vocab_file" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperTokenizerFast.vocab_file"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>vocab_file</strong> (<code>str</code>) — Path to the vocabulary file.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperTokenizerFast.merges_file" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperTokenizerFast.merges_file"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>merges_file</strong> (<code>str</code>) — Path to the merges file.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperTokenizerFast.normalizer_file" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 
with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperTokenizerFast.normalizer_file"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>normalizer_file</strong> (<code>str</code>, <em>optional</em>, defaults to <code>None</code>) — Path to the normalizer_file file.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperTokenizerFast.errors" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperTokenizerFast.errors"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>errors</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"replace"</code>) — Paradigm to follow when decoding bytes to UTF-8. 
See <a href="https://docs.python.org/3/library/stdtypes.html#bytes.decode" rel="nofollow">bytes.decode</a> for more information.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperTokenizerFast.unk_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperTokenizerFast.unk_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>unk_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>&lt;|endoftext|&gt;</code>) — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperTokenizerFast.bos_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperTokenizerFast.bos_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>bos_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;|endoftext|&gt;"</code>) — The beginning of sequence token. 
The <code>decoder_start_token_id</code> is used to set the first token as <code>"&lt;|startoftranscript|&gt;"</code> when generating.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperTokenizerFast.eos_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperTokenizerFast.eos_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>eos_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>&lt;|endoftext|&gt;</code>) — The end of sequence token.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperTokenizerFast.add_prefix_space" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperTokenizerFast.add_prefix_space"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>add_prefix_space</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether or not to add an initial space to the input. This allows to treat the leading word just as any other word. 
(Whisper tokenizer detect beginning of words by the preceding space).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperTokenizerFast.trim_offsets" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperTokenizerFast.trim_offsets"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>trim_offsets</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether or not the post-processing step should trim offsets to avoid including whitespaces.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperTokenizerFast.language" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperTokenizerFast.language"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>language</strong> (<code>str</code>, <em>optional</em>) — The language of the transcription text. The corresponding language id token is appended to the start of the sequence for multilingual speech recognition and speech translation tasks, e.g. for Spanish the token <code>"&lt;|es|&gt;"</code> is appended to the start of sequence. 
This should be used for multilingual fine-tuning only.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperTokenizerFast.task" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperTokenizerFast.task"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>task</strong> (<code>str</code>, <em>optional</em>) — Task identifier to append at the start of sequence (if any). This should be used for mulitlingual fine-tuning, with <code>"transcribe"</code> for speech recognition and <code>"translate"</code> for speech translation.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperTokenizerFast.predict_timestamps" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperTokenizerFast.predict_timestamps"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>predict_timestamps</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether to omit the <code>&lt;|notimestamps|&gt;</code> token at the start of the sequence.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-14ct2lo">Construct a “fast” Whisper tokenizer (backed by HuggingFace’s <em>tokenizers</em> library).</p> <p data-svelte-h="svelte-ttxvs6">This tokenizer inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast">PreTrainedTokenizerFast</a> which contains most of the main methods. 
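For reference, here is a minimal sketch of loading the fast tokenizer with the multilingual arguments described above. The checkpoint name and the sample sentence are only placeholders; any Whisper checkpoint with a compatible tokenizer should behave the same way.

```python
>>> from transformers import WhisperTokenizerFast

>>> # load the tokenizer and configure the prefix tokens for Spanish transcription
>>> tokenizer = WhisperTokenizerFast.from_pretrained(
...     "openai/whisper-tiny", language="spanish", task="transcribe", predict_timestamps=False
... )
>>> # encoding a target transcription adds the configured prefix tokens and the eos token
>>> labels = tokenizer("Hola, ¿cómo estás?").input_ids
```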
#### set_prefix_tokens

`( language: str = None, task: str = None, predict_timestamps: bool = None )`

Parameters:

- **language** (`str`, *optional*, defaults to `None`) — The language of the transcription text.
- **task** (`str`, *optional*, defaults to `None`) — Task identifier to append at the start of the sequence (if any).
- **predict_timestamps** (`bool`, *optional*, defaults to `None`) — Whether to omit the `<|notimestamps|>` token at the start of the sequence.

Override the prefix tokens appended to the start of the label sequence. This method can be used standalone to update the prefix tokens as required when fine-tuning.
Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># instantiate the tokenizer and set the prefix token to Spanish</span> <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = WhisperTokenizerFast.from_pretrained(<span class="hljs-string">"openai/whisper-tiny"</span>, language=<span class="hljs-string">"spanish"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># now switch the prefix token from Spanish to French</span> <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer.set_prefix_tokens(language=<span class="hljs-string">"french"</span>)</pre></div></div></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.WhisperTokenizerFast.build_inputs_with_special_tokens"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>build_inputs_with_special_tokens</span></h4> <a 
id="transformers.WhisperTokenizerFast.build_inputs_with_special_tokens" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.WhisperTokenizerFast.build_inputs_with_special_tokens"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/tokenization_whisper_fast.py#L495" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_0<span class="opacity-60"></span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_1<span class="opacity-60"> = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> </div></div> <p data-svelte-h="svelte-wv4s2m">Build model inputs from a sequence by appending eos_token_id.</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.WhisperTokenizerFast.get_special_tokens_mask"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 
#### get_special_tokens_mask

`( token_ids_0: typing.List[int], token_ids_1: typing.Optional[typing.List[int]] = None, already_has_special_tokens: bool = False ) → List[int]`

Parameters:

- **token_ids_0** (`List[int]`) — List of IDs.
- **token_ids_1** (`List[int]`, *optional*) — Optional second list of IDs for sequence pairs.
- **already_has_special_tokens** (`bool`, *optional*, defaults to `False`) — Whether or not the token list is already formatted with special tokens for the model.
class="text-base">Returns</p> <p><code>List[int]</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.</p> </p> </div></div> <p data-svelte-h="svelte-1f4f5kp">Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer <code>prepare_for_model</code> method.</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.WhisperTokenizerFast.create_token_type_ids_from_sequences"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>create_token_type_ids_from_sequences</span></h4> <a id="transformers.WhisperTokenizerFast.create_token_type_ids_from_sequences" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.WhisperTokenizerFast.create_token_type_ids_from_sequences"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/tokenization_utils_base.py#L3305" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span 
data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_0<span class="opacity-60">: typing.List[int]</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_1<span class="opacity-60">: typing.Optional[typing.List[int]] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><code>List[int]</code></span></span></p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperTokenizerFast.create_token_type_ids_from_sequences.token_ids_0" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperTokenizerFast.create_token_type_ids_from_sequences.token_ids_0"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_ids_0</strong> (<code>List[int]</code>) — The first tokenized sequence.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperTokenizerFast.create_token_type_ids_from_sequences.token_ids_1" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperTokenizerFast.create_token_type_ids_from_sequences.token_ids_1"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 
11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_ids_1</strong> (<code>List[int]</code>, <em>optional</em>) — The second tokenized sequence.</span></span> </li></ul> <div id="transformers.WhisperTokenizerFast.create_token_type_ids_from_sequences.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><code>List[int]</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>The token type ids.</p> </p> </div></div> <p data-svelte-h="svelte-zj1vf1">Create the token type IDs corresponding to the sequences passed. <a href="../glossary#token-type-ids">What are token type IDs?</a></p> <p data-svelte-h="svelte-9vptpw">Should be overridden in a subclass if the model has a special way of building those.</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.WhisperTokenizerFast.save_vocabulary"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>save_vocabulary</span></h4> <a id="transformers.WhisperTokenizerFast.save_vocabulary" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.WhisperTokenizerFast.save_vocabulary"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline 
text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/tokenization_whisper_fast.py#L406" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">save_directory<span class="opacity-60">: str</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">filename_prefix<span class="opacity-60">: typing.Optional[str] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> </div></div></div></div> <h2 class="relative group"><a id="transformers.WhisperFeatureExtractor" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperFeatureExtractor"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-16q3cbr">WhisperFeatureExtractor</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.WhisperFeatureExtractor"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span 
class="font-semibold">WhisperFeatureExtractor</span></span></h3> <a id="transformers.WhisperFeatureExtractor" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.WhisperFeatureExtractor"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/feature_extraction_whisper.py#L32" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">feature_size<span class="opacity-60"> = 80</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">sampling_rate<span class="opacity-60"> = 16000</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">hop_length<span class="opacity-60"> = 160</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">chunk_length<span class="opacity-60"> = 30</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">n_fft<span class="opacity-60"> = 400</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">padding_value<span class="opacity-60"> = 0.0</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_attention_mask<span class="opacity-60"> = False</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperFeatureExtractor.feature_size" class="header-link 
block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperFeatureExtractor.feature_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>feature_size</strong> (<code>int</code>, defaults to 80) — The feature dimension of the extracted features.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperFeatureExtractor.sampling_rate" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperFeatureExtractor.sampling_rate"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>sampling_rate</strong> (<code>int</code>, defaults to 16000) — The sampling rate at which the audio files should be digitalized expressed in hertz (Hz).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperFeatureExtractor.hop_length" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperFeatureExtractor.hop_length"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 
56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>hop_length</strong> (<code>int</code>, defaults to 160) — Length of the overlaping windows for the STFT used to obtain the Mel Frequency coefficients.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperFeatureExtractor.chunk_length" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperFeatureExtractor.chunk_length"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>chunk_length</strong> (<code>int</code>, defaults to 30) — The maximum number of chuncks of <code>sampling_rate</code> samples used to trim and pad longer or shorter audio sequences.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperFeatureExtractor.n_fft" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperFeatureExtractor.n_fft"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>n_fft</strong> (<code>int</code>, defaults to 400) — Size of the Fourier transform.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperFeatureExtractor.padding_value" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperFeatureExtractor.padding_value"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 
1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>padding_value</strong> (<code>float</code>, <em>optional</em>, defaults to 0.0) — Padding value used to pad the audio. Should correspond to silences.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1xbhurt">Constructs a Whisper feature extractor.</p> <p data-svelte-h="svelte-bnr2z1">This feature extractor inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/feature_extractor#transformers.SequenceFeatureExtractor">SequenceFeatureExtractor</a> which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.</p> <p data-svelte-h="svelte-1lv9ra7">This class extracts mel-filter bank features from raw speech using a custom numpy implementation of the <code>Short Time Fourier Transform</code> which should match pytorch’s <code>torch.stft</code> equivalent.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.WhisperFeatureExtractor.__call__"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>__call__</span></h4> <a id="transformers.WhisperFeatureExtractor.__call__" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.WhisperFeatureExtractor.__call__"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 
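As a minimal sketch of how these defaults are used in practice, the extractor can either be instantiated directly or loaded from a pretrained checkpoint; the `openai/whisper-tiny` model id below is only an illustrative choice:

```python
from transformers import WhisperFeatureExtractor

# Instantiate with the documented defaults: 80 mel bins, 16 kHz audio, 30 s chunks
feature_extractor = WhisperFeatureExtractor()

# Or load the configuration that ships with a pretrained checkpoint
feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-tiny")
print(feature_extractor.feature_size, feature_extractor.sampling_rate)  # 80 16000
```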
#### __call__

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/feature_extraction_whisper.py#L136)

`( raw_speech: typing.Union[numpy.ndarray, typing.List[float], typing.List[numpy.ndarray], typing.List[typing.List[float]]], truncation: bool = True, pad_to_multiple_of: typing.Optional[int] = None, return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None, return_attention_mask: typing.Optional[bool] = None, padding: typing.Optional[str] = 'max_length', max_length: typing.Optional[int] = None, sampling_rate: typing.Optional[int] = None, do_normalize: typing.Optional[bool] = None, **kwargs )`

**Parameters**

- **raw_speech** (`np.ndarray`, `List[float]`, `List[np.ndarray]`, `List[List[float]]`) — The sequence or batch of sequences to be padded. Each sequence can be a numpy array, a list of float values, a list of numpy arrays or a list of lists of float values. Must be mono channel audio, not stereo, i.e. a single float per timestep.
- **truncation** (`bool`, *optional*, defaults to `True`) — Activates truncation to cut input sequences longer than *max_length* to *max_length*.
- **pad_to_multiple_of** (`int`, *optional*) — If set, will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability `>= 7.5` (Volta), or on TPUs which benefit from having sequence lengths be a multiple of 128.
- **return_attention_mask** (`bool`, *optional*) — Whether to return the attention mask. If left to the default, will return the attention mask according to the specific feature extractor's default. [What are attention masks?](../glossary#attention-mask) For Whisper models, `attention_mask` should always be passed for batched inference, to avoid subtle bugs.
- **return_tensors** (`str` or [TensorType](/docs/transformers/v4.34.0/en/internal/file_utils#transformers.TensorType), *optional*) — If set, will return tensors instead of lists of python integers. Acceptable values are:
  - `'tf'`: Return TensorFlow `tf.constant` objects.
  - `'pt'`: Return PyTorch `torch.Tensor` objects.
  - `'np'`: Return Numpy `np.ndarray` objects.
- **sampling_rate** (`int`, *optional*) — The sampling rate at which the `raw_speech` input was sampled. It is strongly recommended to pass `sampling_rate` at the forward call to prevent silent errors and to allow the automatic speech recognition pipeline to work correctly.
- **padding_value** (`float`, defaults to 0.0) — The value that is used to fill the padding values / vectors.
- **do_normalize** (`bool`, *optional*, defaults to `False`) — Whether or not to zero-mean unit-variance normalize the input. Normalizing can help to significantly improve the performance of the model.

Main method to featurize and prepare one or several audio sequence(s) for the model.
class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">WhisperProcessor</span></span></h3> <a id="transformers.WhisperProcessor" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.WhisperProcessor"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/processing_whisper.py#L23" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">feature_extractor<span class="opacity-60"></span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">tokenizer<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperProcessor.feature_extractor" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperProcessor.feature_extractor"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>feature_extractor</strong> (<code>WhisperFeatureExtractor</code>) — An instance of <a 
href="/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperFeatureExtractor">WhisperFeatureExtractor</a>. The feature extractor is a required input.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperProcessor.tokenizer" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperProcessor.tokenizer"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>tokenizer</strong> (<code>WhisperTokenizer</code>) — An instance of <a href="/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperTokenizer">WhisperTokenizer</a>. The tokenizer is a required input.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1g1myb6">Constructs a Whisper processor which wraps a Whisper feature extractor and a Whisper tokenizer into a single processor.</p> <p data-svelte-h="svelte-1icdagf"><a href="/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperProcessor">WhisperProcessor</a> offers all the functionalities of <a href="/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperFeatureExtractor">WhisperFeatureExtractor</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperTokenizer">WhisperTokenizer</a>. 
#### __call__

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/processing_whisper.py#L48)

`( *args, **kwargs )`

Forwards the `audio` argument to WhisperFeatureExtractor's [__call__()](/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperFeatureExtractor.__call__) and the `text` argument to WhisperTokenizer's `__call__()`. Please refer to the docstrings of the above two methods for more information.
class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">pretrained_model_name_or_path<span class="opacity-60">: typing.Union[str, os.PathLike]</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">cache_dir<span class="opacity-60">: typing.Union[str, os.PathLike, NoneType] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">force_download<span class="opacity-60">: bool = False</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">local_files_only<span class="opacity-60">: bool = False</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token<span class="opacity-60">: typing.Union[bool, str, NoneType] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">revision<span class="opacity-60">: str = 'main'</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperProcessor.from_pretrained.pretrained_model_name_or_path" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperProcessor.from_pretrained.pretrained_model_name_or_path"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>pretrained_model_name_or_path</strong> (<code>str</code> or <code>os.PathLike</code>) — This can be either:<p></p> <ul> <li>a string, the <em>model id</em> of a pretrained feature_extractor hosted inside a model repo on huggingface.co. 
Valid model ids can be located at the root-level, like <code>bert-base-uncased</code>, or namespaced under a user or organization name, like <code>dbmdz/bert-base-german-cased</code>.</li> <li>a path to a <em>directory</em> containing a feature extractor file saved using the <a href="/docs/transformers/v4.34.0/en/main_classes/feature_extractor#transformers.FeatureExtractionMixin.save_pretrained">save_pretrained()</a> method, e.g., <code>./my_model_directory/</code>.</li> <li>a path or url to a saved feature extractor JSON <em>file</em>, e.g., <code>./my_model_directory/preprocessor_config.json</code>. **kwargs — Additional keyword arguments passed along to both <a href="/docs/transformers/v4.34.0/en/main_classes/feature_extractor#transformers.FeatureExtractionMixin.from_pretrained">from_pretrained()</a> and <code>~tokenization_utils_base.PreTrainedTokenizer.from_pretrained</code>.</li> </ul></span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1cj8dcb">Instantiate a processor associated with a pretrained model.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-gbwq5r">This class method is simply calling the feature extractor <a href="/docs/transformers/v4.34.0/en/main_classes/feature_extractor#transformers.FeatureExtractionMixin.from_pretrained">from_pretrained()</a>, image processor <a href="/docs/transformers/v4.34.0/en/main_classes/image_processor#transformers.ImageProcessingMixin">ImageProcessingMixin</a> and the tokenizer <code>~tokenization_utils_base.PreTrainedTokenizer.from_pretrained</code> methods. Please refer to the docstrings of the methods above for more information.</p></div></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.WhisperProcessor.save_pretrained"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>save_pretrained</span></h4> <a id="transformers.WhisperProcessor.save_pretrained" class="header-link 
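For example (the Hub model id is illustrative; a local directory produced by `save_pretrained()` works the same way):

```python
from transformers import WhisperProcessor

# From a checkpoint on the Hugging Face Hub
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")

# Or from a local directory created with save_pretrained()
processor = WhisperProcessor.from_pretrained("./my_model_directory/")
```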
#### save_pretrained

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/processing_utils.py#L93)

`( save_directory, push_to_hub: bool = False, **kwargs )`

**Parameters**

- **save_directory** (`str` or `os.PathLike`) — Directory where the feature extractor JSON file and the tokenizer files will be saved (the directory will be created if it does not exist).
- **push_to_hub** (`bool`, *optional*, defaults to `False`) — Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the repository you want to push to with `repo_id` (will default to the name of `save_directory` in your namespace).
- **kwargs** (`Dict[str, Any]`, *optional*) — Additional keyword arguments passed along to the [push_to_hub()](/docs/transformers/v4.34.0/en/main_classes/processors#transformers.ProcessorMixin.push_to_hub) method.

Saves the attributes of this processor (feature extractor, tokenizer…) in the specified directory so that it can be reloaded using the [from_pretrained()](/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperProcessor.from_pretrained) method.

Note: this class method simply calls the feature extractor's [save_pretrained()](/docs/transformers/v4.34.0/en/main_classes/feature_extractor#transformers.FeatureExtractionMixin.save_pretrained) and the tokenizer's [save_pretrained()](/docs/transformers/v4.34.0/en/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.save_pretrained) methods. Please refer to the docstrings of the methods above for more information.
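A brief sketch of the save/reload round trip (the directory name and checkpoint are illustrative):

```python
from transformers import WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")

# Writes the feature extractor JSON file and the tokenizer files into the directory
processor.save_pretrained("./whisper-processor")

# The same directory can later be passed back to from_pretrained()
reloaded = WhisperProcessor.from_pretrained("./whisper-processor")
```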
This class method is simply calling the feature extractor's `save_pretrained()` and the tokenizer's `save_pretrained()`. Please refer to the docstrings of these methods for more information.

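For illustration, a minimal sketch of the save/reload round trip described above; the local directory name is only an example, and the `openai/whisper-base` checkpoint is used purely for demonstration:

```python
>>> from transformers import WhisperProcessor

>>> processor = WhisperProcessor.from_pretrained("openai/whisper-base")

>>> # writes the feature extractor JSON file and the tokenizer files to the directory;
>>> # passing push_to_hub=True would additionally upload them to the Hub under your namespace
>>> processor.save_pretrained("./whisper-base-local")

>>> # the saved processor can then be reloaded from that same directory
>>> processor = WhisperProcessor.from_pretrained("./whisper-base-local")
```
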
#### batch_decode

( *args, **kwargs )

This method forwards all its arguments to WhisperTokenizer's `batch_decode()`. Please refer to the docstring of this method for more information.

#### decode

( *args, **kwargs )

This method forwards all its arguments to WhisperTokenizer's `decode()`. Please refer to the docstring of this method for more information.

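As a non-authoritative sketch of how these two methods are typically used, generated token IDs can be converted back into text with the processor; the checkpoint and the dummy LibriSpeech dataset below are only examples:

```python
>>> from datasets import load_dataset
>>> from transformers import WhisperForConditionalGeneration, WhisperProcessor

>>> processor = WhisperProcessor.from_pretrained("openai/whisper-base")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-base")

>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> inputs = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt")

>>> predicted_ids = model.generate(inputs.input_features)

>>> # batch_decode handles a batch of sequences, decode a single sequence
>>> transcriptions = processor.batch_decode(predicted_ids, skip_special_tokens=True)
>>> transcription = processor.decode(predicted_ids[0], skip_special_tokens=True)
```
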
## WhisperModel

### class transformers.WhisperModel

( config: WhisperConfig )

Parameters:

- **config** (`WhisperConfig`) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration.
  Check out the `from_pretrained()` method to load the model weights.

The bare Whisper Model outputting raw hidden-states without any specific head on top. This model inherits from `PreTrainedModel`. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.).

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

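As a short illustration of the note above (the checkpoint name is only an example), initializing from a configuration gives randomly initialized weights, whereas `from_pretrained()` loads trained weights:

```python
>>> from transformers import WhisperConfig, WhisperModel

>>> # randomly initialized weights, defined only by the configuration
>>> config = WhisperConfig()
>>> model = WhisperModel(config)

>>> # pretrained weights loaded from a checkpoint
>>> model = WhisperModel.from_pretrained("openai/whisper-base")
```
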
#### forward

( input_features: typing.Optional[torch.FloatTensor] = None, attention_mask: typing.Optional[torch.LongTensor] = None, decoder_input_ids: typing.Optional[torch.LongTensor] = None, decoder_attention_mask: typing.Optional[torch.LongTensor] = None, head_mask: typing.Optional[torch.Tensor] = None, decoder_head_mask: typing.Optional[torch.Tensor] = None, cross_attn_head_mask: typing.Optional[torch.Tensor] = None, encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None, past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None, decoder_inputs_embeds: typing.Optional[typing.Tuple[torch.FloatTensor]] = None, use_cache: typing.Optional[bool] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None,
return_dict: typing.Optional[bool] = None ) → `transformers.modeling_outputs.Seq2SeqModelOutput` or `tuple(torch.FloatTensor)`

Parameters:

- **input_features** (`torch.FloatTensor` of shape `(batch_size, feature_size, sequence_length)`) — Float values of mel features extracted from the raw speech waveform. The raw speech waveform can be obtained by loading a `.flac` or `.wav` audio file into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip install soundfile`). To prepare the array into `input_features`, the `AutoFeatureExtractor` should be used for extracting the mel features, padding and conversion into a tensor of type `torch.FloatTensor`.
  See `WhisperFeatureExtractor.__call__()`.
- **attention_mask** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing *SpecAugment* data augmentation on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.

  [What are attention masks?](../glossary#attention-mask)
- **decoder_input_ids** (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*) — Indices of decoder input sequence tokens in the vocabulary.

  Indices can be obtained using `WhisperTokenizer`.
  See `PreTrainedTokenizer.encode()` and `PreTrainedTokenizer.__call__()` for details.

  [What are decoder input IDs?](../glossary#decoder-input-ids)

  Whisper uses the `decoder_start_token_id` as the starting token for `decoder_input_ids` generation. If `past_key_values` is used, optionally only the last `decoder_input_ids` have to be input (see `past_key_values`).
- **decoder_attention_mask** (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*) — Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. A causal mask will also be used by default.

  If you want to change padding behavior, you should read `modeling_whisper._prepare_decoder_attention_mask` and modify it to your needs.
  See diagram 1 in [the BART paper](https://arxiv.org/abs/1910.13461) for more information on the default strategy.
- **head_mask** (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in `[0, 1]`:
  - 1 indicates the head is **not masked**,
  - 0 indicates the head is **masked**.
- **decoder_head_mask** (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*) — Mask to nullify selected heads of the attention modules in the decoder.
  Mask values selected in `[0, 1]`:
  - 1 indicates the head is **not masked**,
  - 0 indicates the head is **masked**.
- **cross_attn_head_mask** (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*) — Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`:
  - 1 indicates the head is **not masked**,
  - 0 indicates the head is **masked**.
- **encoder_outputs** (`tuple(tuple(torch.FloatTensor))`, *optional*) — Tuple consists of (`last_hidden_state`, *optional*: `hidden_states`, *optional*: `attentions`). `last_hidden_state`, of shape `(batch_size, sequence_length, hidden_size)`, is a sequence of hidden-states at the output of the last layer of the encoder.
  Used in the cross-attention of the decoder.
- **past_key_values** (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`) — Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)` and 2 additional tensors of shape `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`.

  Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding; a short sketch of this is given after the example below.

  If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)`.
- **decoder_inputs_embeds**
  (`torch.FloatTensor` of shape `(batch_size, target_sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `decoder_input_ids` you can choose to directly pass an embedded representation. If `past_key_values` is used, optionally only the last `decoder_inputs_embeds` have to be input (see `past_key_values`). This is useful if you want more control over how to convert `decoder_input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **use_cache** (`bool`, *optional*) — If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`).
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers.
  See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a `ModelOutput` instead of a plain tuple.

Returns: `transformers.modeling_outputs.Seq2SeqModelOutput` or `tuple(torch.FloatTensor)`

A `transformers.modeling_outputs.Seq2SeqModelOutput` or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration (`WhisperConfig`) and inputs.

- **last_hidden_state** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`) — Sequence of hidden-states at the output of the last layer of the decoder of the model.

  If `past_key_values` is used, only the last hidden-state of the sequences of shape `(batch_size, 1, hidden_size)` is output.

- **past_key_values** (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`) — Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)` and 2 additional tensors of shape `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`.

  Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.

- **decoder_hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.

  Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs.

- **decoder_attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`.

  Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.

- **cross_attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`.

  Attention weights of the decoder's cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.

- **encoder_last_hidden_state** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Sequence of hidden-states at the output of the last layer of the encoder of the model.

- **encoder_hidden_states**
  (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.

  Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs.

- **encoder_attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`.

  Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.

The `WhisperModel` forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

```python
>>> import torch
>>> from transformers import AutoFeatureExtractor, WhisperModel
>>> from datasets import load_dataset

>>> model = WhisperModel.from_pretrained("openai/whisper-base")
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("openai/whisper-base")
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> inputs = feature_extractor(ds[0]["audio"]["array"], return_tensors="pt")
>>> input_features = inputs.input_features
>>> decoder_input_ids = torch.tensor([[1, 1]]) * model.config.decoder_start_token_id
>>> last_hidden_state = model(input_features, decoder_input_ids=decoder_input_ids).last_hidden_state
>>> list(last_hidden_state.shape)
[1, 2, 512]
```

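Continuing the example above, here is a hedged sketch of how the `encoder_outputs` and `past_key_values` arguments can be used to avoid recomputing the encoder and the already-processed decoder states. It assumes the encoder is reachable via `model.get_encoder()` and reuses `model`, `input_features` and `decoder_input_ids` from the snippet above; it is not a definitive decoding loop:

```python
>>> # run the encoder once and reuse its output for every decoder call
>>> encoder_outputs = model.get_encoder()(input_features)

>>> outputs = model(encoder_outputs=encoder_outputs, decoder_input_ids=decoder_input_ids, use_cache=True)
>>> past_key_values = outputs.past_key_values

>>> # with the cache, only the newly added decoder token needs to be fed in
>>> next_token = torch.tensor([[model.config.decoder_start_token_id]])
>>> outputs = model(
...     encoder_outputs=encoder_outputs,
...     decoder_input_ids=next_token,
...     past_key_values=past_key_values,
...     use_cache=True,
... )
```
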
#### _mask_input_features

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/modeling_whisper.py#L1255)

( input_features: FloatTensor, attention_mask: typing.Optional[torch.LongTensor] = None )

Masks extracted features along the time axis and/or the feature axis, following [SpecAugment](https://arxiv.org/abs/1904.08779).
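To make SpecAugment-style masking concrete, here is a minimal, self-contained sketch of the idea: zero out a random contiguous block of time steps and a random contiguous block of mel bins in a feature batch. The function name `spec_augment_sketch` and the mask widths are illustrative assumptions for this sketch; they are not the method's actual implementation or its configuration-driven masking behaviour.

```python
import torch

def spec_augment_sketch(features: torch.Tensor, max_time_mask: int = 50, max_feature_mask: int = 10) -> torch.Tensor:
    """Illustrative SpecAugment-style masking for tensors of shape (batch, feature_size, sequence_length)."""
    batch_size, feature_size, seq_len = features.shape
    masked = features.clone()
    for b in range(batch_size):
        # Mask a random contiguous span along the time axis.
        t_width = torch.randint(1, max_time_mask + 1, (1,)).item()
        t_start = torch.randint(0, max(seq_len - t_width, 1), (1,)).item()
        masked[b, :, t_start : t_start + t_width] = 0.0

        # Mask a random contiguous span along the feature (mel-bin) axis.
        f_width = torch.randint(1, max_feature_mask + 1, (1,)).item()
        f_start = torch.randint(0, max(feature_size - f_width, 1), (1,)).item()
        masked[b, f_start : f_start + f_width, :] = 0.0
    return masked

# Example: a dummy log-mel batch shaped like Whisper inputs (80 mel bins, 3000 frames).
features = torch.randn(1, 80, 3000)
augmented = spec_augment_sketch(features)
print(augmented.shape)  # torch.Size([1, 80, 3000])
```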
## WhisperForConditionalGeneration

### class transformers.WhisperForConditionalGeneration

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/modeling_whisper.py#L1395)

( config: WhisperConfig )

Parameters:

- **config** ([WhisperConfig](https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](https://huggingface.co/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

The Whisper model with a language modeling head, which can be used for automatic speech recognition. This model inherits from [PreTrainedModel](https://huggingface.co/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

#### forward

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/modeling_whisper.py#L1429)

( input_features: typing.Optional[torch.FloatTensor] = None, attention_mask: typing.Optional[torch.LongTensor] = None, decoder_input_ids: typing.Optional[torch.LongTensor] = None, decoder_attention_mask: typing.Optional[torch.LongTensor] = None, head_mask: typing.Optional[torch.Tensor] = None, decoder_head_mask: typing.Optional[torch.Tensor] = None, cross_attn_head_mask: typing.Optional[torch.Tensor] = None, encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None, past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None, decoder_inputs_embeds: typing.Optional[typing.Tuple[torch.FloatTensor]] = None, labels: typing.Optional[torch.LongTensor] = None, use_cache: typing.Optional[bool] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → [transformers.modeling_outputs.Seq2SeqLMOutput](https://huggingface.co/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.Seq2SeqLMOutput) or `tuple(torch.FloatTensor)`

Parameters:

- **input_features** (`torch.FloatTensor` of shape `(batch_size, feature_size, sequence_length)`) — Float values of mel features extracted from the raw speech waveform. The raw waveform can be obtained by loading a `.flac` or `.wav` audio file into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip install soundfile`). To prepare the array into `input_features`, the [AutoFeatureExtractor](https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoFeatureExtractor) should be used for extracting the mel features, padding, and conversion into a tensor of type `torch.FloatTensor`. See [WhisperFeatureExtractor.__call__()](https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperFeatureExtractor.__call__).
- **attention_mask** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing *SpecAugment* data augmentation on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](https://huggingface.co/docs/transformers/v4.34.0/en/glossary#attention-mask)
- **decoder_input_ids** (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*) — Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using [WhisperTokenizer](https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperTokenizer). See [PreTrainedTokenizer.encode()](https://huggingface.co/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and `PreTrainedTokenizer.__call__()` for details. [What are decoder input IDs?](https://huggingface.co/docs/transformers/v4.34.0/en/glossary#decoder-input-ids) Whisper uses the `decoder_start_token_id` as the starting token for `decoder_input_ids` generation. If `past_key_values` is used, optionally only the last `decoder_input_ids` have to be input (see `past_key_values`).
- **decoder_attention_mask** (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*) — Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. A causal mask will also be used by default. If you want to change the padding behavior, you should read `modeling_whisper._prepare_decoder_attention_mask` and modify it to your needs. See diagram 1 in [the BART paper](https://arxiv.org/abs/1910.13461) for more information on the default strategy.
- **head_mask** (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in `[0, 1]`: 1 indicates the head is **not masked**, 0 indicates the head is **masked**.
- **decoder_head_mask** (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*) — Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in `[0, 1]`: 1 indicates the head is **not masked**, 0 indicates the head is **masked**.
- **cross_attn_head_mask** (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*) — Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: 1 indicates the head is **not masked**, 0 indicates the head is **masked**.
- **encoder_outputs** (`tuple(tuple(torch.FloatTensor))`, *optional*) — Tuple consisting of (`last_hidden_state`, *optional*: `hidden_states`, *optional*: `attentions`), where `last_hidden_state` of shape `(batch_size, sequence_length, hidden_size)` is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
- **past_key_values** (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`) — Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)` and 2 additional tensors of shape `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`. Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see the `past_key_values` input) to speed up sequential decoding. If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)`.
- **decoder_inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, target_sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `decoder_input_ids` you can choose to directly pass an embedded representation. If `past_key_values` is used, optionally only the last `decoder_inputs_embeds` have to be input (see `past_key_values`). This is useful if you want more control over how to convert `decoder_input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **use_cache** (`bool`, *optional*) — If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`).
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attention tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](https://huggingface.co/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.
- **labels** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Labels for computing the language modeling loss. Indices should either be in `[0, ..., config.vocab_size]` or -100 (see the `input_ids` docstring). Tokens with indices set to `-100` are ignored (masked); the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`. A minimal training-style sketch that passes `labels` is shown after the generation example below.

Returns: [transformers.modeling_outputs.Seq2SeqLMOutput](https://huggingface.co/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.Seq2SeqLMOutput) or `tuple(torch.FloatTensor)`

A [transformers.modeling_outputs.Seq2SeqLMOutput](https://huggingface.co/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.Seq2SeqLMOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([WhisperConfig](https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperConfig)) and inputs.

- **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) — Language modeling loss.
- **logits** (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
- **past_key_values** (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`) — Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)` and 2 additional tensors of shape `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`. Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see the `past_key_values` input) to speed up sequential decoding.
- **decoder_hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
- **decoder_attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
- **cross_attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the decoder's cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
- **encoder_last_hidden_state** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
- **encoder_hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
- **encoder_attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.

The [WhisperForConditionalGeneration](https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperForConditionalGeneration) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperForConditionalGeneration.forward.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoProcessor, WhisperForConditionalGeneration <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> datasets <span class="hljs-keyword">import</span> load_dataset <span class="hljs-meta">&gt;&gt;&gt; </span>processor = AutoProcessor.from_pretrained(<span class="hljs-string">"openai/whisper-tiny.en"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = WhisperForConditionalGeneration.from_pretrained(<span class="hljs-string">"openai/whisper-tiny.en"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>ds = load_dataset(<span class="hljs-string">"hf-internal-testing/librispeech_asr_dummy"</span>, <span class="hljs-string">"clean"</span>, split=<span class="hljs-string">"validation"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = processor(ds[<span class="hljs-number">0</span>][<span class="hljs-string">"audio"</span>][<span class="hljs-string">"array"</span>], return_tensors=<span class="hljs-string">"pt"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>input_features = inputs.input_features <span 
class="hljs-meta">&gt;&gt;&gt; </span>generated_ids = model.generate(inputs=input_features) <span class="hljs-meta">&gt;&gt;&gt; </span>transcription = processor.batch_decode(generated_ids, skip_special_tokens=<span class="hljs-literal">True</span>)[<span class="hljs-number">0</span>] <span class="hljs-meta">&gt;&gt;&gt; </span>transcription <span class="hljs-string">' Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.'</span></pre></div></div></div></div> <h2 class="relative group"><a id="transformers.WhisperForAudioClassification" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperForAudioClassification"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-14wy4ry">WhisperForAudioClassification</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.WhisperForAudioClassification"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">WhisperForAudioClassification</span></span></h3> <a id="transformers.WhisperForAudioClassification" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.WhisperForAudioClassification"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 
## WhisperForAudioClassification

### class transformers.WhisperForAudioClassification

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/modeling_whisper.py#L1866)

( config )

Parameters:

- **input_features** (`torch.FloatTensor` of shape `(batch_size, feature_size, sequence_length)`) — Float values of mel features extracted from the raw speech waveform. The raw waveform can be obtained by loading a `.flac` or `.wav` audio file into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip install soundfile`). To prepare the array into `input_features`, the [AutoFeatureExtractor](https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoFeatureExtractor) should be used for extracting the mel features, padding, and conversion into a tensor of type `torch.FloatTensor`. See [WhisperFeatureExtractor.__call__()](https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperFeatureExtractor.__call__).
- **head_mask** (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in `[0, 1]`: 1 indicates the head is **not masked**, 0 indicates the head is **masked**.
- **encoder_outputs** (`tuple(tuple(torch.FloatTensor))`, *optional*) — Tuple consisting of (`last_hidden_state`, *optional*: `hidden_states`, *optional*: `attentions`), where `last_hidden_state` of shape `(batch_size, sequence_length, hidden_size)` is a sequence of hidden-states at the output of the last layer of the encoder.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attention tensors of all attention layers.
See <code>attentions</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperForAudioClassification.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperForAudioClassification.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. See <code>hidden_states</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.WhisperForAudioClassification.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperForAudioClassification.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1y2nev0">Whisper Encoder Model with a sequence classification head on top (a linear layer over the pooled output) for tasks like SUPERB Keyword Spotting.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.WhisperForAudioClassification.forward"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 
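For orientation, the snippet below is a minimal sketch of how the classification head might be instantiated on top of a pretrained encoder. The `openai/whisper-base` checkpoint and the 10-label setup are illustrative assumptions rather than part of the reference above.

```python
>>> from transformers import WhisperForAudioClassification

>>> # Minimal sketch (assumed checkpoint and label count): load a pretrained Whisper encoder
>>> # and attach a freshly initialized 10-way classification head on top of it.
>>> model = WhisperForAudioClassification.from_pretrained("openai/whisper-base", num_labels=10)

>>> # The projector and classifier layers are newly initialized, so the model should be
>>> # fine-tuned on a labelled audio-classification dataset before it is used for inference.
```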
#### forward

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/modeling_whisper.py#L1893)

`( input_features: typing.Optional[torch.LongTensor] = None, head_mask: typing.Optional[torch.Tensor] = None, encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None, labels: typing.Optional[torch.LongTensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None )` → [transformers.modeling_outputs.SequenceClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput) or `tuple(torch.FloatTensor)`

Parameters:

- **input_features** (`torch.FloatTensor` of shape `(batch_size, feature_size, sequence_length)`) — Float values of mel features extracted from the raw speech waveform. The raw speech waveform can be obtained by loading a `.flac` or `.wav` audio file into an array of type `List[float]` or a `numpy.ndarray`, e.g. via the soundfile library (`pip install soundfile`). To prepare the array into `input_features`, the [AutoFeatureExtractor](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoFeatureExtractor) should be used for extracting the mel features, padding and conversion into a tensor of type `torch.FloatTensor`. See [`__call__()`](/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperFeatureExtractor.__call__).
- **head_mask** (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in `[0, 1]`:
  - 1 indicates the head is **not masked**,
  - 0 indicates the head is **masked**.
- **encoder_outputs** (`tuple(tuple(torch.FloatTensor))`, *optional*) — Tuple consisting of (`last_hidden_state`, *optional*: `hidden_states`, *optional*: `attentions`). `last_hidden_state`, of shape `(batch_size, sequence_length, hidden_size)`, is a sequence of hidden-states at the output of the last layer of the encoder.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.
- **labels** (`torch.LongTensor` of shape `(batch_size,)`, *optional*) — Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., config.num_labels - 1]`. If `config.num_labels == 1`, a regression loss is computed (Mean-Square loss); if `config.num_labels > 1`, a classification loss is computed (Cross-Entropy).

Returns: [transformers.modeling_outputs.SequenceClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput) or `tuple(torch.FloatTensor)`

A [transformers.modeling_outputs.SequenceClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([WhisperConfig](/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperConfig)) and inputs.

- **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) — Classification (or regression if `config.num_labels==1`) loss.
- **logits** (`torch.FloatTensor` of shape `(batch_size, config.num_labels)`) — Classification (or regression if `config.num_labels==1`) scores (before SoftMax).
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
</div></div> <p data-svelte-h="svelte-1juza2r">The <a href="/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperForAudioClassification">WhisperForAudioClassification</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.WhisperForAudioClassification.forward.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.WhisperForAudioClassification.forward.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoFeatureExtractor, WhisperForAudioClassification <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> datasets <span 
class="hljs-keyword">import</span> load_dataset <span class="hljs-meta">&gt;&gt;&gt; </span>feature_extractor = AutoFeatureExtractor.from_pretrained(<span class="hljs-string">"sanchit-gandhi/whisper-medium-fleurs-lang-id"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = WhisperForAudioClassification.from_pretrained(<span class="hljs-string">"sanchit-gandhi/whisper-medium-fleurs-lang-id"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>ds = load_dataset(<span class="hljs-string">"google/fleurs"</span>, <span class="hljs-string">"all"</span>, split=<span class="hljs-string">"validation"</span>, streaming=<span class="hljs-literal">True</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>sample = <span class="hljs-built_in">next</span>(<span class="hljs-built_in">iter</span>(ds)) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = feature_extractor( <span class="hljs-meta">... </span> sample[<span class="hljs-string">"audio"</span>][<span class="hljs-string">"array"</span>], sampling_rate=sample[<span class="hljs-string">"audio"</span>][<span class="hljs-string">"sampling_rate"</span>], return_tensors=<span class="hljs-string">"pt"</span> <span class="hljs-meta">... </span>) <span class="hljs-meta">&gt;&gt;&gt; </span>input_features = inputs.input_features <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">with</span> torch.no_grad(): <span class="hljs-meta">... </span> logits = model(input_features).logits <span class="hljs-meta">&gt;&gt;&gt; </span>predicted_class_ids = torch.argmax(logits).item() <span class="hljs-meta">&gt;&gt;&gt; </span>predicted_label = model.config.id2label[predicted_class_ids] <span class="hljs-meta">&gt;&gt;&gt; </span>predicted_label <span class="hljs-string">'Afrikaans'</span></pre></div></div></div></div> <h2 class="relative group"><a id="transformers.TFWhisperModel" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFWhisperModel"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1phdna6">TFWhisperModel</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.TFWhisperModel"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" 
height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">TFWhisperModel</span></span></h3> <a id="transformers.TFWhisperModel" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.TFWhisperModel"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/modeling_tf_whisper.py#L1093" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">*args<span class="opacity-60"></span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFWhisperModel.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFWhisperModel.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 
1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperConfig">WhisperConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-9jnvu">The bare Whisper Model outputting raw hidden-states without any specific head on top. This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel">TFPreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)</p> <p data-svelte-h="svelte-1ivrf8m">This model is also a <a href="https://www.tensorflow.org/api_docs/python/tf/keras/Model" rel="nofollow">tf.keras.Model</a> subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.TFWhisperModel.call"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>call</span></h4> <a id="transformers.TFWhisperModel.call" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.TFWhisperModel.call"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" 
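Ahead of the full `call` signature below, the following minimal sketch illustrates typical usage. The `openai/whisper-base` checkpoint, the dummy LibriSpeech sample, and the commented output shape are assumptions made for illustration rather than part of the original reference.

```python
>>> import tensorflow as tf
>>> from transformers import AutoFeatureExtractor, TFWhisperModel
>>> from datasets import load_dataset

>>> # Minimal sketch (assumed checkpoint and dataset): encode one audio sample and run a
>>> # single decoder step starting from the decoder_start_token_id.
>>> model = TFWhisperModel.from_pretrained("openai/whisper-base")
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("openai/whisper-base")

>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> inputs = feature_extractor(
...     ds[0]["audio"]["array"], sampling_rate=ds[0]["audio"]["sampling_rate"], return_tensors="tf"
... )
>>> input_features = inputs.input_features

>>> decoder_input_ids = tf.convert_to_tensor([[1]]) * model.config.decoder_start_token_id
>>> last_hidden_state = model(input_features, decoder_input_ids=decoder_input_ids).last_hidden_state
>>> list(last_hidden_state.shape)  # (batch_size, decoder_sequence_length, hidden_size), e.g. [1, 1, 512] for the assumed base checkpoint
```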
aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/modeling_tf_whisper.py#L1117" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_features<span class="opacity-60">: TFModelInputType | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_input_ids<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_attention_mask<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_position_ids<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_mask<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_head_mask<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">cross_attn_head_mask<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_outputs<span class="opacity-60">: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">past_key_values<span class="opacity-60">: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_inputs_embeds<span class="opacity-60">: Optional[Tuple[Union[np.ndarray, tf.Tensor]]] = 
None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">use_cache<span class="opacity-60">: Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: Optional[bool] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">training<span class="opacity-60">: bool = False</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFSeq2SeqModelOutput">transformers.modeling_tf_outputs.TFSeq2SeqModelOutput</a> or <code>tuple(tf.Tensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 13 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFWhisperModel.call.input_features" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFWhisperModel.call.input_features"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_features</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, feature_size, sequence_length)</code>) — Float values of fbank features extracted from the raw speech waveform. 
Raw speech waveform can be obtained by loading a <code>.flac</code> or <code>.wav</code> audio file into an array of type <code>List[float]</code> or a <code>numpy.ndarray</code>, <em>e.g.</em> via the soundfile library (<code>pip install soundfile</code>). To prepare the array into <code>input_features</code>, the <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoFeatureExtractor">AutoFeatureExtractor</a> should be used for extracting the fbank features, padding and conversion into a tensor of type <code>tf.Tensor</code>. See <a href="/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperFeatureExtractor.__call__"><strong>call</strong>()</a></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFWhisperModel.call.decoder_input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFWhisperModel.call.decoder_input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>decoder_input_ids</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, target_sequence_length)</code>, <em>optional</em>) — Indices of decoder input sequence tokens in the vocabulary.<p></p> <p>Indices can be obtained using <code>SpeechToTextTokenizer</code>. See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p> <p><a href="../glossary#decoder-input-ids">What are decoder input IDs?</a></p> <p>SpeechToText uses the <code>eos_token_id</code> as the starting token for <code>decoder_input_ids</code> generation. 
If <code>past_key_values</code> is used, optionally only the last <code>decoder_input_ids</code> have to be input (see <code>past_key_values</code>).</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFWhisperModel.call.decoder_attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFWhisperModel.call.decoder_attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>decoder_attention_mask</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, target_sequence_length)</code>, <em>optional</em>) — Default behavior: generate a tensor that ignores pad tokens in <code>decoder_input_ids</code>. Causal mask will also be used by default.<p></p> <p>If you want to change padding behavior, you should read <code>modeling_whisper._prepare_decoder_attention_mask</code> and modify to your needs. See diagram 1 in <a href="https://arxiv.org/abs/1910.13461" rel="nofollow">the paper</a> for more information on the default strategy.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFWhisperModel.call.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFWhisperModel.call.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>head_mask</strong> (<code>tf.Tensor</code> of shape <code>(encoder_layers, encoder_attention_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the attention modules in the encoder. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFWhisperModel.call.decoder_head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFWhisperModel.call.decoder_head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>decoder_head_mask</strong> (<code>tf.Tensor</code> of shape <code>(decoder_layers, decoder_attention_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFWhisperModel.call.cross_attn_head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFWhisperModel.call.cross_attn_head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>cross_attn_head_mask</strong> (<code>tf.Tensor</code> of shape <code>(decoder_layers, decoder_attention_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the cross-attention modules. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFWhisperModel.call.encoder_outputs" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFWhisperModel.call.encoder_outputs"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>encoder_outputs</strong> (<code>tuple(tuple(tf.Tensor)</code>, <em>optional</em>) — Tuple consists of (<code>last_hidden_state</code>, <em>optional</em>: <code>hidden_states</code>, <em>optional</em>: <code>attentions</code>) <code>last_hidden_state</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>, <em>optional</em>) is a sequence of hidden-states at the output of the last layer of the encoder. 
Used in the cross-attention of the decoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFWhisperModel.call.past_key_values" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFWhisperModel.call.past_key_values"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>past_key_values</strong> (<code>tuple(tuple(tf.Tensor))</code>, <em>optional</em>, returned when <code>use_cache=True</code> is passed or when <code>config.use_cache=True</code>) — Tuple of <code>tuple(tf.Tensor)</code> of length <code>config.n_layers</code>, with each tuple having 2 tensors of shape <code>(batch_size, num_heads, sequence_length, embed_size_per_head)</code>) and 2 additional tensors of shape <code>(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)</code>.<p></p> <p>Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see <code>past_key_values</code> input) to speed up sequential decoding.</p> <p>If <code>past_key_values</code> are used, the user can optionally input only the last <code>decoder_input_ids</code> (those that don’t have their past key value states given to this model) of shape <code>(batch_size, 1)</code> instead of all <code>decoder_input_ids</code> of shape <code>(batch_size, sequence_length)</code>.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFWhisperModel.call.decoder_inputs_embeds" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFWhisperModel.call.decoder_inputs_embeds"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>decoder_inputs_embeds</strong> (<code>tf.Tensor</code> of shape 
<code>(batch_size, target_sequence_length, hidden_size)</code>, <em>optional</em>) — Optionally, instead of passing <code>decoder_input_ids</code> you can choose to directly pass an embedded representation. If <code>past_key_values</code> is used, optionally only the last <code>decoder_inputs_embeds</code> have to be input (see <code>past_key_values</code>). This is useful if you want more control over how to convert <code>decoder_input_ids</code> indices into associated vectors than the model’s internal embedding lookup matrix.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFWhisperModel.call.use_cache" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFWhisperModel.call.use_cache"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>use_cache</strong> (<code>bool</code>, <em>optional</em>) — If set to <code>True</code>, <code>past_key_values</code> key value states are returned and can be used to speed up decoding (see <code>past_key_values</code>).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFWhisperModel.call.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFWhisperModel.call.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. 
See <code>attentions</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFWhisperModel.call.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFWhisperModel.call.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. See <code>hidden_states</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFWhisperModel.call.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFWhisperModel.call.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li></ul> <div id="transformers.TFWhisperModel.call.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFSeq2SeqModelOutput">transformers.modeling_tf_outputs.TFSeq2SeqModelOutput</a> or <code>tuple(tf.Tensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a 
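As referenced in the `head_mask` description above, here is a hypothetical sketch (not part of the original reference) of one way to build such a mask; the `openai/whisper-base` checkpoint is only used to read the layer and head counts:

```python
>>> import tensorflow as tf
>>> from transformers import WhisperConfig

>>> # disable head 0 of every encoder layer, keep all other heads (1 = keep, 0 = mask)
>>> config = WhisperConfig.from_pretrained("openai/whisper-base")
>>> num_layers, num_heads = config.encoder_layers, config.encoder_attention_heads
>>> head_mask = tf.concat([tf.zeros((num_layers, 1)), tf.ones((num_layers, num_heads - 1))], axis=-1)
>>> head_mask.shape
TensorShape([6, 8])
```

The same recipe applies to `decoder_head_mask` and `cross_attn_head_mask`, using the decoder layer and head counts instead.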
href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFSeq2SeqModelOutput">transformers.modeling_tf_outputs.TFSeq2SeqModelOutput</a> or a tuple of <code>tf.Tensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperConfig">WhisperConfig</a>) and inputs.</p> <ul> <li> <p><strong>last_hidden_state</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>) — Sequence of hidden-states at the output of the last layer of the decoder of the model.</p> <p>If <code>past_key_values</code> is used only the last hidden-state of the sequences of shape <code>(batch_size, 1, hidden_size)</code> is output.</p> </li> <li> <p><strong>past_key_values</strong> (<code>List[tf.Tensor]</code>, <em>optional</em>, returned when <code>use_cache=True</code> is passed or when <code>config.use_cache=True</code>) — List of <code>tf.Tensor</code> of length <code>config.n_layers</code>, with each tensor of shape <code>(2, batch_size, num_heads, sequence_length, embed_size_per_head)</code>).</p> <p>Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be used (see <code>past_key_values</code> input) to speed up sequential decoding.</p> </li> <li> <p><strong>decoder_hidden_states</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>tf.Tensor</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>decoder_attentions</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>tf.Tensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> <li> <p><strong>cross_attentions</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>tf.Tensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.</p> </li> <li> <p><strong>encoder_last_hidden_state</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>, <em>optional</em>) — Sequence of hidden-states at the output of the last layer of the encoder of the model.</p> </li> <li> <p><strong>encoder_hidden_states</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>tf.Tensor</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, 
sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>encoder_attentions</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>tf.Tensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-z9iiet">The <a href="/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.TFWhisperModel">TFWhisperModel</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.TFWhisperModel.call.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFWhisperModel.call.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform 
```python
>>> import tensorflow as tf
>>> from transformers import TFWhisperModel, AutoFeatureExtractor
>>> from datasets import load_dataset

>>> model = TFWhisperModel.from_pretrained("openai/whisper-base")
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("openai/whisper-base")
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> inputs = feature_extractor(ds[0]["audio"]["array"], return_tensors="tf")
>>> input_features = inputs.input_features
>>> decoder_input_ids = tf.convert_to_tensor([[1, 1]]) * model.config.decoder_start_token_id
>>> last_hidden_state = model(input_features, decoder_input_ids=decoder_input_ids).last_hidden_state
>>> list(last_hidden_state.shape)
[1, 2, 512]
```
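Continuing the example above, a hedged sketch (not part of the original reference) of the `use_cache`/`past_key_values` mechanism described in the parameter list: the first call returns the cached decoder key/value states, so the next call only needs the newest decoder token.

```python
>>> # first call: request the decoder key/value cache
>>> outputs = model(input_features, decoder_input_ids=decoder_input_ids, use_cache=True)

>>> # next step: feed only the newest decoder token together with the cache
>>> next_token = tf.convert_to_tensor([[model.config.decoder_start_token_id]])
>>> cached = model(
...     input_features,
...     decoder_input_ids=next_token,
...     past_key_values=outputs.past_key_values,
...     use_cache=True,
... )
>>> list(cached.last_hidden_state.shape)  # only the newest position is returned
[1, 1, 512]
```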
class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">TFWhisperForConditionalGeneration</span></span></h3> <a id="transformers.TFWhisperForConditionalGeneration" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.TFWhisperForConditionalGeneration"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/modeling_tf_whisper.py#L1201" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">*args<span class="opacity-60"></span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFWhisperForConditionalGeneration.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 
with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFWhisperForConditionalGeneration.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperConfig">WhisperConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-et73oj">The Whisper Model with a language modeling head. Can be used for automatic speech recognition. This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel">TFPreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)</p> <p data-svelte-h="svelte-1ivrf8m">This model is also a <a href="https://www.tensorflow.org/api_docs/python/tf/keras/Model" rel="nofollow">tf.keras.Model</a> subclass. 
Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.TFWhisperForConditionalGeneration.call"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>call</span></h4> <a id="transformers.TFWhisperForConditionalGeneration.call" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.TFWhisperForConditionalGeneration.call"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/modeling_tf_whisper.py#L1232" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_features<span class="opacity-60">: TFModelInputType | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white 
dark:hover:bg-white dark:hover:text-black">decoder_input_ids<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_attention_mask<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_position_ids<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_mask<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_head_mask<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">cross_attn_head_mask<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_outputs<span class="opacity-60">: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">past_key_values<span class="opacity-60">: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_inputs_embeds<span class="opacity-60">: Optional[Tuple[Union[np.ndarray, tf.Tensor]]] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">labels<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">use_cache<span class="opacity-60">: Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: Optional[bool] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">training<span class="opacity-60">: bool = False</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a 
href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFSeq2SeqLMOutput">transformers.modeling_tf_outputs.TFSeq2SeqLMOutput</a> or <code>tuple(tf.Tensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 14 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFWhisperForConditionalGeneration.call.input_features" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFWhisperForConditionalGeneration.call.input_features"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_features</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, feature_size, sequence_length)</code>) — Float values of fbank features extracted from the raw speech waveform. Raw speech waveform can be obtained by loading a <code>.flac</code> or <code>.wav</code> audio file into an array of type <code>List[float]</code> or a <code>numpy.ndarray</code>, <em>e.g.</em> via the soundfile library (<code>pip install soundfile</code>). To prepare the array into <code>input_features</code>, the <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoFeatureExtractor">AutoFeatureExtractor</a> should be used for extracting the fbank features, padding and conversion into a tensor of type <code>tf.Tensor</code>. 
See <a href="/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperFeatureExtractor.__call__"><strong>call</strong>()</a></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFWhisperForConditionalGeneration.call.decoder_input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFWhisperForConditionalGeneration.call.decoder_input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>decoder_input_ids</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, target_sequence_length)</code>, <em>optional</em>) — Indices of decoder input sequence tokens in the vocabulary.<p></p> <p>Indices can be obtained using <code>SpeechToTextTokenizer</code>. See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p> <p><a href="../glossary#decoder-input-ids">What are decoder input IDs?</a></p> <p>SpeechToText uses the <code>eos_token_id</code> as the starting token for <code>decoder_input_ids</code> generation. 
If <code>past_key_values</code> is used, optionally only the last <code>decoder_input_ids</code> have to be input (see <code>past_key_values</code>).</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFWhisperForConditionalGeneration.call.decoder_attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFWhisperForConditionalGeneration.call.decoder_attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>decoder_attention_mask</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, target_sequence_length)</code>, <em>optional</em>) — Default behavior: generate a tensor that ignores pad tokens in <code>decoder_input_ids</code>. Causal mask will also be used by default.<p></p> <p>If you want to change padding behavior, you should read <code>modeling_whisper._prepare_decoder_attention_mask</code> and modify to your needs. See diagram 1 in <a href="https://arxiv.org/abs/1910.13461" rel="nofollow">the paper</a> for more information on the default strategy.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFWhisperForConditionalGeneration.call.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFWhisperForConditionalGeneration.call.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>head_mask</strong> (<code>tf.Tensor</code> of shape <code>(encoder_layers, encoder_attention_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the attention modules in the encoder. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFWhisperForConditionalGeneration.call.decoder_head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFWhisperForConditionalGeneration.call.decoder_head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>decoder_head_mask</strong> (<code>tf.Tensor</code> of shape <code>(decoder_layers, decoder_attention_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFWhisperForConditionalGeneration.call.cross_attn_head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFWhisperForConditionalGeneration.call.cross_attn_head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>cross_attn_head_mask</strong> (<code>tf.Tensor</code> of shape <code>(decoder_layers, decoder_attention_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the cross-attention modules. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFWhisperForConditionalGeneration.call.encoder_outputs" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFWhisperForConditionalGeneration.call.encoder_outputs"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>encoder_outputs</strong> (<code>tuple(tuple(tf.Tensor)</code>, <em>optional</em>) — Tuple consists of (<code>last_hidden_state</code>, <em>optional</em>: <code>hidden_states</code>, <em>optional</em>: <code>attentions</code>) <code>last_hidden_state</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>, <em>optional</em>) is a sequence of hidden-states at the output of the last layer of the encoder. 
Used in the cross-attention of the decoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFWhisperForConditionalGeneration.call.past_key_values" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFWhisperForConditionalGeneration.call.past_key_values"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>past_key_values</strong> (<code>tuple(tuple(tf.Tensor))</code>, <em>optional</em>, returned when <code>use_cache=True</code> is passed or when <code>config.use_cache=True</code>) — Tuple of <code>tuple(tf.Tensor)</code> of length <code>config.n_layers</code>, with each tuple having 2 tensors of shape <code>(batch_size, num_heads, sequence_length, embed_size_per_head)</code>) and 2 additional tensors of shape <code>(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)</code>.<p></p> <p>Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see <code>past_key_values</code> input) to speed up sequential decoding.</p> <p>If <code>past_key_values</code> are used, the user can optionally input only the last <code>decoder_input_ids</code> (those that don’t have their past key value states given to this model) of shape <code>(batch_size, 1)</code> instead of all <code>decoder_input_ids</code> of shape <code>(batch_size, sequence_length)</code>.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFWhisperForConditionalGeneration.call.decoder_inputs_embeds" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFWhisperForConditionalGeneration.call.decoder_inputs_embeds"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> 
<span><strong>decoder_inputs_embeds</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, target_sequence_length, hidden_size)</code>, <em>optional</em>) — Optionally, instead of passing <code>decoder_input_ids</code> you can choose to directly pass an embedded representation. If <code>past_key_values</code> is used, optionally only the last <code>decoder_inputs_embeds</code> have to be input (see <code>past_key_values</code>). This is useful if you want more control over how to convert <code>decoder_input_ids</code> indices into associated vectors than the model’s internal embedding lookup matrix.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFWhisperForConditionalGeneration.call.use_cache" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFWhisperForConditionalGeneration.call.use_cache"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>use_cache</strong> (<code>bool</code>, <em>optional</em>) — If set to <code>True</code>, <code>past_key_values</code> key value states are returned and can be used to speed up decoding (see <code>past_key_values</code>).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFWhisperForConditionalGeneration.call.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFWhisperForConditionalGeneration.call.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. 
See <code>attentions</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFWhisperForConditionalGeneration.call.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFWhisperForConditionalGeneration.call.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. See <code>hidden_states</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFWhisperForConditionalGeneration.call.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFWhisperForConditionalGeneration.call.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFWhisperForConditionalGeneration.call.labels" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFWhisperForConditionalGeneration.call.labels"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" 
viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>labels</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Labels for computing the language modeling loss. Indices should either be in <code>[0, ..., config.vocab_size]</code> or -100 (see <code>input_ids</code> docstring). Tokens with indices set to <code>-100</code> are ignored (masked), the loss is only computed for the tokens with labels in <code>[0, ..., config.vocab_size]</code>.</span></span> </li></ul> <div id="transformers.TFWhisperForConditionalGeneration.call.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFSeq2SeqLMOutput">transformers.modeling_tf_outputs.TFSeq2SeqLMOutput</a> or <code>tuple(tf.Tensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFSeq2SeqLMOutput">transformers.modeling_tf_outputs.TFSeq2SeqLMOutput</a> or a tuple of <code>tf.Tensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperConfig">WhisperConfig</a>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<code>tf.Tensor</code> of shape <code>(n,)</code>, <em>optional</em>, where n is the number of non-masked labels, returned when <code>labels</code> is provided) — Language modeling loss.</p> </li> <li> <p><strong>logits</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, sequence_length, config.vocab_size)</code>) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).</p> </li> <li> <p><strong>past_key_values</strong> (<code>List[tf.Tensor]</code>, <em>optional</em>, returned when <code>use_cache=True</code> is passed or when <code>config.use_cache=True</code>) — List of <code>tf.Tensor</code> of length <code>config.n_layers</code>, with each tensor of shape <code>(2, batch_size, num_heads, sequence_length, embed_size_per_head)</code>).</p> <p>Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be used (see <code>past_key_values</code> input) to speed up sequential decoding.</p> </li> <li> <p><strong>decoder_hidden_states</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>tf.Tensor</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the decoder at the output of each layer plus 
- **decoder_attentions** (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
- **cross_attentions** (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the decoder's cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
- **encoder_last_hidden_state** (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
- **encoder_hidden_states** (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
- **encoder_attentions** (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.

The [TFWhisperForConditionalGeneration](/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.TFWhisperForConditionalGeneration) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> tensorflow <span class="hljs-keyword">as</span> tf <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoProcessor, TFWhisperForConditionalGeneration <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> datasets <span class="hljs-keyword">import</span> load_dataset <span class="hljs-meta">&gt;&gt;&gt; </span>processor = AutoProcessor.from_pretrained(<span class="hljs-string">"openai/whisper-tiny.en"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = TFWhisperForConditionalGeneration.from_pretrained(<span class="hljs-string">"openai/whisper-tiny.en"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>ds = load_dataset(<span class="hljs-string">"hf-internal-testing/librispeech_asr_dummy"</span>, <span class="hljs-string">"clean"</span>, split=<span class="hljs-string">"validation"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = processor(ds[<span class="hljs-number">0</span>][<span class="hljs-string">"audio"</span>][<span class="hljs-string">"array"</span>], return_tensors=<span class="hljs-string">"tf"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>input_features = inputs.input_features <span class="hljs-meta">&gt;&gt;&gt; </span>generated_ids = model.generate(input_features=input_features) <span class="hljs-meta">&gt;&gt;&gt; </span>transcription = processor.batch_decode(generated_ids, skip_special_tokens=<span class="hljs-literal">True</span>)[<span 
class="hljs-number">0</span>] <span class="hljs-meta">&gt;&gt;&gt; </span>transcription <span class="hljs-string">' Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.'</span></pre></div></div></div></div> <h2 class="relative group"><a id="transformers.FlaxWhisperModel" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWhisperModel"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-yhfv7x">FlaxWhisperModel</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.FlaxWhisperModel"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">FlaxWhisperModel</span></span></h3> <a id="transformers.FlaxWhisperModel" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.FlaxWhisperModel"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 
## FlaxWhisperModel

### class transformers.FlaxWhisperModel

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/modeling_flax_whisper.py#L1165)

`( config: WhisperConfig, input_shape: typing.Tuple[int] = (1, 80, 3000), seed: int = 0, dtype: dtype = <class 'jax.numpy.float32'>, _do_init: bool = True, gradient_checkpointing: bool = False, **kwargs )`

Parameters:

- **config** ([WhisperConfig](/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained) method to load the model weights.
- **dtype** (`jax.numpy.dtype`, *optional*, defaults to `jax.numpy.float32`) — The data type of the computation. Can be one of `jax.numpy.float32`, `jax.numpy.float16` (on GPUs) and `jax.numpy.bfloat16` (on TPUs). This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified, all the computation will be performed with the given `dtype`. **Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.** If you wish to change the dtype of the model parameters, see [to_fp16()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_fp16) and [to_bf16()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_bf16).

The bare Whisper Model transformer outputting raw hidden-states without any specific head on top. This model inherits from [FlaxPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.). This model is also a Flax Linen [flax.nn.Module](https://flax.readthedocs.io/en/latest/_autosummary/flax.nn.module.html) subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.

Finally, this model supports inherent JAX features such as:

- [Just-In-Time (JIT) compilation](https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit)
- [Automatic Differentiation](https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation)
- [Vectorization](https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap)
- [Parallelization](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap)
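As noted above, `dtype` only changes the dtype of the computation, not of the stored parameters; casting the parameters is a separate step via `to_fp16()`. A minimal sketch, assuming the `openai/whisper-tiny` checkpoint used elsewhere on this page:

```python
>>> import jax.numpy as jnp
>>> from transformers import FlaxWhisperModel

>>> # run the forward computation in half precision ...
>>> model = FlaxWhisperModel.from_pretrained("openai/whisper-tiny", dtype=jnp.float16)

>>> # ... and, separately, cast the parameters themselves to fp16
>>> model.params = model.to_fp16(model.params)
```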
data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_features<span class="opacity-60">: Array</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_input_ids<span class="opacity-60">: Array</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[jax.Array] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_attention_mask<span class="opacity-60">: typing.Optional[jax.Array] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">position_ids<span class="opacity-60">: typing.Optional[jax.Array] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_position_ids<span class="opacity-60">: typing.Optional[jax.Array] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">train<span class="opacity-60">: bool = False</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">params<span class="opacity-60">: dict = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">dropout_rng<span class="opacity-60">: PRNGKey = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxSeq2SeqModelOutput">transformers.modeling_flax_outputs.FlaxSeq2SeqModelOutput</a> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 9 
parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxWhisperModel.__call__.input_features" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWhisperModel.__call__.input_features"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_features</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, feature_size, sequence_length)</code>) — Float values mel features extracted from the raw speech waveform. Raw speech waveform can be obtained by loading a <code>.flac</code> or <code>.wav</code> audio file into an array of type <code>List[float]</code> or a <code>numpy.ndarray</code>, <em>e.g.</em> via the soundfile library (<code>pip install soundfile</code>). To prepare the array into <code>input_features</code>, the <a href="/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperFeatureExtractor">WhisperFeatureExtractor</a> should be used for extracting the features, padding and conversion into a tensor of type <code>numpy.ndarray</code>. 
See <a href="/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperFeatureExtractor.__call__"><strong>call</strong>()</a></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxWhisperModel.__call__.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWhisperModel.__call__.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Whisper does not support masking of the <code>input_features</code>, this argument is preserved for compatibility, but is not used. By default the silence in the input log mel spectrogram are ignored.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxWhisperModel.__call__.decoder_input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWhisperModel.__call__.decoder_input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>decoder_input_ids</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, target_sequence_length)</code>, <em>optional</em>) — Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperTokenizer">WhisperTokenizer</a>. See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details. 
<a href="../glossary#decoder-input-ids">What are decoder input IDs?</a> Whisper uses the <code>decoder_start_token_id</code> as the starting token for <code>decoder_input_ids</code> generation.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxWhisperModel.__call__.decoder_attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWhisperModel.__call__.decoder_attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>decoder_attention_mask</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, target_sequence_length)</code>, <em>optional</em>) — Default behavior: generate a tensor that ignores pad tokens in <code>decoder_input_ids</code>. Causal mask will also be used by default. If you want to change padding behavior, you should modify to your needs. See diagram 1 in <a href="https://arxiv.org/abs/1910.13461" rel="nofollow">the paper</a> for more information on the default strategy.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxWhisperModel.__call__.position_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWhisperModel.__call__.position_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>position_ids</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Whisper does not use <code>position_ids</code> in the encoder as <code>input_features</code> is always the same size and doesn’t use masking, but this argument is preserved for compatibility. 
By default the silence in the input log mel spectrogram are ignored.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxWhisperModel.__call__.decoder_position_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWhisperModel.__call__.decoder_position_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>decoder_position_ids</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the range <code>[0, config.max_position_embeddings - 1]</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxWhisperModel.__call__.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWhisperModel.__call__.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. 
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.

Returns: [transformers.modeling_flax_outputs.FlaxSeq2SeqModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxSeq2SeqModelOutput) or `tuple(jnp.ndarray)`
href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxSeq2SeqModelOutput">transformers.modeling_flax_outputs.FlaxSeq2SeqModelOutput</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperConfig">WhisperConfig</a>) and inputs.</p> <ul> <li> <p><strong>last_hidden_state</strong> (<code>jnp.ndarray</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>) — Sequence of hidden-states at the output of the last layer of the decoder of the model.</p> <p>If <code>past_key_values</code> is used only the last hidden-state of the sequences of shape <code>(batch_size, 1, hidden_size)</code> is output.</p> </li> <li> <p><strong>past_key_values</strong> (<code>tuple(tuple(jnp.ndarray))</code>, <em>optional</em>, returned when <code>use_cache=True</code> is passed or when <code>config.use_cache=True</code>) — Tuple of <code>tuple(jnp.ndarray)</code> of length <code>config.n_layers</code>, with each tuple having 2 tensors of shape <code>(batch_size, num_heads, sequence_length, embed_size_per_head)</code>) and 2 additional tensors of shape <code>(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)</code>.</p> <p>Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see <code>past_key_values</code> input) to speed up sequential decoding.</p> </li> <li> <p><strong>decoder_hidden_states</strong> (<code>tuple(jnp.ndarray)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>jnp.ndarray</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>decoder_attentions</strong> (<code>tuple(jnp.ndarray)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>jnp.ndarray</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> <li> <p><strong>cross_attentions</strong> (<code>tuple(jnp.ndarray)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>jnp.ndarray</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.</p> </li> <li> <p><strong>encoder_last_hidden_state</strong> (<code>jnp.ndarray</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>, <em>optional</em>) — Sequence of hidden-states at the output of the last layer of the encoder of the model.</p> </li> <li> <p><strong>encoder_hidden_states</strong> (<code>tuple(jnp.ndarray)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> 
is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>jnp.ndarray</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>encoder_attentions</strong> (<code>tuple(jnp.ndarray)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>jnp.ndarray</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-13ych8p">The <code>FlaxWhisperPreTrainedModel</code> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.FlaxWhisperModel.__call__.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWhisperModel.__call__.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 
```python
>>> from transformers import AutoFeatureExtractor, FlaxWhisperModel
>>> from datasets import load_dataset
>>> import jax.numpy as jnp

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("openai/whisper-tiny")
>>> model = FlaxWhisperModel.from_pretrained("openai/whisper-tiny")

>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> inputs = feature_extractor(ds[0]["audio"]["array"], return_tensors="np")

>>> # the bare model has no generation head, so prime the decoder with the start token
>>> decoder_input_ids = jnp.array([[model.config.decoder_start_token_id]])

>>> outputs = model(input_features=inputs.input_features, decoder_input_ids=decoder_input_ids)
>>> last_hidden_states = outputs.last_hidden_state
```
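Since the class supports JIT compilation (see the feature list above), the forward pass can be wrapped in `jax.jit` for repeated calls. A minimal sketch reusing the variables from the example above; the `forward` wrapper name is purely illustrative:

```python
>>> import jax

>>> # illustrative sketch: jit-compile the forward pass, passing the parameters explicitly
>>> @jax.jit
... def forward(params, input_features, decoder_input_ids):
...     return model(input_features, decoder_input_ids, params=params).last_hidden_state

>>> last_hidden_states = forward(model.params, inputs.input_features, decoder_input_ids)
```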
## FlaxWhisperForConditionalGeneration

### class transformers.FlaxWhisperForConditionalGeneration

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/modeling_flax_whisper.py#L1244)

`( config: WhisperConfig, input_shape: typing.Tuple[int] = (1, 80, 3000), seed: int = 0, dtype: dtype = <class 'jax.numpy.float32'>, _do_init: bool = True, gradient_checkpointing: bool = False, **kwargs )`

Parameters:

- **config** ([WhisperConfig](/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained) method to load the model weights.
- **dtype** (`jax.numpy.dtype`, *optional*, defaults to `jax.numpy.float32`) — The data type of the computation. Can be one of `jax.numpy.float32`, `jax.numpy.float16` (on GPUs) and `jax.numpy.bfloat16` (on TPUs). This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified, all the computation will be performed with the given `dtype`. **Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.** If you wish to change the dtype of the model parameters, see [to_fp16()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_fp16) and [to_bf16()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_bf16).
<strong>Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.</strong> If you wish to change the dtype of the model parameters, see <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_fp16">to_fp16()</a> and <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_bf16">to_bf16()</a>.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1dihsrs">The Whisper Model with a language modeling head. This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel">FlaxPreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a Flax Linen <a href="https://flax.readthedocs.io/en/latest/_autosummary/flax.nn.module.html" rel="nofollow">flax.nn.Module</a> subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matter related to general usage and behavior. Finally, this model supports inherent JAX features such as:</p> <ul data-svelte-h="svelte-1w7z84m"><li><a href="https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit" rel="nofollow">Just-In-Time (JIT) compilation</a></li> <li><a href="https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation" rel="nofollow">Automatic Differentiation</a></li> <li><a href="https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap" rel="nofollow">Vectorization</a></li> <li><a href="https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap" rel="nofollow">Parallelization</a></li></ul> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.FlaxWhisperForConditionalGeneration.__call__"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>__call__</span></h4> <a id="transformers.FlaxWhisperForConditionalGeneration.__call__" class="header-link invisible with-hover:group-hover:visible pr-2" 
href="#transformers.FlaxWhisperForConditionalGeneration.__call__"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/modeling_flax_whisper.py#L1110" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_features<span class="opacity-60">: Array</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_input_ids<span class="opacity-60">: Array</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[jax.Array] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_attention_mask<span class="opacity-60">: typing.Optional[jax.Array] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">position_ids<span class="opacity-60">: typing.Optional[jax.Array] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_position_ids<span class="opacity-60">: typing.Optional[jax.Array] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">train<span class="opacity-60">: bool = False</span></span></span><span class="comma 
cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">params<span class="opacity-60">: dict = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">dropout_rng<span class="opacity-60">: PRNGKey = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput">transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput</a> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 9 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxWhisperForConditionalGeneration.__call__.input_features" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWhisperForConditionalGeneration.__call__.input_features"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_features</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, feature_size, sequence_length)</code>) — Float values mel features extracted from the raw speech waveform. Raw speech waveform can be obtained by loading a <code>.flac</code> or <code>.wav</code> audio file into an array of type <code>List[float]</code> or a <code>numpy.ndarray</code>, <em>e.g.</em> via the soundfile library (<code>pip install soundfile</code>). To prepare the array into <code>input_features</code>, the <a href="/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperFeatureExtractor">WhisperFeatureExtractor</a> should be used for extracting the features, padding and conversion into a tensor of type <code>numpy.ndarray</code>. 
See <a href="/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperFeatureExtractor.__call__"><strong>call</strong>()</a></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxWhisperForConditionalGeneration.__call__.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWhisperForConditionalGeneration.__call__.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Whisper does not support masking of the <code>input_features</code>, this argument is preserved for compatibility, but is not used. By default the silence in the input log mel spectrogram are ignored.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxWhisperForConditionalGeneration.__call__.decoder_input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWhisperForConditionalGeneration.__call__.decoder_input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>decoder_input_ids</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, target_sequence_length)</code>, <em>optional</em>) — Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperTokenizer">WhisperTokenizer</a>. 
See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details. <a href="../glossary#decoder-input-ids">What are decoder input IDs?</a> Whisper uses the <code>decoder_start_token_id</code> as the starting token for <code>decoder_input_ids</code> generation.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxWhisperForConditionalGeneration.__call__.decoder_attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWhisperForConditionalGeneration.__call__.decoder_attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>decoder_attention_mask</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, target_sequence_length)</code>, <em>optional</em>) — Default behavior: generate a tensor that ignores pad tokens in <code>decoder_input_ids</code>. Causal mask will also be used by default. If you want to change padding behavior, you should modify to your needs. 
See diagram 1 in <a href="https://arxiv.org/abs/1910.13461" rel="nofollow">the paper</a> for more information on the default strategy.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxWhisperForConditionalGeneration.__call__.position_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWhisperForConditionalGeneration.__call__.position_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>position_ids</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Whisper does not use <code>position_ids</code> in the encoder as <code>input_features</code> is always the same size and doesn’t use masking, but this argument is preserved for compatibility. By default the silence in the input log mel spectrogram are ignored.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxWhisperForConditionalGeneration.__call__.decoder_position_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWhisperForConditionalGeneration.__call__.decoder_position_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>decoder_position_ids</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Indices of positions of each decoder input sequence tokens in the position embeddings. 
Selected in the range <code>[0, config.max_position_embeddings - 1]</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxWhisperForConditionalGeneration.__call__.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWhisperForConditionalGeneration.__call__.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. See <code>attentions</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxWhisperForConditionalGeneration.__call__.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWhisperForConditionalGeneration.__call__.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. 
See <code>hidden_states</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxWhisperForConditionalGeneration.__call__.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWhisperForConditionalGeneration.__call__.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li></ul> <div id="transformers.FlaxWhisperForConditionalGeneration.__call__.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput">transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput</a> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput">transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperConfig">WhisperConfig</a>) and inputs.</p> <ul> <li> <p><strong>logits</strong> (<code>jnp.ndarray</code> of shape <code>(batch_size, sequence_length, config.vocab_size)</code>) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).</p> </li> <li> <p><strong>past_key_values</strong> (<code>tuple(tuple(jnp.ndarray))</code>, <em>optional</em>, returned when <code>use_cache=True</code> is passed or when <code>config.use_cache=True</code>) — Tuple of <code>tuple(jnp.ndarray)</code> of length <code>config.n_layers</code>, with each tuple having 2 tensors of shape <code>(batch_size, num_heads, sequence_length, embed_size_per_head)</code>) and 2 additional tensors of shape <code>(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)</code>.</p> <p>Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see <code>past_key_values</code> input) to speed up sequential 
- **decoder_hidden_states** (`tuple(jnp.ndarray)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) -- Tuple of `jnp.ndarray` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
- **decoder_attentions** (`tuple(jnp.ndarray)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) -- Tuple of `jnp.ndarray` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
- **cross_attentions** (`tuple(jnp.ndarray)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) -- Tuple of `jnp.ndarray` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the decoder's cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
- **encoder_last_hidden_state** (`jnp.ndarray` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) -- Sequence of hidden-states at the output of the last layer of the encoder of the model.
- **encoder_hidden_states** (`tuple(jnp.ndarray)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) -- Tuple of `jnp.ndarray` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
- **encoder_attentions** (`tuple(jnp.ndarray)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) -- Tuple of `jnp.ndarray` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.

The `FlaxWhisperPreTrainedModel` forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
id="transformers.FlaxWhisperForConditionalGeneration.__call__.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWhisperForConditionalGeneration.__call__.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-yrk4pw">Transcription example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> WhisperProcessor, FlaxWhisperForConditionalGeneration <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> datasets <span class="hljs-keyword">import</span> load_dataset <span class="hljs-meta">&gt;&gt;&gt; </span>processor = WhisperProcessor.from_pretrained(<span class="hljs-string">"openai/whisper-tiny.en"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = FlaxWhisperForConditionalGeneration.from_pretrained(<span class="hljs-string">"openai/whisper-tiny.en"</span>, from_pt=<span class="hljs-literal">True</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>ds = load_dataset(<span class="hljs-string">"hf-internal-testing/librispeech_asr_dummy"</span>, <span class="hljs-string">"clean"</span>, split=<span class="hljs-string">"validation"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = processor(ds[<span class="hljs-number">0</span>][<span class="hljs-string">"audio"</span>][<span class="hljs-string">"array"</span>], return_tensors=<span class="hljs-string">"np"</span>) <span class="hljs-meta">&gt;&gt;&gt; 
</span>input_features = inputs.input_features <span class="hljs-meta">&gt;&gt;&gt; </span>generated_ids = model.generate(input_ids=input_features) <span class="hljs-meta">&gt;&gt;&gt; </span>transcription = processor.batch_decode(generated_ids, skip_special_tokens=<span class="hljs-literal">True</span>)[<span class="hljs-number">0</span>] <span class="hljs-meta">&gt;&gt;&gt; </span>transcription <span class="hljs-string">' Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.'</span></pre></div></div></div></div> <h2 class="relative group"><a id="transformers.FlaxWhisperForAudioClassification" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWhisperForAudioClassification"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1cxeb7j">FlaxWhisperForAudioClassification</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.FlaxWhisperForAudioClassification"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">FlaxWhisperForAudioClassification</span></span></h3> <a id="transformers.FlaxWhisperForAudioClassification" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.FlaxWhisperForAudioClassification"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path 
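Because the forward pass is a pure function of its inputs, it can also be wrapped in `jax.jit`, one of the JAX features mentioned above, so that repeated calls with identically shaped inputs reuse the compiled computation. The following is a minimal sketch under the same setup as the transcription example; the single decoder start token is an assumption made purely for illustration:

```python
>>> import jax
>>> import jax.numpy as jnp

>>> # hypothetical sketch: feed one decoder start token and jit-compile the forward pass
>>> decoder_input_ids = jnp.array([[model.config.decoder_start_token_id]])
>>> jit_forward = jax.jit(
...     lambda feats, ids: model(input_features=feats, decoder_input_ids=ids).logits
... )
>>> logits = jit_forward(input_features, decoder_input_ids)  # recompiles only if input shapes change
```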
d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/modeling_flax_whisper.py#L1574" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60">: WhisperConfig</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_shape<span class="opacity-60">: typing.Tuple[int] = (1, 80, 3000)</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">seed<span class="opacity-60">: int = 0</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">dtype<span class="opacity-60">: dtype = &lt;class 'jax.numpy.float32'&gt;</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">_do_init<span class="opacity-60">: bool = True</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">gradient_checkpointing<span class="opacity-60">: bool = False</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxWhisperForAudioClassification.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWhisperForAudioClassification.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 
11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperConfig">WhisperConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxWhisperForAudioClassification.dtype" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWhisperForAudioClassification.dtype"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>dtype</strong> (<code>jax.numpy.dtype</code>, <em>optional</em>, defaults to <code>jax.numpy.float32</code>) — The data type of the computation. Can be one of <code>jax.numpy.float32</code>, <code>jax.numpy.float16</code> (on GPUs) and <code>jax.numpy.bfloat16</code> (on TPUs). This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given <code>dtype</code>. <strong>Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.</strong> If you wish to change the dtype of the model parameters, see <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_fp16">to_fp16()</a> and <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_bf16">to_bf16()</a>.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-drdfjh">The Whisper Model with an audio classification head on top. This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel">FlaxPreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) 
This model is also a Flax Linen <a href="https://flax.readthedocs.io/en/latest/_autosummary/flax.nn.module.html" rel="nofollow">flax.nn.Module</a> subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matter related to general usage and behavior. Finally, this model supports inherent JAX features such as:</p> <ul data-svelte-h="svelte-1w7z84m"><li><a href="https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit" rel="nofollow">Just-In-Time (JIT) compilation</a></li> <li><a href="https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation" rel="nofollow">Automatic Differentiation</a></li> <li><a href="https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap" rel="nofollow">Vectorization</a></li> <li><a href="https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap" rel="nofollow">Parallelization</a></li></ul> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.FlaxWhisperForAudioClassification.__call__"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>__call__</span></h4> <a id="transformers.FlaxWhisperForAudioClassification.__call__" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.FlaxWhisperForAudioClassification.__call__"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex 
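As a rough usage sketch (the checkpoint name and the raw `audio_array` below are placeholders, standing in for any Whisper checkpoint fine-tuned for an audio classification task such as language identification, and are not official):

```python
>>> import jax.numpy as jnp
>>> from transformers import AutoFeatureExtractor, FlaxWhisperForAudioClassification

>>> # "username/whisper-audio-classifier" is a hypothetical fine-tuned checkpoint
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("username/whisper-audio-classifier")
>>> model = FlaxWhisperForAudioClassification.from_pretrained("username/whisper-audio-classifier", from_pt=True)

>>> # audio_array is a 1-D numpy array of raw speech sampled at 16 kHz
>>> inputs = feature_extractor(audio_array, sampling_rate=16000, return_tensors="np")
>>> logits = model(inputs.input_features).logits
>>> predicted_label = model.config.id2label[int(jnp.argmax(logits, axis=-1)[0])]
```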
items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/whisper/modeling_flax_whisper.py#L1601" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_features<span class="opacity-60">: Array</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[jax.Array] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">train<span class="opacity-60">: bool = False</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">params<span class="opacity-60">: dict = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">dropout_rng<span class="opacity-60">: PRNGKey = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput">transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput</a> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 9 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a 
id="transformers.FlaxWhisperForAudioClassification.__call__.input_features" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWhisperForAudioClassification.__call__.input_features"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_features</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, feature_size, sequence_length)</code>) — Float values mel features extracted from the raw speech waveform. Raw speech waveform can be obtained by loading a <code>.flac</code> or <code>.wav</code> audio file into an array of type <code>List[float]</code> or a <code>numpy.ndarray</code>, <em>e.g.</em> via the soundfile library (<code>pip install soundfile</code>). To prepare the array into <code>input_features</code>, the <a href="/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperFeatureExtractor">WhisperFeatureExtractor</a> should be used for extracting the features, padding and conversion into a tensor of type <code>numpy.ndarray</code>. See <a href="/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperFeatureExtractor.__call__"><strong>call</strong>()</a></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxWhisperForAudioClassification.__call__.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWhisperForAudioClassification.__call__.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Whisper does not support masking of the <code>input_features</code>, this argument is preserved for compatibility, but is not used. 
By default the silence in the input log mel spectrogram are ignored.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxWhisperForAudioClassification.__call__.decoder_input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWhisperForAudioClassification.__call__.decoder_input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>decoder_input_ids</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, target_sequence_length)</code>, <em>optional</em>) — Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperTokenizer">WhisperTokenizer</a>. See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details. 
<a href="../glossary#decoder-input-ids">What are decoder input IDs?</a> Whisper uses the <code>decoder_start_token_id</code> as the starting token for <code>decoder_input_ids</code> generation.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxWhisperForAudioClassification.__call__.decoder_attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWhisperForAudioClassification.__call__.decoder_attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>decoder_attention_mask</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, target_sequence_length)</code>, <em>optional</em>) — Default behavior: generate a tensor that ignores pad tokens in <code>decoder_input_ids</code>. Causal mask will also be used by default. If you want to change padding behavior, you should modify to your needs. See diagram 1 in <a href="https://arxiv.org/abs/1910.13461" rel="nofollow">the paper</a> for more information on the default strategy.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxWhisperForAudioClassification.__call__.position_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWhisperForAudioClassification.__call__.position_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>position_ids</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Whisper does not use <code>position_ids</code> in the encoder as <code>input_features</code> is always the same size and doesn’t use masking, but this argument is preserved for compatibility. 
By default the silence in the input log mel spectrogram are ignored.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxWhisperForAudioClassification.__call__.decoder_position_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWhisperForAudioClassification.__call__.decoder_position_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>decoder_position_ids</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the range <code>[0, config.max_position_embeddings - 1]</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxWhisperForAudioClassification.__call__.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWhisperForAudioClassification.__call__.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. 
See <code>attentions</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxWhisperForAudioClassification.__call__.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWhisperForAudioClassification.__call__.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. See <code>hidden_states</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxWhisperForAudioClassification.__call__.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWhisperForAudioClassification.__call__.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li></ul> <div id="transformers.FlaxWhisperForAudioClassification.__call__.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput">transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput</a> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a 
href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput">transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.WhisperConfig">WhisperConfig</a>) and inputs.</p> <ul> <li> <p><strong>logits</strong> (<code>jnp.ndarray</code> of shape <code>(batch_size, config.num_labels)</code>) — Classification (or regression if config.num_labels==1) scores (before SoftMax).</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(jnp.ndarray)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>jnp.ndarray</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(jnp.ndarray)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>jnp.ndarray</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-171yxqr">The <a href="/docs/transformers/v4.34.0/en/model_doc/whisper#transformers.FlaxWhisperForAudioClassification">FlaxWhisperForAudioClassification</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.FlaxWhisperForAudioClassification.__call__.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxWhisperForAudioClassification.__call__.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 
11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-yrk4pw">Transcription example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> jax.numpy <span class="hljs-keyword">as</span> jnp <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoFeatureExtractor, FlaxWhisperForAudioClassification <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> datasets <span class="hljs-keyword">import</span> load_dataset <span class="hljs-meta">&gt;&gt;&gt; </span>feature_extractor = AutoFeatureExtractor.from_pretrained(<span class="hljs-string">"sanchit-gandhi/whisper-medium-fleurs-lang-id"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = FlaxWhisperForAudioClassification.from_pretrained( <span class="hljs-meta">... </span> <span class="hljs-string">"sanchit-gandhi/whisper-medium-fleurs-lang-id"</span>, from_pt=<span class="hljs-literal">True</span> <span class="hljs-meta">... </span>) <span class="hljs-meta">&gt;&gt;&gt; </span>ds = load_dataset(<span class="hljs-string">"google/fleurs"</span>, <span class="hljs-string">"all"</span>, split=<span class="hljs-string">"validation"</span>, streaming=<span class="hljs-literal">True</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>sample = <span class="hljs-built_in">next</span>(<span class="hljs-built_in">iter</span>(ds)) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = feature_extractor( <span class="hljs-meta">... </span> sample[<span class="hljs-string">"audio"</span>][<span class="hljs-string">"array"</span>], sampling_rate=sample[<span class="hljs-string">"audio"</span>][<span class="hljs-string">"sampling_rate"</span>], return_tensors=<span class="hljs-string">"np"</span> <span class="hljs-meta">... 
</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>input_features = inputs.input_features <span class="hljs-meta">&gt;&gt;&gt; </span>logits = model(input_features).logits <span class="hljs-meta">&gt;&gt;&gt; </span>predicted_class_ids = jnp.argmax(logits).item() <span class="hljs-meta">&gt;&gt;&gt; </span>predicted_label = model.config.id2label[predicted_class_ids] <span class="hljs-meta">&gt;&gt;&gt; </span>predicted_label <span class="hljs-string">'af_za'</span></pre></div></div></div></div> <p></p> <div id="svelte-announcer" aria-live="assertive" aria-atomic="true" style="position: absolute; left: 0px; top: 0px; clip: rect(0px, 0px, 0px, 0px); clip-path: inset(50%); overflow: hidden; white-space: nowrap; width: 1px; height: 1px;"></div></div> <div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/wavlm" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>WavLM</a> <a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/xls_r" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">XLS-R<span class="ml-2 translate-y-px">→</span></a></div></div></div> <div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" data-props="{&quot;chapter&quot;:{&quot;title&quot;:&quot;Whisper&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;whisper&quot;,&quot;url&quot;:&quot;#whisper&quot;,&quot;sections&quot;:[{&quot;title&quot;:&quot;Overview&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;overview&quot;,&quot;url&quot;:&quot;#overview&quot;},{&quot;title&quot;:&quot;WhisperConfig&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.WhisperConfig&quot;,&quot;url&quot;:&quot;#transformers.WhisperConfig&quot;},{&quot;title&quot;:&quot;WhisperTokenizer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.WhisperTokenizer&quot;,&quot;url&quot;:&quot;#transformers.WhisperTokenizer&quot;},{&quot;title&quot;:&quot;WhisperTokenizerFast&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.WhisperTokenizerFast&quot;,&quot;url&quot;:&quot;#transformers.WhisperTokenizerFast&quot;},{&quot;title&quot;:&quot;WhisperFeatureExtractor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.WhisperFeatureExtractor&quot;,&quot;url&quot;:&quot;#transformers.WhisperFeatureExtractor&quot;},{&quot;title&quot;:&quot;WhisperProcessor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.WhisperProcessor&quot;,&quot;url&quot;:&quot;#transformers.WhisperProcessor&quot;},{&quot;title&quot;:&quot;WhisperModel&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.WhisperModel&quot;,&quot;url&quot;:&quot;#transformers.WhisperModel&quot;},{&quot;title&quot;:&quot;WhisperForConditionalGeneration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.WhisperForConditionalGeneration&quot;,&quot;url&quot;:&quot;#transformers.WhisperForConditionalGeneration&quot;},{&quot;title&quot;:&quot;WhisperForAudioClassification&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.WhisperForAudioClassification&quot;,&quot;url&quot;:&quot;#transformers.WhisperForAudioClassification&quot;},{&quot;title&quot;:&quot;TFWhisperModel&quot;,&quot;isExpanded&quot;:tru
2023-10-05T13:33:31.779Z
X-MOD
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/xmod
# X-MOD ## Overview The X-MOD model was proposed in [Lifting the Curse of Multilinguality by Pre-training Modular Transformers](http://dx.doi.org/10.18653/v1/2022.naacl-main.255) by Jonas Pfeiffer, Naman Goyal, Xi Lin, Xian Li, James Cross, Sebastian Riedel, and Mikel Artetxe. X-MOD extends multilingual masked language models like [XLM-R](xlm-roberta) to include language-specific modular components (_language adapters_) during pre-training. For fine-tuning, the language adapters in each transformer layer are frozen. The abstract from the paper is the following: _Multilingual pre-trained models are known to suffer from the curse of multilinguality, which causes per-language performance to drop as they cover more languages. We address this issue by introducing language-specific modules, which allows us to grow the total capacity of the model, while keeping the total number of trainable parameters per language constant. In contrast with prior work that learns language-specific components post-hoc, we pre-train the modules of our Cross-lingual Modular (X-MOD) models from the start. Our experiments on natural language inference, named entity recognition and question answering show that our approach not only mitigates the negative interference between languages, but also enables positive transfer, resulting in improved monolingual and cross-lingual performance. Furthermore, our approach enables adding languages post-hoc with no measurable drop in performance, no longer limiting the model usage to the set of pre-trained languages._ Tips: - X-MOD is similar to [XLM-R](xlm-roberta), but a difference is that the input language needs to be specified so that the correct language adapter can be activated. - The main models – base and large – have adapters for 81 languages. This model was contributed by [jvamvas](https://huggingface.co/jvamvas). The original code can be found [here](https://github.com/facebookresearch/fairseq/tree/58cc6cca18f15e6d56e3f60c959fe4f878960a60/fairseq/models/xmod) and the original documentation is found [here](https://github.com/facebookresearch/fairseq/tree/58cc6cca18f15e6d56e3f60c959fe4f878960a60/examples/xmod). ## Adapter Usage ### Input language There are two ways to specify the input language: 1. By setting a default language before using the model:
```
from transformers import XmodModel

model = XmodModel.from_pretrained("facebook/xmod-base")
model.set_default_language("en_XX")
```
2. By explicitly passing the index of the language adapter for each sample:
```
import torch

input_ids = torch.tensor(
    [
        [0, 581, 10269, 83, 99942, 136, 60742, 23, 70, 80583, 18276, 2],
        [0, 1310, 49083, 443, 269, 71, 5486, 165, 60429, 660, 23, 2],
    ]
)
lang_ids = torch.LongTensor(
    [
        0,
        8,
    ]
)
output = model(input_ids, lang_ids=lang_ids)
```
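For orientation, here is a minimal end-to-end sketch that pairs the default-language mechanism with a tokenizer instead of hand-written token IDs. It assumes the `facebook/xmod-base` checkpoint ships a tokenizer loadable via `AutoTokenizer`; treat it as an illustrative sketch rather than the canonical recipe:
```
from transformers import AutoTokenizer, XmodModel

tokenizer = AutoTokenizer.from_pretrained("facebook/xmod-base")
model = XmodModel.from_pretrained("facebook/xmod-base")
model.set_default_language("en_XX")  # activate the English adapter for every sample

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state  # (batch_size, sequence_length, hidden_size)
```
When `lang_ids` is passed explicitly, as in the second snippet above, those indices are used for the corresponding samples instead of the default language.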
### Fine-tuning The paper recommends that the embedding layer and the language adapters be frozen during fine-tuning. A method for doing this is provided:
```
model.freeze_embeddings_and_language_adapters()
```
### Cross-lingual transfer After fine-tuning, zero-shot cross-lingual transfer can be tested by activating the language adapter of the target language:
```
model.set_default_language("de_DE")
```
## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Causal language modeling task guide](../tasks/language_modeling) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice) ## XmodConfig ### class transformers.XmodConfig [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xmod/configuration_xmod.py#L40) ( vocab\_size = 30522 hidden\_size = 768 num\_hidden\_layers = 12 num\_attention\_heads = 12 intermediate\_size = 3072 hidden\_act = 'gelu' hidden\_dropout\_prob = 0.1 attention\_probs\_dropout\_prob = 0.1 max\_position\_embeddings = 512 type\_vocab\_size = 2 initializer\_range = 0.02 layer\_norm\_eps = 1e-12 pad\_token\_id = 1 bos\_token\_id = 0 eos\_token\_id = 2 position\_embedding\_type = 'absolute' use\_cache = True classifier\_dropout = None pre\_norm = False adapter\_reduction\_factor = 2 adapter\_layer\_norm = False adapter\_reuse\_layer\_norm = True ln\_before\_adapter = True languages = ('en\_XX',) default\_language = None \*\*kwargs ) Parameters - **vocab\_size** (`int`, _optional_, defaults to 30522) — Vocabulary size of the X-MOD model. Defines the number of different tokens that can be represented by the `input_ids` passed when calling [XmodModel](/docs/transformers/v4.34.0/en/model_doc/xmod#transformers.XmodModel). - **hidden\_size** (`int`, _optional_, defaults to 768) — Dimensionality of the encoder layers and the pooler layer. - **num\_hidden\_layers** (`int`, _optional_, defaults to 12) — Number of hidden layers in the Transformer encoder. - **num\_attention\_heads** (`int`, _optional_, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder. - **intermediate\_size** (`int`, _optional_, defaults to 3072) — Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder. - **hidden\_act** (`str` or `Callable`, _optional_, defaults to `"gelu"`) — The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"silu"` and `"gelu_new"` are supported. - **hidden\_dropout\_prob** (`float`, _optional_, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. - **attention\_probs\_dropout\_prob** (`float`, _optional_, defaults to 0.1) — The dropout ratio for the attention probabilities. - **max\_position\_embeddings** (`int`, _optional_, defaults to 512) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). - **type\_vocab\_size** (`int`, _optional_, defaults to 2) — The vocabulary size of the `token_type_ids` passed when calling [XmodModel](/docs/transformers/v4.34.0/en/model_doc/xmod#transformers.XmodModel). - **initializer\_range** (`float`, _optional_, defaults to 0.02) — The standard deviation of the truncated\_normal\_initializer for initializing all weight matrices.
- **layer\_norm\_eps** (`float`, _optional_, defaults to 1e-12) — The epsilon used by the layer normalization layers. - **position\_embedding\_type** (`str`, _optional_, defaults to `"absolute"`) — Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`. For positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to [Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155). For more information on `"relative_key_query"`, please refer to _Method 4_ in [Improve Transformer Models with Better Relative Position Embeddings (Huang et al.)](https://arxiv.org/abs/2009.13658). - **is\_decoder** (`bool`, _optional_, defaults to `False`) — Whether the model is used as a decoder or not. If `False`, the model is used as an encoder. - **use\_cache** (`bool`, _optional_, defaults to `True`) — Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`. - **classifier\_dropout** (`float`, _optional_) — The dropout ratio for the classification head. - **pre\_norm** (`bool`, _optional_, defaults to `False`) — Whether to apply layer normalization before each block. - **adapter\_reduction\_factor** (`int` or `float`, _optional_, defaults to 2) — The factor by which the dimensionality of the adapter is reduced relative to `hidden_size`. - **adapter\_layer\_norm** (`bool`, _optional_, defaults to `False`) — Whether to apply a new layer normalization before the adapter modules (shared across all adapters). - **adapter\_reuse\_layer\_norm** (`bool`, _optional_, defaults to `True`) — Whether to reuse the second layer normalization and apply it before the adapter modules as well. - **ln\_before\_adapter** (`bool`, _optional_, defaults to `True`) — Whether to apply the layer normalization before the residual connection around the adapter module. - **languages** (`Iterable[str]`, _optional_, defaults to `["en_XX"]`) — An iterable of language codes for which adapter modules should be initialized. - **default\_language** (`str`, _optional_) — Language code of a default language. It will be assumed that the input is in this language if no language codes are explicitly passed to the forward method. This is the configuration class to store the configuration of a [XmodModel](/docs/transformers/v4.34.0/en/model_doc/xmod#transformers.XmodModel). It is used to instantiate an X-MOD model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the [facebook/xmod-base](https://huggingface.co/facebook/xmod-base) architecture. Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information. 
Examples:
```
>>> from transformers import XmodConfig, XmodModel

>>> # Initializing an X-MOD facebook/xmod-base style configuration
>>> configuration = XmodConfig()

>>> # Initializing a model (with random weights) from the configuration
>>> model = XmodModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
## XmodModel ### class transformers.XmodModel [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xmod/modeling_xmod.py#L790) ( config add\_pooling\_layer = True ) Parameters - **config** ([XmodConfig](/docs/transformers/v4.34.0/en/model_doc/xmod#transformers.XmodConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. The bare X-MOD Model transformer outputting raw hidden-states without any specific head on top. This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of cross-attention is added between the self-attention layers, following the architecture described in [Attention is all you need](https://arxiv.org/abs/1706.03762) by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. To behave as a decoder the model needs to be initialized with the `is_decoder` argument of the configuration set to `True`. To be used in a Seq2Seq model, the model needs to be initialized with both `is_decoder` argument and `add_cross_attention` set to `True`; an `encoder_hidden_states` is then expected as an input to the forward pass. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xmod/modeling_xmod.py#L836) ( input\_ids: typing.Optional\[torch.Tensor\] = None lang\_ids: typing.Optional\[torch.LongTensor\] = None attention\_mask: typing.Optional\[torch.Tensor\] = None token\_type\_ids: typing.Optional\[torch.Tensor\] = None position\_ids: typing.Optional\[torch.Tensor\] = None head\_mask: typing.Optional\[torch.Tensor\] = None inputs\_embeds: typing.Optional\[torch.Tensor\] = None encoder\_hidden\_states: typing.Optional\[torch.Tensor\] = None encoder\_attention\_mask: typing.Optional\[torch.Tensor\] = None past\_key\_values: typing.Optional\[typing.List\[torch.FloatTensor\]\] = None use\_cache: typing.Optional\[bool\] = None output\_attentions: typing.Optional\[bool\] = None output\_hidden\_states: typing.Optional\[bool\] = None return\_dict: typing.Optional\[bool\] = None ) Parameters - **input\_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer).
See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.**call**()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details. [What are input IDs?](../glossary#input-ids) - **lang\_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, _optional_) — Indices of the language adapters that should be activated for each sample, respectively. Default: the index that corresponds to `self.config.default_language`. - **attention\_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, _optional_) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - **token\_type\_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, _optional_) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: - 0 corresponds to a _sentence A_ token, - 1 corresponds to a _sentence B_ token. [What are token type IDs?](../glossary#token-type-ids) - **position\_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, _optional_) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids) - **head\_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, _optional_) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. - **inputs\_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, _optional_) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model’s internal embedding lookup matrix. - **output\_attentions** (`bool`, _optional_) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. - **output\_hidden\_states** (`bool`, _optional_) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. - **return\_dict** (`bool`, _optional_) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple. - **encoder\_hidden\_states** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, _optional_) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder. - **encoder\_attention\_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, _optional_) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. 
- **past\_key\_values** (`tuple(tuple(torch.FloatTensor))` of length `config.n_layers` with each tuple having 4 tensors of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that don’t have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)`. - **use\_cache** (`bool`, _optional_) — If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`). The [XmodModel](/docs/transformers/v4.34.0/en/model_doc/xmod#transformers.XmodModel) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. ## XmodForCausalLM ### class transformers.XmodForCausalLM [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xmod/modeling_xmod.py#L982) ( config ) Parameters - **config** ([XmodConfig](/docs/transformers/v4.34.0/en/model_doc/xmod#transformers.XmodConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. X-MOD Model with a `language modeling` head on top for CLM fine-tuning. This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
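Before the forward arguments below, here is a brief usage sketch for causal language modeling. It follows the generic Transformers causal-LM pattern rather than an official X-MOD example, and it assumes the `facebook/xmod-base` checkpoint and an `AutoTokenizer`-compatible tokenizer:
```
from transformers import AutoTokenizer, XmodConfig, XmodForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/xmod-base")
config = XmodConfig.from_pretrained("facebook/xmod-base")
config.is_decoder = True  # the model must be configured as a decoder for causal LM use
model = XmodForCausalLM.from_pretrained("facebook/xmod-base", config=config)
model.set_default_language("en_XX")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
loss = outputs.loss      # left-to-right language modeling loss
logits = outputs.logits  # (batch_size, sequence_length, vocab_size)
```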
#### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xmod/modeling_xmod.py#L1006) ( input\_ids: typing.Optional\[torch.LongTensor\] = None lang\_ids: typing.Optional\[torch.LongTensor\] = None attention\_mask: typing.Optional\[torch.FloatTensor\] = None token\_type\_ids: typing.Optional\[torch.LongTensor\] = None position\_ids: typing.Optional\[torch.LongTensor\] = None head\_mask: typing.Optional\[torch.FloatTensor\] = None inputs\_embeds: typing.Optional\[torch.FloatTensor\] = None encoder\_hidden\_states: typing.Optional\[torch.FloatTensor\] = None encoder\_attention\_mask: typing.Optional\[torch.FloatTensor\] = None labels: typing.Optional\[torch.LongTensor\] = None past\_key\_values: typing.Tuple\[typing.Tuple\[torch.FloatTensor\]\] = None use\_cache: typing.Optional\[bool\] = None output\_attentions: typing.Optional\[bool\] = None output\_hidden\_states: typing.Optional\[bool\] = None return\_dict: typing.Optional\[bool\] = None ) Parameters - **input\_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.**call**()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details. [What are input IDs?](../glossary#input-ids) - **lang\_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, _optional_) — Indices of the language adapters that should be activated for each sample, respectively. Default: the index that corresponds to `self.config.default_language`. - **attention\_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, _optional_) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - **token\_type\_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, _optional_) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: - 0 corresponds to a _sentence A_ token, - 1 corresponds to a _sentence B_ token. [What are token type IDs?](../glossary#token-type-ids) - **position\_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, _optional_) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids) - **head\_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, _optional_) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. - **inputs\_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, _optional_) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model’s internal embedding lookup matrix. 
- **output\_attentions** (`bool`, _optional_) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. - **output\_hidden\_states** (`bool`, _optional_) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. - **return\_dict** (`bool`, _optional_) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple. - **encoder\_hidden\_states** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, _optional_) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder. - **encoder\_attention\_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, _optional_) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. - **labels** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, _optional_) — Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in `[-100, 0, ..., config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are ignored (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]` - **past\_key\_values** (`tuple(tuple(torch.FloatTensor))` of length `config.n_layers` with each tuple having 4 tensors of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that don’t have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)`. - **use\_cache** (`bool`, _optional_) — If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`). Returns — `transformers.modeling_outputs.CausalLMOutputWithCrossAttentions` or `tuple(torch.FloatTensor)` Example — The [XmodForCausalLM](/docs/transformers/v4.34.0/en/model_doc/xmod#transformers.XmodForCausalLM) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. ## XmodForMaskedLM ### class transformers.XmodForMaskedLM [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xmod/modeling_xmod.py#L1141) ( config ) Parameters - **config** ([XmodConfig](/docs/transformers/v4.34.0/en/model_doc/xmod#transformers.XmodConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. 
X-MOD Model with a `language modeling` head on top. This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xmod/modeling_xmod.py#L1168) ( input\_ids: typing.Optional\[torch.LongTensor\] = None lang\_ids: typing.Optional\[torch.LongTensor\] = None attention\_mask: typing.Optional\[torch.FloatTensor\] = None token\_type\_ids: typing.Optional\[torch.LongTensor\] = None position\_ids: typing.Optional\[torch.LongTensor\] = None head\_mask: typing.Optional\[torch.FloatTensor\] = None inputs\_embeds: typing.Optional\[torch.FloatTensor\] = None encoder\_hidden\_states: typing.Optional\[torch.FloatTensor\] = None encoder\_attention\_mask: typing.Optional\[torch.FloatTensor\] = None labels: typing.Optional\[torch.LongTensor\] = None output\_attentions: typing.Optional\[bool\] = None output\_hidden\_states: typing.Optional\[bool\] = None return\_dict: typing.Optional\[bool\] = None ) Parameters - **input\_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.**call**()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details. [What are input IDs?](../glossary#input-ids) - **lang\_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, _optional_) — Indices of the language adapters that should be activated for each sample, respectively. Default: the index that corresponds to `self.config.default_language`. - **attention\_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, _optional_) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - **token\_type\_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, _optional_) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: - 0 corresponds to a _sentence A_ token, - 1 corresponds to a _sentence B_ token. [What are token type IDs?](../glossary#token-type-ids) - **position\_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, _optional_) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids) - **head\_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, _optional_) — Mask to nullify selected heads of the self-attention modules. 
Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. - **inputs\_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, _optional_) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model’s internal embedding lookup matrix. - **output\_attentions** (`bool`, _optional_) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. - **output\_hidden\_states** (`bool`, _optional_) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. - **return\_dict** (`bool`, _optional_) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple. - **labels** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, _optional_) — Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ..., config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are ignored (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]` - **kwargs** (`Dict[str, any]`, optional, defaults to _{}_) — Used to hide legacy arguments that have been deprecated. The [XmodForMaskedLM](/docs/transformers/v4.34.0/en/model_doc/xmod#transformers.XmodForMaskedLM) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. ## XmodForSequenceClassification ### class transformers.XmodForSequenceClassification [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xmod/modeling_xmod.py#L1268) ( config ) Parameters - **config** ([XmodConfig](/docs/transformers/v4.34.0/en/model_doc/xmod#transformers.XmodConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. X-MOD Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
#### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xmod/modeling_xmod.py#L1281) ( input\_ids: typing.Optional\[torch.LongTensor\] = None lang\_ids: typing.Optional\[torch.LongTensor\] = None attention\_mask: typing.Optional\[torch.FloatTensor\] = None token\_type\_ids: typing.Optional\[torch.LongTensor\] = None position\_ids: typing.Optional\[torch.LongTensor\] = None head\_mask: typing.Optional\[torch.FloatTensor\] = None inputs\_embeds: typing.Optional\[torch.FloatTensor\] = None labels: typing.Optional\[torch.LongTensor\] = None output\_attentions: typing.Optional\[bool\] = None output\_hidden\_states: typing.Optional\[bool\] = None return\_dict: typing.Optional\[bool\] = None ) Parameters - **input\_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.**call**()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details. [What are input IDs?](../glossary#input-ids) - **lang\_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, _optional_) — Indices of the language adapters that should be activated for each sample, respectively. Default: the index that corresponds to `self.config.default_language`. - **attention\_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, _optional_) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - **token\_type\_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, _optional_) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: - 0 corresponds to a _sentence A_ token, - 1 corresponds to a _sentence B_ token. [What are token type IDs?](../glossary#token-type-ids) - **position\_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, _optional_) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids) - **head\_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, _optional_) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. - **inputs\_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, _optional_) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model’s internal embedding lookup matrix. - **output\_attentions** (`bool`, _optional_) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. - **output\_hidden\_states** (`bool`, _optional_) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. 
- **return\_dict** (`bool`, _optional_) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple. - **labels** (`torch.LongTensor` of shape `(batch_size,)`, _optional_) — Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If `config.num_labels > 1` a classification loss is computed (Cross-Entropy). The [XmodForSequenceClassification](/docs/transformers/v4.34.0/en/model_doc/xmod#transformers.XmodForSequenceClassification) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. ## XmodForMultipleChoice ### class transformers.XmodForMultipleChoice [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xmod/modeling_xmod.py#L1361) ( config ) Parameters - **config** ([XmodConfig](/docs/transformers/v4.34.0/en/model_doc/xmod#transformers.XmodConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. X-MOD Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xmod/modeling_xmod.py#L1373) ( input\_ids: typing.Optional\[torch.LongTensor\] = None lang\_ids: typing.Optional\[torch.LongTensor\] = None token\_type\_ids: typing.Optional\[torch.LongTensor\] = None attention\_mask: typing.Optional\[torch.FloatTensor\] = None labels: typing.Optional\[torch.LongTensor\] = None position\_ids: typing.Optional\[torch.LongTensor\] = None head\_mask: typing.Optional\[torch.FloatTensor\] = None inputs\_embeds: typing.Optional\[torch.FloatTensor\] = None output\_attentions: typing.Optional\[bool\] = None output\_hidden\_states: typing.Optional\[bool\] = None return\_dict: typing.Optional\[bool\] = None ) Parameters - **input\_ids** (`torch.LongTensor` of shape `(batch_size, num_choices, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). 
See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.**call**()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details. [What are input IDs?](../glossary#input-ids) - **lang\_ids** (`torch.LongTensor` of shape `(batch_size, num_choices, sequence_length)`, _optional_) — Indices of the language adapters that should be activated for each sample, respectively. Default: the index that corresponds to `self.config.default_language`. - **attention\_mask** (`torch.FloatTensor` of shape `(batch_size, num_choices, sequence_length)`, _optional_) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - **token\_type\_ids** (`torch.LongTensor` of shape `(batch_size, num_choices, sequence_length)`, _optional_) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: - 0 corresponds to a _sentence A_ token, - 1 corresponds to a _sentence B_ token. [What are token type IDs?](../glossary#token-type-ids) - **position\_ids** (`torch.LongTensor` of shape `(batch_size, num_choices, sequence_length)`, _optional_) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids) - **head\_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, _optional_) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. - **inputs\_embeds** (`torch.FloatTensor` of shape `(batch_size, num_choices, sequence_length, hidden_size)`, _optional_) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model’s internal embedding lookup matrix. - **output\_attentions** (`bool`, _optional_) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. - **output\_hidden\_states** (`bool`, _optional_) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. - **return\_dict** (`bool`, _optional_) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple. - **labels** (`torch.LongTensor` of shape `(batch_size,)`, _optional_) — Labels for computing the multiple choice classification loss. Indices should be in `[0, ..., num_choices-1]` where `num_choices` is the size of the second dimension of the input tensors. (See `input_ids` above) The [XmodForMultipleChoice](/docs/transformers/v4.34.0/en/model_doc/xmod#transformers.XmodForMultipleChoice) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
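Below is a minimal usage sketch for the multiple-choice head. The `facebook/xmod-base` checkpoint name, the `en_XX` adapter code, and the prompt/choices are illustrative assumptions rather than values taken from this page, and the classification head of the base checkpoint is not fine-tuned, so the prediction is only meaningful after training on a multiple-choice dataset.

```python
import torch
from transformers import AutoTokenizer, XmodForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("facebook/xmod-base")  # assumed checkpoint
model = XmodForMultipleChoice.from_pretrained("facebook/xmod-base")
model.set_default_language("en_XX")  # activate the English adapter; the language code is an assumption

prompt = "The chef tasted the soup and"
choices = ["added more salt.", "painted the fence."]

# Pair the prompt with every choice, then add the num_choices dimension expected by the model:
# the input tensors must have shape (batch_size, num_choices, sequence_length).
encoding = tokenizer([prompt] * len(choices), choices, padding=True, return_tensors="pt")
inputs = {name: tensor.unsqueeze(0) for name, tensor in encoding.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # shape (batch_size, num_choices)

print(choices[logits.argmax(dim=-1).item()])
```

For training, `labels` of shape `(batch_size,)` holding the index of the correct choice are passed alongside the same inputs to obtain the multiple choice classification loss described above.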
## XmodForTokenClassification ### class transformers.XmodForTokenClassification [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xmod/modeling_xmod.py#L1450) ( config ) Parameters - **config** ([XmodConfig](/docs/transformers/v4.34.0/en/model_doc/xmod#transformers.XmodConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. X-MOD Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xmod/modeling_xmod.py#L1466) ( input\_ids: typing.Optional\[torch.LongTensor\] = None lang\_ids: typing.Optional\[torch.LongTensor\] = None attention\_mask: typing.Optional\[torch.FloatTensor\] = None token\_type\_ids: typing.Optional\[torch.LongTensor\] = None position\_ids: typing.Optional\[torch.LongTensor\] = None head\_mask: typing.Optional\[torch.FloatTensor\] = None inputs\_embeds: typing.Optional\[torch.FloatTensor\] = None labels: typing.Optional\[torch.LongTensor\] = None output\_attentions: typing.Optional\[bool\] = None output\_hidden\_states: typing.Optional\[bool\] = None return\_dict: typing.Optional\[bool\] = None ) Parameters - **input\_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.**call**()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details. [What are input IDs?](../glossary#input-ids) - **lang\_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, _optional_) — Indices of the language adapters that should be activated for each sample, respectively. Default: the index that corresponds to `self.config.default_language`. - **attention\_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, _optional_) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - **token\_type\_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, _optional_) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: - 0 corresponds to a _sentence A_ token, - 1 corresponds to a _sentence B_ token. 
[What are token type IDs?](../glossary#token-type-ids) - **position\_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, _optional_) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids) - **head\_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, _optional_) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. - **inputs\_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, _optional_) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model’s internal embedding lookup matrix. - **output\_attentions** (`bool`, _optional_) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. - **output\_hidden\_states** (`bool`, _optional_) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. - **return\_dict** (`bool`, _optional_) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple. - **labels** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, _optional_) — Labels for computing the token classification loss. Indices should be in `[0, ..., config.num_labels - 1]`. The [XmodForTokenClassification](/docs/transformers/v4.34.0/en/model_doc/xmod#transformers.XmodForTokenClassification) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. ## XmodForQuestionAnswering ### class transformers.XmodForQuestionAnswering [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xmod/modeling_xmod.py#L1552) ( config ) Parameters - **config** ([XmodConfig](/docs/transformers/v4.34.0/en/model_doc/xmod#transformers.XmodConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. X-MOD Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute `span start logits` and `span end logits`). This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. 
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xmod/modeling_xmod.py#L1564) ( input\_ids: typing.Optional\[torch.LongTensor\] = None lang\_ids: typing.Optional\[torch.LongTensor\] = None attention\_mask: typing.Optional\[torch.FloatTensor\] = None token\_type\_ids: typing.Optional\[torch.LongTensor\] = None position\_ids: typing.Optional\[torch.LongTensor\] = None head\_mask: typing.Optional\[torch.FloatTensor\] = None inputs\_embeds: typing.Optional\[torch.FloatTensor\] = None start\_positions: typing.Optional\[torch.LongTensor\] = None end\_positions: typing.Optional\[torch.LongTensor\] = None output\_attentions: typing.Optional\[bool\] = None output\_hidden\_states: typing.Optional\[bool\] = None return\_dict: typing.Optional\[bool\] = None ) Parameters - **input\_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.**call**()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details. [What are input IDs?](../glossary#input-ids) - **lang\_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, _optional_) — Indices of the language adapters that should be activated for each sample, respectively. Default: the index that corresponds to `self.config.default_language`. - **attention\_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, _optional_) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - **token\_type\_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, _optional_) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: - 0 corresponds to a _sentence A_ token, - 1 corresponds to a _sentence B_ token. [What are token type IDs?](../glossary#token-type-ids) - **position\_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, _optional_) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids) - **head\_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, _optional_) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. - **inputs\_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, _optional_) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model’s internal embedding lookup matrix. - **output\_attentions** (`bool`, _optional_) — Whether or not to return the attentions tensors of all attention layers. 
See `attentions` under returned tensors for more detail.
- **output\_hidden\_states** (`bool`, _optional_) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return\_dict** (`bool`, _optional_) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.
- **start\_positions** (`torch.LongTensor` of shape `(batch_size,)`, _optional_) — Labels for the position (index) of the start of the labelled span, used for computing the token classification loss. Positions are clamped to the length of the sequence (`sequence_length`). Positions outside of the sequence are not taken into account for computing the loss.
- **end\_positions** (`torch.LongTensor` of shape `(batch_size,)`, _optional_) — Labels for the position (index) of the end of the labelled span, used for computing the token classification loss. Positions are clamped to the length of the sequence (`sequence_length`). Positions outside of the sequence are not taken into account for computing the loss.

The [XmodForQuestionAnswering](/docs/transformers/v4.34.0/en/model_doc/xmod#transformers.XmodForQuestionAnswering) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
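Below is a minimal usage sketch for extractive question answering. The checkpoint name, adapter code, and question/context are illustrative assumptions; the span head of the base checkpoint is randomly initialized, so a meaningful answer requires a checkpoint fine-tuned on a dataset such as SQuAD.

```python
import torch
from transformers import AutoTokenizer, XmodForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("facebook/xmod-base")  # assumed checkpoint
model = XmodForQuestionAnswering.from_pretrained("facebook/xmod-base")
model.set_default_language("en_XX")  # activate the English adapter; the language code is an assumption

question = "Where do penguins live?"
context = "Penguins are flightless birds that live almost exclusively in the Southern Hemisphere."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Take the most likely start/end token positions and decode the span between them.
start = outputs.start_logits.argmax(dim=-1).item()
end = outputs.end_logits.argmax(dim=-1).item()
answer = tokenizer.decode(inputs["input_ids"][0, start : end + 1], skip_special_tokens=True)
print(answer)
```

During fine-tuning, `start_positions` and `end_positions` (described above) are passed alongside the same inputs so the model returns the span classification loss.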
Models&quot;,&quot;id&quot;:&quot;model_doc/encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encoder-decoder&quot;},{&quot;title&quot;:&quot;ERNIE&quot;,&quot;id&quot;:&quot;model_doc/ernie&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie&quot;},{&quot;title&quot;:&quot;ErnieM&quot;,&quot;id&quot;:&quot;model_doc/ernie_m&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie_m&quot;},{&quot;title&quot;:&quot;ESM&quot;,&quot;id&quot;:&quot;model_doc/esm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/esm&quot;},{&quot;title&quot;:&quot;Falcon&quot;,&quot;id&quot;:&quot;model_doc/falcon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/falcon&quot;},{&quot;title&quot;:&quot;FLAN-T5&quot;,&quot;id&quot;:&quot;model_doc/flan-t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-t5&quot;},{&quot;title&quot;:&quot;FLAN-UL2&quot;,&quot;id&quot;:&quot;model_doc/flan-ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-ul2&quot;},{&quot;title&quot;:&quot;FlauBERT&quot;,&quot;id&quot;:&quot;model_doc/flaubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flaubert&quot;},{&quot;title&quot;:&quot;FNet&quot;,&quot;id&quot;:&quot;model_doc/fnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fnet&quot;},{&quot;title&quot;:&quot;FSMT&quot;,&quot;id&quot;:&quot;model_doc/fsmt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fsmt&quot;},{&quot;title&quot;:&quot;Funnel Transformer&quot;,&quot;id&quot;:&quot;model_doc/funnel&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/funnel&quot;},{&quot;title&quot;:&quot;GPT&quot;,&quot;id&quot;:&quot;model_doc/openai-gpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/openai-gpt&quot;},{&quot;title&quot;:&quot;GPT Neo&quot;,&quot;id&quot;:&quot;model_doc/gpt_neo&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neo&quot;},{&quot;title&quot;:&quot;GPT NeoX&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox&quot;},{&quot;title&quot;:&quot;GPT NeoX Japanese&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox_japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese&quot;},{&quot;title&quot;:&quot;GPT-J&quot;,&quot;id&quot;:&quot;model_doc/gptj&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptj&quot;},{&quot;title&quot;:&quot;GPT2&quot;,&quot;id&quot;:&quot;model_doc/gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt2&quot;},{&quot;title&quot;:&quot;GPTBigCode&quot;,&quot;id&quot;:&quot;model_doc/gpt_bigcode&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode&quot;},{&quot;title&quot;:&quot;GPTSAN 
Japanese&quot;,&quot;id&quot;:&quot;model_doc/gptsan-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese&quot;},{&quot;title&quot;:&quot;GPTSw3&quot;,&quot;id&quot;:&quot;model_doc/gpt-sw3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt-sw3&quot;},{&quot;title&quot;:&quot;HerBERT&quot;,&quot;id&quot;:&quot;model_doc/herbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/herbert&quot;},{&quot;title&quot;:&quot;I-BERT&quot;,&quot;id&quot;:&quot;model_doc/ibert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ibert&quot;},{&quot;title&quot;:&quot;Jukebox&quot;,&quot;id&quot;:&quot;model_doc/jukebox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/jukebox&quot;},{&quot;title&quot;:&quot;LED&quot;,&quot;id&quot;:&quot;model_doc/led&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/led&quot;},{&quot;title&quot;:&quot;LLaMA&quot;,&quot;id&quot;:&quot;model_doc/llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama&quot;},{&quot;title&quot;:&quot;Llama2&quot;,&quot;id&quot;:&quot;model_doc/llama2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama2&quot;},{&quot;title&quot;:&quot;Longformer&quot;,&quot;id&quot;:&quot;model_doc/longformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longformer&quot;},{&quot;title&quot;:&quot;LongT5&quot;,&quot;id&quot;:&quot;model_doc/longt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longt5&quot;},{&quot;title&quot;:&quot;LUKE&quot;,&quot;id&quot;:&quot;model_doc/luke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/luke&quot;},{&quot;title&quot;:&quot;M2M100&quot;,&quot;id&quot;:&quot;model_doc/m2m_100&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/m2m_100&quot;},{&quot;title&quot;:&quot;MarianMT&quot;,&quot;id&quot;:&quot;model_doc/marian&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/marian&quot;},{&quot;title&quot;:&quot;MarkupLM&quot;,&quot;id&quot;:&quot;model_doc/markuplm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/markuplm&quot;},{&quot;title&quot;:&quot;MBart and 
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot;PLBart&quot;,&quot;id&quot;
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/
model_doc/wavlm&quot;},{&quot;title&quot;:&quot;Whisper&quot;,&quot;id&quot;:&quot;model_doc/whisper&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/whisper&quot;},{&quot;title&quot;:&quot;XLS-R&quot;,&quot;id&quot;:&quot;model_doc/xls_r&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xls_r&quot;},{&quot;title&quot;:&quot;XLSR-Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/xlsr_wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2&quot;}]},{&quot;title&quot;:&quot;Multimodal models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALIGN&quot;,&quot;id&quot;:&quot;model_doc/align&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/align&quot;},{&quot;title&quot;:&quot;AltCLIP&quot;,&quot;id&quot;:&quot;model_doc/altclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/altclip&quot;},{&quot;title&quot;:&quot;BLIP&quot;,&quot;id&quot;:&quot;model_doc/blip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip&quot;},{&quot;title&quot;:&quot;BLIP-2&quot;,&quot;id&quot;:&quot;model_doc/blip-2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip-2&quot;},{&quot;title&quot;:&quot;BridgeTower&quot;,&quot;id&quot;:&quot;model_doc/bridgetower&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bridgetower&quot;},{&quot;title&quot;:&quot;BROS&quot;,&quot;id&quot;:&quot;model_doc/bros&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bros&quot;},{&quot;title&quot;:&quot;Chinese-CLIP&quot;,&quot;id&quot;:&quot;model_doc/chinese_clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/chinese_clip&quot;},{&quot;title&quot;:&quot;CLIP&quot;,&quot;id&quot;:&quot;model_doc/clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clip&quot;},{&quot;title&quot;:&quot;CLIPSeg&quot;,&quot;id&quot;:&quot;model_doc/clipseg&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clipseg&quot;},{&quot;title&quot;:&quot;Data2Vec&quot;,&quot;id&quot;:&quot;model_doc/data2vec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/data2vec&quot;},{&quot;title&quot;:&quot;DePlot&quot;,&quot;id&quot;:&quot;model_doc/deplot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deplot&quot;},{&quot;title&quot;:&quot;Donut&quot;,&quot;id&quot;:&quot;model_doc/donut&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/donut&quot;},{&quot;title&quot;:&quot;FLAVA&quot;,&quot;id&quot;:&quot;model_doc/flava&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flava&quot;},{&quot;title&quot;:&quot;GIT&quot;,&quot;id&quot;:&quot;model_doc/git&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/git&quot;},{&quot;title&quot;:&quot;GroupViT&quot;,&quot;id&quot;:&quot;model_doc/groupvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/groupvit&quot;},{&quot;title&quot;:&quot;IDEFICS&quot;,&quot;id&quot;:&quot;model_doc/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/idefics&quot;},{&quot;title&quot;:&quot;InstructBLIP&quot;,&quot;id&quot;:&quot;model_doc/instructblip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/instructblip&quot;},{&quot;title&quot;:&quot;LayoutLM&quot;,&quot;id&quot;:&quot;model_doc/layoutlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlm&quot;},{&quot;title&quot;:&quot;LayoutLMV2&quot;,&quot;i
d&quot;:&quot;model_doc/layoutlmv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv2&quot;},{&quot;title&quot;:&quot;LayoutLMV3&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv3&quot;},{&quot;title&quot;:&quot;LayoutXLM&quot;,&quot;id&quot;:&quot;model_doc/layoutxlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutxlm&quot;},{&quot;title&quot;:&quot;LiLT&quot;,&quot;id&quot;:&quot;model_doc/lilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lilt&quot;},{&quot;title&quot;:&quot;LXMERT&quot;,&quot;id&quot;:&quot;model_doc/lxmert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lxmert&quot;},{&quot;title&quot;:&quot;MatCha&quot;,&quot;id&quot;:&quot;model_doc/matcha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/matcha&quot;},{&quot;title&quot;:&quot;MGP-STR&quot;,&quot;id&quot;:&quot;model_doc/mgp-str&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mgp-str&quot;},{&quot;title&quot;:&quot;Nougat&quot;,&quot;id&quot;:&quot;model_doc/nougat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nougat&quot;},{&quot;title&quot;:&quot;OneFormer&quot;,&quot;id&quot;:&quot;model_doc/oneformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/oneformer&quot;},{&quot;title&quot;:&quot;OWL-ViT&quot;,&quot;id&quot;:&quot;model_doc/owlvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/owlvit&quot;},{&quot;title&quot;:&quot;Perceiver&quot;,&quot;id&quot;:&quot;model_doc/perceiver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/perceiver&quot;},{&quot;title&quot;:&quot;Pix2Struct&quot;,&quot;id&quot;:&quot;model_doc/pix2struct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pix2struct&quot;},{&quot;title&quot;:&quot;Segment Anything&quot;,&quot;id&quot;:&quot;model_doc/sam&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sam&quot;},{&quot;title&quot;:&quot;Speech Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/speech-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder&quot;},{&quot;title&quot;:&quot;TAPAS&quot;,&quot;id&quot;:&quot;model_doc/tapas&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapas&quot;},{&quot;title&quot;:&quot;TrOCR&quot;,&quot;id&quot;:&quot;model_doc/trocr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trocr&quot;},{&quot;title&quot;:&quot;TVLT&quot;,&quot;id&quot;:&quot;model_doc/tvlt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tvlt&quot;},{&quot;title&quot;:&quot;ViLT&quot;,&quot;id&quot;:&quot;model_doc/vilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vilt&quot;},{&quot;title&quot;:&quot;Vision Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/vision-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder&quot;},{&quot;title&quot;:&quot;Vision Text Dual 
Encoder&quot;,&quot;id&quot;:&quot;model_doc/vision-text-dual-encoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder&quot;},{&quot;title&quot;:&quot;VisualBERT&quot;,&quot;id&quot;:&quot;model_doc/visual_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/visual_bert&quot;},{&quot;title&quot;:&quot;X-CLIP&quot;,&quot;id&quot;:&quot;model_doc/xclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xclip&quot;}]},{&quot;title&quot;:&quot;Reinforcement learning models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Decision Transformer&quot;,&quot;id&quot;:&quot;model_doc/decision_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/decision_transformer&quot;},{&quot;title&quot;:&quot;Trajectory Transformer&quot;,&quot;id&quot;:&quot;model_doc/trajectory_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer&quot;}]},{&quot;title&quot;:&quot;Time series models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Autoformer&quot;,&quot;id&quot;:&quot;model_doc/autoformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/autoformer&quot;},{&quot;title&quot;:&quot;Informer&quot;,&quot;id&quot;:&quot;model_doc/informer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/informer&quot;},{&quot;title&quot;:&quot;Time Series Transformer&quot;,&quot;id&quot;:&quot;model_doc/time_series_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/time_series_transformer&quot;}]},{&quot;title&quot;:&quot;Graph models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Graphormer&quot;,&quot;id&quot;:&quot;model_doc/graphormer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/graphormer&quot;}]}]},{&quot;title&quot;:&quot;Internal Helpers&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Custom Layers and Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/modeling_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/modeling_utils&quot;},{&quot;title&quot;:&quot;Utilities for pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/pipelines_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/pipelines_utils&quot;},{&quot;title&quot;:&quot;Utilities for Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/tokenization_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/tokenization_utils&quot;},{&quot;title&quot;:&quot;Utilities for Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/trainer_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/trainer_utils&quot;},{&quot;title&quot;:&quot;Utilities for Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/generation_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/generation_utils&quot;},{&quot;title&quot;:&quot;Utilities for Image Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/image_processing_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/image_processing_utils&quot;},{&quot;title&quot;:&quot;Utilities for Audio 
processing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/audio_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/audio_utils&quot;},{&quot;title&quot;:&quot;General Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/file_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/file_utils&quot;},{&quot;title&quot;:&quot;Utilities for Time Series&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/time_series_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/time_series_utils&quot;}]}]}],&quot;chapterId&quot;:&quot;model_doc/xmod&quot;,&quot;docType&quot;:&quot;docs&quot;,&quot;isLoggedIn&quot;:false,&quot;lang&quot;:&quot;en&quot;,&quot;langs&quot;:[&quot;de&quot;,&quot;en&quot;,&quot;es&quot;,&quot;fr&quot;,&quot;it&quot;,&quot;ko&quot;,&quot;pt&quot;,&quot;zh&quot;],&quot;library&quot;:&quot;transformers&quot;,&quot;theme&quot;:&quot;light&quot;,&quot;version&quot;:&quot;v4.34.0&quot;,&quot;versions&quot;:[{&quot;version&quot;:&quot;main&quot;},{&quot;version&quot;:&quot;v4.34.0&quot;},{&quot;version&quot;:&quot;v4.33.3&quot;},{&quot;version&quot;:&quot;v4.33.2&quot;},{&quot;version&quot;:&quot;v4.33.0&quot;},{&quot;version&quot;:&quot;v4.32.1&quot;},{&quot;version&quot;:&quot;v4.32.0&quot;},{&quot;version&quot;:&quot;v4.31.0&quot;},{&quot;version&quot;:&quot;v4.30.0&quot;},{&quot;version&quot;:&quot;v4.29.1&quot;},{&quot;version&quot;:&quot;v4.29.0&quot;},{&quot;version&quot;:&quot;v4.28.1&quot;},{&quot;version&quot;:&quot;v4.28.0&quot;},{&quot;version&quot;:&quot;v4.27.2&quot;},{&quot;version&quot;:&quot;v4.27.1&quot;},{&quot;version&quot;:&quot;v4.27.0&quot;},{&quot;version&quot;:&quot;v4.26.1&quot;},{&quot;version&quot;:&quot;v4.26.0&quot;},{&quot;version&quot;:&quot;v4.25.1&quot;},{&quot;version&quot;:&quot;v4.24.0&quot;},{&quot;version&quot;:&quot;v4.23.1&quot;},{&quot;version&quot;:&quot;v4.23.0&quot;},{&quot;version&quot;:&quot;v4.22.2&quot;},{&quot;version&quot;:&quot;v4.22.1&quot;},{&quot;version&quot;:&quot;v4.22.0&quot;},{&quot;version&quot;:&quot;v4.21.3&quot;},{&quot;version&quot;:&quot;v4.21.2&quot;},{&quot;version&quot;:&quot;v4.21.1&quot;},{&quot;version&quot;:&quot;v4.21.0&quot;},{&quot;version&quot;:&quot;v4.20.1&quot;},{&quot;version&quot;:&quot;v4.20.0&quot;},{&quot;version&quot;:&quot;v4.19.4&quot;},{&quot;version&quot;:&quot;v4.19.3&quot;},{&quot;version&quot;:&quot;v4.19.2&quot;},{&quot;version&quot;:&quot;v4.19.0&quot;},{&quot;version&quot;:&quot;v4.18.0&quot;},{&quot;version&quot;:&quot;v4.17.0&quot;},{&quot;version&quot;:&quot;v4.16.2&quot;},{&quot;version&quot;:&quot;v4.16.1&quot;},{&quot;version&quot;:&quot;v4.16.0&quot;},{&quot;version&quot;:&quot;v4.15.0&quot;},{&quot;version&quot;:&quot;v4.14.1&quot;},{&quot;version&quot;:&quot;v4.13.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.5&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.4&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.10.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.10
.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.10.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.0.0&quot;},{&quot;version&quot;:&quot;doc-bu
ilder-html&quot;}],&quot;title&quot;:&quot;X-MOD&quot;}" data-target="SideMenu"> <div class="z-2 w-full flex-none lg:block lg:h-screen lg:w-[270px] 2xl:w-[300px] false"><div class="shadow-alternate flex h-16 w-full items-center rounded-b-xl border-b bg-white text-lg leading-tight lg:hidden"><div class="flex flex-1 cursor-pointer flex-col justify-center self-stretch pl-6"><p class="text-sm text-gray-400 first-letter:capitalize">Transformers documentation</p> <div class="flex items-center"><p class="font-semibold">X-MOD</p> <svg class="text-xl false" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></div></div> <button class="hover:shadow-alternate group ml-auto mr-6 inline-flex flex-none cursor-pointer rounded-xl border p-2"><svg class="text-gray-500 group-hover:text-gray-700" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg></button></div> <div class="hidden h-32 flex-col justify-between border-r border-b bg-white bg-gradient-to-r p-4 lg:flex from-orange-50 to-white dark:from-gray-900 dark:to-gray-950"><div class="relative "><button class=" " type="button"><h1 class="flex items-center text-lg font-bold leading-tight first-letter:capitalize"><div class="mr-1.5 h-1.5 w-1.5 rounded-full bg-orange-500 flex-none"></div> Transformers <span><svg class="opacity-70 " xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></span></h1> </button> </div> <button class="shadow-alternate flex w-full items-center rounded-full border bg-white px-2 py-1 text-left text-sm text-gray-400 ring-indigo-200 hover:bg-indigo-50 hover:ring-2 dark:border-gray-700 dark:ring-yellow-600 dark:hover:bg-gray-900 dark:hover:text-yellow-500"><svg class="flex-none mr-1.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg> <div>Search documentation</div> <span class="ml-auto rounded border border-gray-200 bg-gray-100 px-0.5 text-xs dark:border-gray-800 dark:bg-gray-800"><kbd class="font-sans">⌘K</kbd></span></button> <div class="flex items-center"><select class="form-input mr-1 !mt-0 !w-20 rounded !border border-gray-200 p-1 text-xs uppercase dark:!text-gray-400"><option value="0">main</option><option value="1">v4.34.0</option><option value="2">v4.33.3</option><option value="3">v4.32.1</option><option value="4">v4.31.0</option><option value="5">v4.30.0</option><option value="6">v4.29.1</option><option value="7">v4.28.1</option><option value="8">v4.27.2</option><option value="9">v4.26.1</option><option value="10">v4.25.1</option><option value="11">v4.24.0</option><option 
value="12">v4.23.1</option><option value="13">v4.22.2</option><option value="14">v4.21.3</option><option value="15">v4.20.1</option><option value="16">v4.19.4</option><option value="17">v4.18.0</option><option value="18">v4.17.0</option><option value="19">v4.16.2</option><option value="20">v4.15.0</option><option value="21">v4.14.1</option><option value="22">v4.13.0</option><option value="23">v4.12.5</option><option value="24">v4.11.3</option><option value="25">v4.10.1</option><option value="26">v4.9.2</option><option value="27">v4.8.2</option><option value="28">v4.7.0</option><option value="29">v4.6.0</option><option value="30">v4.5.1</option><option value="31">v4.4.2</option><option value="32">v4.3.3</option><option value="33">v4.2.2</option><option value="34">v4.1.1</option><option value="35">v4.0.1</option><option value="36">v3.5.1</option><option value="37">v3.4.0</option><option value="38">v3.3.1</option><option value="39">v3.2.0</option><option value="40">v3.1.0</option><option value="41">v3.0.2</option><option value="42">v2.11.0</option><option value="43">v2.10.0</option><option value="44">v2.9.1</option><option value="45">v2.8.0</option><option value="46">v2.7.0</option><option value="47">v2.6.0</option><option value="48">v2.5.1</option><option value="49">v2.4.1</option><option value="50">v2.3.0</option><option value="51">v2.2.2</option><option value="52">v2.1.1</option><option value="53">v2.0.0</option><option value="54">v1.2.0</option><option value="55">v1.1.0</option><option value="56">v1.0.0</option><option value="57">doc-builder-html</option></select> <select class="form-input mr-1 rounded border-gray-200 p-1 text-xs dark:!text-gray-400 !w-12 !mt-0 !border"><option value="de">DE</option><option value="en">EN</option><option value="es">ES</option><option value="fr">FR</option><option value="it">IT</option><option value="ko">KO</option><option value="pt">PT</option><option value="zh">ZH</option></select> <div class="relative inline-block"><button class="rounded-full border border-gray-100 py-1 pl-2 pr-0.5 flex items-center text-sm text-gray-500 bg-white hover:bg-yellow-50 hover:border-yellow-200 dark:hover:bg-gray-800 dark:hover:border-gray-950 " type="button"><svg class="mr-1.5 text-yellow-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24" fill="currentColor"><path d="M6.05 4.14l-.39-.39a.993.993 0 0 0-1.4 0l-.01.01a.984.984 0 0 0 0 1.4l.39.39c.39.39 1.01.39 1.4 0l.01-.01a.984.984 0 0 0 0-1.4zM3.01 10.5H1.99c-.55 0-.99.44-.99.99v.01c0 .55.44.99.99.99H3c.56.01 1-.43 1-.98v-.01c0-.56-.44-1-.99-1zm9-9.95H12c-.56 0-1 .44-1 .99v.96c0 .55.44.99.99.99H12c.56.01 1-.43 1-.98v-.97c0-.55-.44-.99-.99-.99zm7.74 3.21c-.39-.39-1.02-.39-1.41-.01l-.39.39a.984.984 0 0 0 0 1.4l.01.01c.39.39 1.02.39 1.4 0l.39-.39a.984.984 0 0 0 0-1.4zm-1.81 15.1l.39.39a.996.996 0 1 0 1.41-1.41l-.39-.39a.993.993 0 0 0-1.4 0c-.4.4-.4 1.02-.01 1.41zM20 11.49v.01c0 .55.44.99.99.99H22c.55 0 .99-.44.99-.99v-.01c0-.55-.44-.99-.99-.99h-1.01c-.55 0-.99.44-.99.99zM12 5.5c-3.31 0-6 2.69-6 6s2.69 6 6 6s6-2.69 6-6s-2.69-6-6-6zm-.01 16.95H12c.55 0 .99-.44.99-.99v-.96c0-.55-.44-.99-.99-.99h-.01c-.55 0-.99.44-.99.99v.96c0 .55.44.99.99.99zm-7.74-3.21c.39.39 1.02.39 1.41 0l.39-.39a.993.993 0 0 0 0-1.4l-.01-.01a.996.996 0 0 0-1.41 0l-.39.39c-.38.4-.38 1.02.01 1.41z"></path></svg> </button> </div> <a href="https://github.com/huggingface/transformers" class="group ml-auto 
text-xs text-gray-500 hover:text-gray-700 hover:underline dark:hover:text-gray-300"><svg class="inline-block text-gray-500 group-hover:text-gray-700 dark:group-hover:text-gray-300 mr-1.5 -mt-1 w-4 h-4" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1.03em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 250"><path d="M128.001 0C57.317 0 0 57.307 0 128.001c0 56.554 36.676 104.535 87.535 121.46c6.397 1.185 8.746-2.777 8.746-6.158c0-3.052-.12-13.135-.174-23.83c-35.61 7.742-43.124-15.103-43.124-15.103c-5.823-14.795-14.213-18.73-14.213-18.73c-11.613-7.944.876-7.78.876-7.78c12.853.902 19.621 13.19 19.621 13.19c11.417 19.568 29.945 13.911 37.249 10.64c1.149-8.272 4.466-13.92 8.127-17.116c-28.431-3.236-58.318-14.212-58.318-63.258c0-13.975 5-25.394 13.188-34.358c-1.329-3.224-5.71-16.242 1.24-33.874c0 0 10.749-3.44 35.21 13.121c10.21-2.836 21.16-4.258 32.038-4.307c10.878.049 21.837 1.47 32.066 4.307c24.431-16.56 35.165-13.12 35.165-13.12c6.967 17.63 2.584 30.65 1.255 33.873c8.207 8.964 13.173 20.383 13.173 34.358c0 49.163-29.944 59.988-58.447 63.157c4.591 3.972 8.682 11.762 8.682 23.704c0 17.126-.148 30.91-.148 35.126c0 3.407 2.304 7.398 8.792 6.14C219.37 232.5 256 184.537 256 128.002C256 57.307 198.691 0 128.001 0zm-80.06 182.34c-.282.636-1.283.827-2.194.39c-.929-.417-1.45-1.284-1.15-1.922c.276-.655 1.279-.838 2.205-.399c.93.418 1.46 1.293 1.139 1.931zm6.296 5.618c-.61.566-1.804.303-2.614-.591c-.837-.892-.994-2.086-.375-2.66c.63-.566 1.787-.301 2.626.591c.838.903 1 2.088.363 2.66zm4.32 7.188c-.785.545-2.067.034-2.86-1.104c-.784-1.138-.784-2.503.017-3.05c.795-.547 2.058-.055 2.861 1.075c.782 1.157.782 2.522-.019 3.08zm7.304 8.325c-.701.774-2.196.566-3.29-.49c-1.119-1.032-1.43-2.496-.726-3.27c.71-.776 2.213-.558 3.315.49c1.11 1.03 1.45 2.505.701 3.27zm9.442 2.81c-.31 1.003-1.75 1.459-3.199 1.033c-1.448-.439-2.395-1.613-2.103-2.626c.301-1.01 1.747-1.484 3.207-1.028c1.446.436 2.396 1.602 2.095 2.622zm10.744 1.193c.036 1.055-1.193 1.93-2.715 1.95c-1.53.034-2.769-.82-2.786-1.86c0-1.065 1.202-1.932 2.733-1.958c1.522-.03 2.768.818 2.768 1.868zm10.555-.405c.182 1.03-.875 2.088-2.387 2.37c-1.485.271-2.861-.365-3.05-1.386c-.184-1.056.893-2.114 2.376-2.387c1.514-.263 2.868.356 3.061 1.403z" fill="currentColor"></path></svg> </a></div></div> <nav class="top-32 hidden lg:flex absolute bottom-0 left-0 w-full flex-col overflow-y-auto border-r px-4 pt-3 pb-16 text-[0.95rem] lg:w-[270px] 2xl:w-[300px]"> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Get started</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/index">🤗 Transformers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/quicktour">Quick tour </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 
ml-2" href="/docs/transformers/v4.34.0/en/installation">Installation </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Tutorials</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pipeline_tutorial">Run inference with pipelines </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/autoclass_tutorial">Write portable code with AutoClass </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/preprocessing">Preprocess data </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/training">Fine-tune a pretrained model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/run_scripts">Train with a script </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/accelerate">Set up distributed training with 🤗 Accelerate </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/peft">Load and train adapters with 🤗 PEFT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/model_sharing">Share your model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/transformers_agents">Agents </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/llm_tutorial">Generation with LLMs </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Task Guides</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute 
after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Natural Language Processing</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Computer Vision</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Generation</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Prompting</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Developer guides</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/fast_tokenizers">Use fast tokenizers from 🤗 Tokenizers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/multilingual">Run inference with multilingual models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/create_a_model">Use model-specific APIs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/custom_models">Share a custom model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 
hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/chat_templating">Templates for chat models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/sagemaker">Run training on Amazon SageMaker </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/serialization">Export to ONNX </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tflite">Export to TFLite </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/torchscript">Export to TorchScript </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/benchmarks">Benchmarks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/notebooks">Notebooks with examples </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/community">Community resources </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/custom_tools">Custom Tools and Prompts </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/troubleshooting">Troubleshoot </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Performance and scalability</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/performance">Overview </a><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Efficient training techniques</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" 
href="/docs/transformers/v4.34.0/en/perf_train_gpu_one">Methods and tools for efficient training on a single GPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_gpu_many">Multiple GPUs and parallelism </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_cpu">Efficient training on CPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_cpu_many">Distributed CPU training </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_tpu">Training on TPUs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_tpu_tf">Training on TPU with TensorFlow </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_special">Training on Specialized Hardware </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_hardware">Custom hardware for training </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/hpo_train">Hyperparameter Search using Trainer API </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Optimizing inference</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_cpu">Inference on CPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_gpu_one">Inference on one GPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_gpu_many">Inference on many GPUs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_special">Inference on Specialized Hardware </a> </div><a 
data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/big_models">Instantiating a big model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/debugging">Troubleshooting </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tf_xla">XLA Integration for TensorFlow Models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/perf_torch_compile">Optimize inference using `torch.compile()` </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Contribute</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/contributing">How to contribute to transformers? </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/add_new_model">How to add a model to 🤗 Transformers? </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/add_tensorflow_model">How to convert a 🤗 Transformers model to TensorFlow? </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/add_new_pipeline">How to add a pipeline to 🤗 Transformers? 
</a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/testing">Testing </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pr_checks">Checks on a Pull Request </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Conceptual guides</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/philosophy">Philosophy </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/glossary">Glossary </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/task_summary">What 🤗 Transformers can do </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tasks_explained">How 🤗 Transformers solve tasks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/model_summary">The Transformer model family </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tokenizer_summary">Summary of the tokenizers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/attention">Attention mechanisms </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pad_truncation">Padding and truncation </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/bertology">BERTology </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/perplexity">Perplexity of fixed-length models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pipeline_webserver">Pipelines for webserver inference </a><a 
data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/model_memory_anatomy">Model training anatomy </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>API</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Main Classes</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/agent">Agents and Tools </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/model_doc/auto">Auto Classes </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/callback">Callbacks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/configuration">Configuration </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/data_collator">Data Collator </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/keras_callbacks">Keras callbacks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/logging">Logging </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/model">Models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/text_generation">Text Generation </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/onnx">ONNX </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 
first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/optimizer_schedules">Optimization </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/output">Model outputs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/pipelines">Pipelines </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/processors">Processors </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/quantization">Quantization </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/tokenizer">Tokenizer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/trainer">Trainer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/deepspeed">DeepSpeed Integration </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/feature_extractor">Feature Extractor </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/image_processor">Image Processor </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Models</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Text models</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/albert">ALBERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px 
hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bart">BART </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/barthez">BARThez </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bartpho">BARTpho </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bert">BERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bert-generation">BertGeneration </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bert-japanese">BertJapanese </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bertweet">Bertweet </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/big_bird">BigBird </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bigbird_pegasus">BigBirdPegasus </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/biogpt">BioGpt </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/blenderbot">Blenderbot </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/blenderbot-small">Blenderbot Small </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bloom">BLOOM </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bort">BORT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/byt5">ByT5 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" 
href="/docs/transformers/v4.34.0/en/model_doc/camembert">CamemBERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/canine">CANINE </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/codegen">CodeGen </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/code_llama">CodeLlama </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/convbert">ConvBERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/cpm">CPM </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/cpmant">CPMANT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/ctrl">CTRL </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/deberta">DeBERTa </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/deberta-v2">DeBERTa-v2 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/dialogpt">DialoGPT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/distilbert">DistilBERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/dpr">DPR </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/electra">ELECTRA </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/encoder-decoder">Encoder Decoder Models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/ernie">ERNIE </a><a 
data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/ernie_m">ErnieM </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/esm">ESM </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/falcon">Falcon </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/flan-t5">FLAN-T5 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/flan-ul2">FLAN-UL2 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/flaubert">FlauBERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/fnet">FNet </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/fsmt">FSMT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/funnel">Funnel Transformer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/openai-gpt">GPT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gpt_neo">GPT Neo </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gpt_neox">GPT NeoX </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese">GPT NeoX Japanese </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gptj">GPT-J </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gpt2">GPT2 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 
last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode">GPTBigCode </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese">GPTSAN Japanese </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gpt-sw3">GPTSw3 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/herbert">HerBERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/ibert">I-BERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/jukebox">Jukebox </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/led">LED </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/llama">LLaMA </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/llama2">Llama2 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/longformer">Longformer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/longt5">LongT5 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/luke">LUKE </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/m2m_100">M2M100 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/marian">MarianMT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/markuplm">MarkupLM </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" 
href="/docs/transformers/v4.34.0/en/model_doc/mbart">MBart and MBart-50 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mega">MEGA </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/megatron-bert">MegatronBERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2">MegatronGPT2 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mistral">Mistral </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mluke">mLUKE </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mobilebert">MobileBERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mpnet">MPNet </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mpt">MPT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mra">MRA </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mt5">MT5 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mvp">MVP </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/nezha">NEZHA </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/nllb">NLLB </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/nllb-moe">NLLB-MoE </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/nystromformer">Nyströmformer </a><a data-sveltekit-reload="" 
class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/open-llama">Open-Llama </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/opt">OPT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/pegasus">Pegasus </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/pegasus_x">PEGASUS-X </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/persimmon">Persimmon </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/phobert">PhoBERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/plbart">PLBart </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/prophetnet">ProphetNet </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/qdqbert">QDQBert </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/rag">RAG </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/realm">REALM </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/reformer">Reformer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/rembert">RemBERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/retribert">RetriBERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/roberta">RoBERTa </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 
hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm">RoBERTa-PreLayerNorm </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/roc_bert">RoCBert </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/roformer">RoFormer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/rwkv">RWKV </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/splinter">Splinter </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/squeezebert">SqueezeBERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/switch_transformers">SwitchTransformers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/t5">T5 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/t5v1.1">T5v1.1 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/tapex">TAPEX </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/transfo-xl">Transformer XL </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/ul2">UL2 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/umt5">UMT5 </a><a data-sveltekit-reload="" class="rounded-xl bg-gradient-to-br from-black to-gray-900 py-1 pr-2 pl-2 text-white first:mt-1 last:mb-4 dark:from-gray-800 dark:to-gray-900 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xmod">X-MOD </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xglm">XGLM </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black 
dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm">XLM </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet">XLM-ProphetNet </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta">XLM-RoBERTa </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl">XLM-RoBERTa-XL </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm-v">XLM-V </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlnet">XLNet </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/yoso">YOSO </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Vision models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Reinforcement learning models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Time series models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute 
after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Graph models</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Internal Helpers</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/modeling_utils">Custom Layers and Utilities </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/pipelines_utils">Utilities for pipelines </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/tokenization_utils">Utilities for Tokenizers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/trainer_utils">Utilities for Trainer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/generation_utils">Utilities for Generation </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/image_processing_utils">Utilities for Image Processors </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/audio_utils">Utilities for Audio processing </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/file_utils">General Utilities </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/time_series_utils">Utilities for Time Series </a> </div> </div></nav></div></div></div> <div class="z-1 min-w-0 flex-1"> <div class="px-6 pt-6 md:px-12 md:pt-16 md:pb-16"><div class="max-w-4xl mx-auto mb-10"><div class="relative overflow-hidden rounded-xl bg-gradient-to-br from-orange-300/10 py-5 px-4 ring-1 ring-orange-100/70 md:px-6 md:py-8"><img alt="Hugging Face's logo" class="absolute -right-6 -bottom-6 w-28 -rotate-45 md:hidden" src="/front/assets/huggingface_logo-noborder.svg"> <div class="mb-2 text-2xl font-bold dark:text-gray-200 md:mb-0">Join the Hugging Face community</div> <p class="mb-4 text-lg text-gray-400 dark:text-gray-300 md:mb-8">and get access to the augmented 
documentation experience </p> <div class="mb-8 hidden space-y-4 md:block xl:flex xl:space-y-0 xl:space-x-6"><div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-indigo-100 to-indigo-100/20 dark:to-indigo-100"><svg class="text-indigo-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Collaborate on models, datasets and Spaces </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-orange-100 to-orange-100/20 dark:to-orange-50"><svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" class="text-xl text-yellow-400" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M11 15H6l7-14v8h5l-7 14v-8z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Faster examples with accelerated inference </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-gray-500/10 to-gray-500/5"><svg class="text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M14.9804 3C14.9217 3.0002 14.8631 3.00555 14.8054 3.016C11.622 3.58252 8.76073 5.30669 6.77248 7.85653C4.78422 10.4064 3.80955 13.6016 4.03612 16.8271C4.26268 20.0525 5.67447 23.0801 7.99967 25.327C10.3249 27.5738 13.3991 28.8811 16.6304 28.997C16.7944 29.003 16.9584 28.997 17.1204 28.997C19.2193 28.9984 21.2877 28.4943 23.1507 27.5274C25.0137 26.5605 26.6164 25.1592 27.8234 23.442C27.9212 23.294 27.9783 23.1229 27.9889 22.9458C27.9995 22.7687 27.9633 22.592 27.884 22.4333C27.8046 22.2747 27.6848 22.1397 27.5367 22.0421C27.3887 21.9444 27.2175 21.8875 27.0404 21.877C25.0426 21.7017 23.112 21.0693 21.3976 20.0288C19.6832 18.9884 18.231 17.5676 17.1533 15.8764C16.0756 14.1852 15.4011 12.2688 15.1822 10.2754C14.9632 8.28193 15.2055 6.26484 15.8904 4.38C15.9486 4.22913 15.97 4.06652 15.9527 3.90572C15.9354 3.74492 15.8799 3.59059 15.7909 3.45557C15.7019 3.32055 15.5819 3.20877 15.4409 3.12952C15.2999 3.05028 15.142 3.00587 14.9804 3Z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Switch between documentation themes </div></div></div> <div class="flex items-center space-x-2.5"><a href="/join"><button class="rounded-lg bg-white 
bg-gradient-to-br from-gray-100/20 to-gray-200/60 py-1.5 px-5 font-semibold text-gray-700 shadow-sm ring-1 ring-gray-300/60 hover:to-gray-100/70 hover:ring-gray-300/30 active:shadow-inner">Sign Up</button></a> <p class="text-gray-500 dark:text-gray-300">to get started</p></div></div></div> <div class="prose-doc prose relative mx-auto max-w-4xl break-words"><!-- HTML_TAG_START --> <link href="/docs/transformers/v4.34.0/en/_app/immutable/assets/0.e3b0c442.css" rel="modulepreload"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/entry/start.c2db227a.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/scheduler.9bc65507.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/singletons.e3057404.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/index.3b203c72.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/paths.e7de6301.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/entry/app.879d9b87.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/index.78c82d43.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/nodes/0.242aaaff.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/each.e59479a4.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/nodes/285.4d17146d.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/Tip.87d55b76.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/Docstring.4e7352e2.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/globals.7f7f1b26.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/IconCopyLink.bedaa44d.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/CodeBlock.73e038be.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/ExampleCodeBlock.872b014d.js"><!-- HEAD_svelte-1phssyn_START --><meta name="hf:doc:metadata" content="{&quot;local&quot;:&quot;xmod&quot;,&quot;sections&quot;:[{&quot;local&quot;:&quot;overview&quot;,&quot;title&quot;:&quot;Overview&quot;},{&quot;local&quot;:&quot;adapter-usage&quot;,&quot;sections&quot;:[{&quot;local&quot;:&quot;input-language&quot;,&quot;title&quot;:&quot;Input language&quot;},{&quot;local&quot;:&quot;finetuning&quot;,&quot;title&quot;:&quot;Fine-tuning&quot;},{&quot;local&quot;:&quot;crosslingual-transfer&quot;,&quot;title&quot;:&quot;Cross-lingual transfer&quot;}],&quot;title&quot;:&quot;Adapter 
# X-MOD

## Overview

The X-MOD model was proposed in [Lifting the Curse of Multilinguality by Pre-training Modular Transformers](http://dx.doi.org/10.18653/v1/2022.naacl-main.255) by Jonas Pfeiffer, Naman Goyal, Xi Lin, Xian Li, James Cross, Sebastian Riedel, and Mikel Artetxe.
X-MOD extends multilingual masked language models like [XLM-R](xlm-roberta) to include language-specific modular components (*language adapters*) during pre-training. For fine-tuning, the language adapters in each transformer layer are frozen.

The abstract from the paper is the following:

_Multilingual pre-trained models are known to suffer from the curse of multilinguality, which causes per-language performance to drop as they cover more languages. We address this issue by introducing language-specific modules, which allows us to grow the total capacity of the model, while keeping the total number of trainable parameters per language constant. In contrast with prior work that learns language-specific components post-hoc, we pre-train the modules of our Cross-lingual Modular (X-MOD) models from the start. Our experiments on natural language inference, named entity recognition and question answering show that our approach not only mitigates the negative interference between languages, but also enables positive transfer, resulting in improved monolingual and cross-lingual performance. Furthermore, our approach enables adding languages post-hoc with no measurable drop in performance, no longer limiting the model usage to the set of pre-trained languages._

Tips:

- X-MOD is similar to [XLM-R](xlm-roberta), but a difference is that the input language needs to be specified so that the correct language adapter can be activated.
- The main models – base and large – have adapters for 81 languages.

This model was contributed by [jvamvas](https://huggingface.co/jvamvas).
The original code can be found [here](https://github.com/facebookresearch/fairseq/tree/58cc6cca18f15e6d56e3f60c959fe4f878960a60/fairseq/models/xmod) and the original documentation is found [here](https://github.com/facebookresearch/fairseq/tree/58cc6cca18f15e6d56e3f60c959fe4f878960a60/examples/xmod).

## Adapter Usage

### Input language

There are two ways to specify the input language:

1. By setting a default language before using the model:

```python
from transformers import XmodModel

model = XmodModel.from_pretrained("facebook/xmod-base")
model.set_default_language("en_XX")
```

2. By explicitly passing the index of the language adapter for each sample:

```python
import torch

input_ids = torch.tensor(
    [
        [0, 581, 10269, 83, 99942, 136, 60742, 23, 70, 80583, 18276, 2],
        [0, 1310, 49083, 443, 269, 71, 5486, 165, 60429, 660, 23, 2],
    ]
)
lang_ids = torch.LongTensor(
    [
        0,  # en_XX
        8,  # de_DE
    ]
)
output = model(input_ids, lang_ids=lang_ids)
```
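In the second form, each entry of `lang_ids` is the index of a language adapter for one sample. The snippet below is a rough end-to-end sketch, assuming that the adapter indices follow the order of `config.languages` (consistent with the `en_XX`/`de_DE` comments above) and that `AutoTokenizer` resolves to the checkpoint's XLM-R tokenizer:

```python
import torch
from transformers import AutoTokenizer, XmodModel

# assumption: the checkpoint ships XLM-R tokenizer files
tokenizer = AutoTokenizer.from_pretrained("facebook/xmod-base")
model = XmodModel.from_pretrained("facebook/xmod-base")

sentences = ["Hello, how are you?", "Hallo, wie geht es dir?"]
inputs = tokenizer(sentences, padding=True, return_tensors="pt")

# one adapter index per sample, looked up from the checkpoint's language list
languages = list(model.config.languages)
lang_ids = torch.LongTensor([languages.index("en_XX"), languages.index("de_DE")])

outputs = model(**inputs, lang_ids=lang_ids)
print(outputs.last_hidden_state.shape)  # (2, sequence_length, hidden_size)
```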
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-nl2q21">Fine-tuning</span></h3> The paper recommends that the embedding layer and the language adapters are frozen during fine-tuning. A method for doing this is provided: <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START -->model.freeze_embeddings_and_language_adapters() <span class="hljs-comment"># Fine-tune the model ...</span><!-- HTML_TAG_END --></pre></div> <h3 class="relative group"><a id="crosslingual-transfer" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#crosslingual-transfer"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1j5nfgn">Cross-lingual transfer</span></h3> After fine-tuning, zero-shot cross-lingual transfer can be tested by activating the language adapter of the target language: <div class="code-block relative"><div class="absolute top-2.5 
right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START -->model.set_default_language(<span class="hljs-string">"de_DE"</span>) <span class="hljs-comment"># Evaluate the model on German examples ...</span><!-- HTML_TAG_END --></pre></div> <h2 class="relative group"><a id="resources" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#resources"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-w4zzv6">Resources</span></h2> <ul data-svelte-h="svelte-p1b16m"><li><a href="../tasks/sequence_classification">Text classification task guide</a></li> <li><a href="../tasks/token_classification">Token classification task guide</a></li> <li><a href="../tasks/question_answering">Question answering task guide</a></li> <li><a href="../tasks/language_modeling">Causal language modeling task guide</a></li> <li><a href="../tasks/masked_language_modeling">Masked language modeling task guide</a></li> <li><a href="../tasks/multiple_choice">Multiple choice task guide</a></li></ul> <h2 class="relative group"><a id="transformers.XmodConfig" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodConfig"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 
## Resources

- [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)
- [Causal language modeling task guide](../tasks/language_modeling)
- [Masked language modeling task guide](../tasks/masked_language_modeling)
- [Multiple choice task guide](../tasks/multiple_choice)

## XmodConfig

### class transformers.XmodConfig

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xmod/configuration_xmod.py#L40)

`( vocab_size = 30522, hidden_size = 768, num_hidden_layers = 12, num_attention_heads = 12, intermediate_size = 3072, hidden_act = 'gelu', hidden_dropout_prob = 0.1, attention_probs_dropout_prob = 0.1, max_position_embeddings = 512, type_vocab_size = 2, initializer_range = 0.02, layer_norm_eps = 1e-12, pad_token_id = 1, bos_token_id = 0, eos_token_id = 2, position_embedding_type = 'absolute', use_cache = True, classifier_dropout = None, pre_norm = False, adapter_reduction_factor = 2, adapter_layer_norm = False, adapter_reuse_layer_norm = True, ln_before_adapter = True, languages = ('en_XX',), default_language = None, **kwargs )`

Parameters:

- **vocab_size** (`int`, *optional*, defaults to 30522) — Vocabulary size of the X-MOD model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [XmodModel](/docs/transformers/v4.34.0/en/model_doc/xmod#transformers.XmodModel).
- **hidden_size** (`int`, *optional*, defaults to 768) — Dimensionality of the encoder layers and the pooler layer.
- **num_hidden_layers** (`int`, *optional*, defaults to 12) — Number of hidden layers in the Transformer encoder.
- **num_attention_heads** (`int`, *optional*, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder.
- **intermediate_size** (`int`, *optional*, defaults to 3072) — Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder.
- **hidden_act** (`str` or `Callable`, *optional*, defaults to `"gelu"`) — The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"silu"` and `"gelu_new"` are supported.
- **hidden_dropout_prob** (`float`, *optional*, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
- **attention_probs_dropout_prob** (`float`, *optional*, defaults to 0.1) — The dropout ratio for the attention probabilities.
- **max_position_embeddings** (`int`, *optional*, defaults to 512) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
- **type_vocab_size** (`int`, *optional*, defaults to 2) — The vocabulary size of the `token_type_ids` passed when calling [XmodModel](/docs/transformers/v4.34.0/en/model_doc/xmod#transformers.XmodModel).
- **initializer_range** (`float`, *optional*, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- **layer_norm_eps** (`float`, *optional*, defaults to 1e-12) — The epsilon used by the layer normalization layers.
- **position_embedding_type** (`str`, *optional*, defaults to `"absolute"`) — Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`. For positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to [Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155). For more information on `"relative_key_query"`, please refer to *Method 4* in [Improve Transformer Models with Better Relative Position Embeddings (Huang et al.)](https://arxiv.org/abs/2009.13658).
- **is_decoder** (`bool`, *optional*, defaults to `False`) — Whether the model is used as a decoder or not. If `False`, the model is used as an encoder.
- **use_cache** (`bool`, *optional*, defaults to `True`) — Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`.
- **classifier_dropout** (`float`, *optional*) — The dropout ratio for the classification head.
- **pre_norm** (`bool`, *optional*, defaults to `False`) — Whether to apply layer normalization before each block.
- **adapter_reduction_factor** (`int` or `float`, *optional*, defaults to 2) — The factor by which the dimensionality of the adapter is reduced relative to `hidden_size`.
- **adapter_layer_norm** (`bool`, *optional*, defaults to `False`) — Whether to apply a new layer normalization before the adapter modules (shared across all adapters).
- **adapter_reuse_layer_norm** (`bool`, *optional*, defaults to `True`) — Whether to reuse the second layer normalization and apply it before the adapter modules as well.
- **ln_before_adapter** (`bool`, *optional*, defaults to `True`) — Whether to apply the layer normalization before the residual connection around the adapter module.
- **languages** (`Iterable[str]`, *optional*, defaults to `["en_XX"]`) — An iterable of language codes for which adapter modules should be initialized.
- **default_language** (`str`, *optional*) — Language code of a default language. It will be assumed that the input is in this language if no language codes are explicitly passed to the forward method.
1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>default_language</strong> (<code>str</code>, <em>optional</em>) — Language code of a default language. It will be assumed that the input is in this language if no language codes are explicitly passed to the forward method.<!-- HTML_TAG_END --> </span></span> </li></ul> </div></div> <p data-svelte-h="svelte-i3c9uw">This is the configuration class to store the configuration of a <a href="/docs/transformers/v4.34.0/en/model_doc/xmod#transformers.XmodModel">XmodModel</a>. It is used to instantiate an X-MOD model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the <a href="https://huggingface.co/facebook/xmod-base" rel="nofollow">facebook/xmod-base</a> architecture.</p> <p data-svelte-h="svelte-10kqkkl">Configuration objects inherit from <a href="/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig">PretrainedConfig</a> and can be used to control the model outputs. Read the documentation from <a href="/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig">PretrainedConfig</a> for more information.</p> <div class="relative group rounded-md"><a id="transformers.XmodConfig.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodConfig.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-kvfsh7">Examples:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black 
```python
>>> from transformers import XmodConfig, XmodModel

>>> # Initializing an X-MOD facebook/xmod-base style configuration
>>> configuration = XmodConfig()

>>> # Initializing a model (with random weights) from the facebook/xmod-base style configuration
>>> model = XmodModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
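The X-MOD-specific options listed above (the adapter settings, `languages`, and `default_language`) can be set like any other configuration argument. The following is a minimal sketch; the two language codes and the reduction factor are illustrative values, not recommended settings:

```python
>>> from transformers import XmodConfig, XmodModel

>>> # Illustrative values only: two language adapters and an explicit default language
>>> configuration = XmodConfig(
...     languages=["en_XX", "de_DE"],
...     default_language="en_XX",
...     adapter_reduction_factor=2,
... )

>>> # One adapter module per listed language is created inside each transformer layer
>>> model = XmodModel(configuration)
```

Because `default_language` is set, the forward pass falls back to the `"en_XX"` adapter whenever no language ids are passed.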
## XmodModel

### class transformers.XmodModel

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xmod/modeling_xmod.py#L790)

( config, add_pooling_layer = True )

Parameters

- **config** ([XmodConfig](/docs/transformers/v4.34.0/en/model_doc/xmod#transformers.XmodConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

The bare X-MOD Model transformer outputting raw hidden-states without any specific head on top.

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of cross-attention is added between the self-attention layers, following the architecture described in [Attention is all you need](https://arxiv.org/abs/1706.03762) by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.

To behave as a decoder, the model needs to be initialized with the `is_decoder` argument of the configuration set to `True`. To be used in a Seq2Seq model, the model needs to be initialized with both the `is_decoder` argument and `add_cross_attention` set to `True`; an `encoder_hidden_states` is then expected as an input to the forward pass.
#### forward

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xmod/modeling_xmod.py#L836)

( input_ids: typing.Optional[torch.Tensor] = None lang_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None encoder_hidden_states: typing.Optional[torch.Tensor] = None encoder_attention_mask: typing.Optional[torch.Tensor] = None past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None )

Parameters
href="#transformers.XmodModel.forward.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>input_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>) — Indices of input sequence tokens in the vocabulary.<p></p> <p>Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p> <p><a href="../glossary#input-ids">What are input IDs?</a><!-- HTML_TAG_END --> </p></span></span></li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodModel.forward.lang_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodModel.forward.lang_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>lang_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Indices of the language adapters that should be activated for each sample, respectively. 
Default: the index that corresponds to <code>self.config.default_language</code>.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodModel.forward.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodModel.forward.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>attention_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a><!-- HTML_TAG_END --> </p></span></span></li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodModel.forward.token_type_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodModel.forward.token_type_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>token_type_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Segment token indices to indicate first and second portions of the inputs. 
Indices are selected in <code>[0, 1]</code>:<p></p> <ul> <li>0 corresponds to a <em>sentence A</em> token,</li> <li>1 corresponds to a <em>sentence B</em> token.</li> </ul> <p><a href="../glossary#token-type-ids">What are token type IDs?</a><!-- HTML_TAG_END --> </p></span></span></li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodModel.forward.position_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodModel.forward.position_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>position_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range <code>[0, config.max_position_embeddings - 1]</code>.<p></p> <p><a href="../glossary#position-ids">What are position IDs?</a><!-- HTML_TAG_END --> </p></span></span></li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodModel.forward.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodModel.forward.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>head_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(num_heads,)</code> or <code>(num_layers, num_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the self-attention modules. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul><!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodModel.forward.inputs_embeds" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodModel.forward.inputs_embeds"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>inputs_embeds</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>, <em>optional</em>) — Optionally, instead of passing <code>input_ids</code> you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert <code>input_ids</code> indices into associated vectors than the model’s internal embedding lookup matrix.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodModel.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodModel.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. 
See <code>attentions</code> under returned tensors for more detail.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodModel.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodModel.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. See <code>hidden_states</code> under returned tensors for more detail.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodModel.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodModel.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodModel.forward.encoder_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodModel.forward.encoder_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" 
preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>encoder_hidden_states</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>, <em>optional</em>) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodModel.forward.encoder_attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodModel.forward.encoder_attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>encoder_attention_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul><!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodModel.forward.past_key_values" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodModel.forward.past_key_values"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>past_key_values</strong> (<code>tuple(tuple(torch.FloatTensor))</code> of length <code>config.n_layers</code> with each tuple having 4 tensors —<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodModel.forward.of" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodModel.forward.of"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>of</strong> shape <code>(batch_size, num_heads, sequence_length - 1, embed_size_per_head)</code>) — Contains precomputed key and value hidden states of the attention blocks. 
Can be used to speed up decoding.<p></p> <p>If <code>past_key_values</code> are used, the user can optionally input only the last <code>decoder_input_ids</code> (those that don’t have their past key value states given to this model) of shape <code>(batch_size, 1)</code> instead of all <code>decoder_input_ids</code> of shape <code>(batch_size, sequence_length)</code>.<!-- HTML_TAG_END --> </p></span></span></li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodModel.forward.use_cache" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodModel.forward.use_cache"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>use_cache</strong> (<code>bool</code>, <em>optional</em>) — If set to <code>True</code>, <code>past_key_values</code> key value states are returned and can be used to speed up decoding (see <code>past_key_values</code>).<!-- HTML_TAG_END --> </span></span> </li></ul> </div></div> <p data-svelte-h="svelte-jodpy1">The <a href="/docs/transformers/v4.34.0/en/model_doc/xmod#transformers.XmodModel">XmodModel</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div></div></div> <h2 class="relative group"><a id="transformers.XmodForCausalLM" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodForCausalLM"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 
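The only X-MOD-specific argument in this signature is `lang_ids`, which selects the language adapter per sample. A minimal usage sketch follows; the checkpoint and language code are assumptions for illustration, and `set_default_language()` is the helper X-MOD models expose for fixing the adapter when `lang_ids` is not passed (alternatively, pass `lang_ids` explicitly as documented above):

```python
>>> import torch
>>> from transformers import AutoTokenizer, XmodModel

>>> # Assumed checkpoint and language code, for illustration only
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/xmod-base")
>>> model = XmodModel.from_pretrained("facebook/xmod-base")

>>> # Fix the language adapter used when forward() receives no lang_ids
>>> model.set_default_language("en_XX")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> last_hidden_state = outputs.last_hidden_state  # (batch_size, sequence_length, hidden_size)
```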
## XmodForCausalLM

### class transformers.XmodForCausalLM

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xmod/modeling_xmod.py#L982)

( config )

Parameters
class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodForCausalLM.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodForCausalLM.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/xmod#transformers.XmodConfig">XmodConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.<!-- HTML_TAG_END --> </span></span> </li></ul> </div></div> <p data-svelte-h="svelte-dygsvo">X-MOD Model with a <code>language modeling</code> head on top for CLM fine-tuning.</p> <p data-svelte-h="svelte-hmtw9k">This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel">PreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)</p> <p data-svelte-h="svelte-hswkmf">This model is also a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> subclass. 
#### forward

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xmod/modeling_xmod.py#L1006)

( input_ids: typing.Optional[torch.LongTensor] = None, lang_ids: typing.Optional[torch.LongTensor] = None, attention_mask: typing.Optional[torch.FloatTensor] = None, token_type_ids: typing.Optional[torch.LongTensor] = None, position_ids: typing.Optional[torch.LongTensor] = None, head_mask: typing.Optional[torch.FloatTensor] = None, inputs_embeds: typing.Optional[torch.FloatTensor] = None, encoder_hidden_states: typing.Optional[torch.FloatTensor] = None, encoder_attention_mask: typing.Optional[torch.FloatTensor] = None, labels: typing.Optional[torch.LongTensor] = None, past_key_values: typing.Tuple[typing.Tuple[torch.FloatTensor]] = None, use_cache: typing.Optional[bool] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None )

Parameters

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.__call__()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details. [What are input IDs?](../glossary#input-ids)
- **lang_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Indices of the language adapters that should be activated for each sample, respectively. Default: the index that corresponds to `self.config.default_language`.
- **attention_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask)
- **token_type_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: 0 corresponds to a *sentence A* token, 1 corresponds to a *sentence B* token. [What are token type IDs?](../glossary#token-type-ids)
- **position_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids)
- **head_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: 1 indicates the head is **not masked**, 0 indicates the head is **masked**.
- **inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.
- **encoder_hidden_states** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder.
- **encoder_attention_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**.
- **labels** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in `[-100, 0, ..., config.vocab_size]` (see the `input_ids` docstring). Tokens with indices set to `-100` are ignored (masked); the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
- **past_key_values** (`tuple(tuple(torch.FloatTensor))` of length `config.n_layers`, with each tuple having 4 tensors of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)`.
- **use_cache** (`bool`, *optional*) — If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`).

Returns: `transformers.modeling_outputs.CausalLMOutputWithCrossAttentions` or `tuple(torch.FloatTensor)`

The [XmodForCausalLM](/docs/transformers/v4.34.0/en/model_doc/xmod#transformers.XmodForCausalLM) forward method overrides the `__call__` special method.

Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
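As an illustration of the arguments above, the following sketch runs a single forward pass in which `labels` is simply set to `input_ids` (the one-position shift for next-token prediction happens inside the model) and `use_cache=True` requests the `past_key_values` used to speed up incremental decoding. It again assumes the `facebook/xmod-base` checkpoint and its `en_XX` adapter.

```python
import torch
from transformers import AutoTokenizer, XmodConfig, XmodForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/xmod-base")
config = XmodConfig.from_pretrained("facebook/xmod-base")
config.is_decoder = True  # causal attention so the left-to-right LM loss is well defined
model = XmodForCausalLM.from_pretrained("facebook/xmod-base", config=config)
model.set_default_language("en_XX")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"], use_cache=True)

print(outputs.loss)                  # left-to-right language modeling loss
print(outputs.logits.shape)          # (batch_size, sequence_length, vocab_size)
print(len(outputs.past_key_values))  # one (key, value) tuple per layer
```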
## XmodForMaskedLM

### class transformers.XmodForMaskedLM

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xmod/modeling_xmod.py#L1141)

( config )

Parameters

- **config** ([XmodConfig](/docs/transformers/v4.34.0/en/model_doc/xmod#transformers.XmodConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

X-MOD Model with a `language modeling` head on top.

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.).

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XmodForMaskedLM.forward"><!-- HTML_TAG_START --><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>forward</span></h4><!-- HTML_TAG_END --> <a id="transformers.XmodForMaskedLM.forward" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XmodForMaskedLM.forward"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xmod/modeling_xmod.py#L1168" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white 
dark:hover:bg-white dark:hover:text-black">lang_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_type_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">position_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_mask<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">inputs_embeds<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_hidden_states<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_attention_mask<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">labels<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodForMaskedLM.forward.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodForMaskedLM.forward.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" 
preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>input_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>) — Indices of input sequence tokens in the vocabulary.<p></p> <p>Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p> <p><a href="../glossary#input-ids">What are input IDs?</a><!-- HTML_TAG_END --> </p></span></span></li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodForMaskedLM.forward.lang_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodForMaskedLM.forward.lang_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>lang_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Indices of the language adapters that should be activated for each sample, respectively. 
Default: the index that corresponds to <code>self.config.default_language</code>.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodForMaskedLM.forward.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodForMaskedLM.forward.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>attention_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a><!-- HTML_TAG_END --> </p></span></span></li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodForMaskedLM.forward.token_type_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodForMaskedLM.forward.token_type_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>token_type_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Segment token indices to indicate first and second portions of the inputs. 
Indices are selected in `[0, 1]`:

  - 0 corresponds to a *sentence A* token,
  - 1 corresponds to a *sentence B* token.

  [What are token type IDs?](../glossary#token-type-ids)

- **position_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids)
- **head_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) — Mask to nullify selected heads of the self-attention modules. Mask values are selected in `[0, 1]`: 1 indicates the head is **not masked**, 0 indicates the head is **masked**.
- **inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids`, you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.
- **labels** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ..., config.vocab_size]` (see the `input_ids` docstring). Tokens with indices set to `-100` are ignored (masked); the loss is only computed for tokens with labels in `[0, ..., config.vocab_size]`.
- **kwargs** (`Dict[str, any]`, *optional*, defaults to `{}`) — Used to hide legacy arguments that have been deprecated.

The [XmodForMaskedLM](/docs/transformers/v4.34.0/en/model_doc/xmod#transformers.XmodForMaskedLM) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
href="#transformers.XmodForSequenceClassification"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1wvsjn1">XmodForSequenceClassification</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XmodForSequenceClassification"><!-- HTML_TAG_START --><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">XmodForSequenceClassification</span></span></h3><!-- HTML_TAG_END --> <a id="transformers.XmodForSequenceClassification" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XmodForSequenceClassification"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xmod/modeling_xmod.py#L1268" target="_blank"><span 
data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60"></span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodForSequenceClassification.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodForSequenceClassification.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/xmod#transformers.XmodConfig">XmodConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.<!-- HTML_TAG_END --> </span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1vluxlz">X-MOD Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks.</p> <p data-svelte-h="svelte-hmtw9k">This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel">PreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)</p> <p data-svelte-h="svelte-hswkmf">This model is also a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> subclass. 
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XmodForSequenceClassification.forward"><!-- HTML_TAG_START --><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>forward</span></h4><!-- HTML_TAG_END --> <a id="transformers.XmodForSequenceClassification.forward" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XmodForSequenceClassification.forward"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xmod/modeling_xmod.py#L1281" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded 
hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">lang_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_type_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">position_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_mask<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">inputs_embeds<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">labels<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodForSequenceClassification.forward.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodForSequenceClassification.forward.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 
- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and `PreTrainedTokenizer.__call__()` for details. [What are input IDs?](../glossary#input-ids)
- **lang_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Indices of the language adapters that should be activated for each sample, respectively. Default: the index that corresponds to `self.config.default_language`.
- **attention_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values are selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask)
- **token_type_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: 0 corresponds to a *sentence A* token, 1 corresponds to a *sentence B* token. [What are token type IDs?](../glossary#token-type-ids)
- **position_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids)
- **head_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) — Mask to nullify selected heads of the self-attention modules. Mask values are selected in `[0, 1]`: 1 indicates the head is **not masked**, 0 indicates the head is **masked**.
- **inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids`, you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.
- **labels** (`torch.LongTensor` of shape `(batch_size,)`, *optional*) — Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., config.num_labels - 1]`. If `config.num_labels == 1`, a regression loss is computed (Mean-Square loss); if `config.num_labels > 1`, a classification loss is computed (Cross-Entropy).

The [XmodForSequenceClassification](/docs/transformers/v4.34.0/en/model_doc/xmod#transformers.XmodForSequenceClassification) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
## XmodForMultipleChoice

### class transformers.XmodForMultipleChoice

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xmod/modeling_xmod.py#L1361)

( config )

Parameters

- **config** ([XmodConfig](/docs/transformers/v4.34.0/en/model_doc/xmod#transformers.XmodConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

X-MOD Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax), e.g. for RocStories/SWAG tasks.

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

#### forward

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xmod/modeling_xmod.py#L1373)

( input_ids: typing.Optional[torch.LongTensor] = None, lang_ids: typing.Optional[torch.LongTensor] = None, token_type_ids: typing.Optional[torch.LongTensor] = None, attention_mask: typing.Optional[torch.FloatTensor] = None, labels: typing.Optional[torch.LongTensor] = None, position_ids: typing.Optional[torch.LongTensor] = None, head_mask: typing.Optional[torch.FloatTensor] = None, inputs_embeds: typing.Optional[torch.FloatTensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None )

Parameters
- **input_ids** (`torch.LongTensor` of shape `(batch_size, num_choices, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and `PreTrainedTokenizer.__call__()` for details. [What are input IDs?](../glossary#input-ids)
- **lang_ids** (`torch.LongTensor` of shape `(batch_size, num_choices, sequence_length)`, *optional*) — Indices of the language adapters that should be activated for each sample, respectively. Default: the index that corresponds to `self.config.default_language`.
- **attention_mask** (`torch.FloatTensor` of shape `(batch_size, num_choices, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values are selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask)
- **token_type_ids** (`torch.LongTensor` of shape `(batch_size, num_choices, sequence_length)`, *optional*) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: 0 corresponds to a *sentence A* token, 1 corresponds to a *sentence B* token. [What are token type IDs?](../glossary#token-type-ids)
- **position_ids** (`torch.LongTensor` of shape `(batch_size, num_choices, sequence_length)`, *optional*) — Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids)
- **head_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) — Mask to nullify selected heads of the self-attention modules. Mask values are selected in `[0, 1]`: 1 indicates the head is **not masked**, 0 indicates the head is **masked**.
- **inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, num_choices, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids`, you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers.
See <code>attentions</code> under returned tensors for more detail.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodForMultipleChoice.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodForMultipleChoice.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. See <code>hidden_states</code> under returned tensors for more detail.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodForMultipleChoice.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodForMultipleChoice.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodForMultipleChoice.forward.labels" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodForMultipleChoice.forward.labels"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" 
width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>labels</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size,)</code>, <em>optional</em>) — Labels for computing the multiple choice classification loss. Indices should be in <code>[0, ..., num_choices-1]</code> where <code>num_choices</code> is the size of the second dimension of the input tensors. (See <code>input_ids</code> above)<!-- HTML_TAG_END --> </span></span> </li></ul> </div></div> <p data-svelte-h="svelte-14d82ul">The <a href="/docs/transformers/v4.34.0/en/model_doc/xmod#transformers.XmodForMultipleChoice">XmodForMultipleChoice</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div></div></div> <h2 class="relative group"><a id="transformers.XmodForTokenClassification" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodForTokenClassification"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1d0fkd9">XmodForTokenClassification</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XmodForTokenClassification"><!-- HTML_TAG_START --><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" 
xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">XmodForTokenClassification</span></span></h3><!-- HTML_TAG_END --> <a id="transformers.XmodForTokenClassification" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XmodForTokenClassification"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xmod/modeling_xmod.py#L1450" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60"></span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodForTokenClassification.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodForTokenClassification.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 
1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/xmod#transformers.XmodConfig">XmodConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.<!-- HTML_TAG_END --> </span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1lcpzd2">X-MOD Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks.</p> <p data-svelte-h="svelte-hmtw9k">This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel">PreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)</p> <p data-svelte-h="svelte-hswkmf">This model is also a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XmodForTokenClassification.forward"><!-- HTML_TAG_START --><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>forward</span></h4><!-- HTML_TAG_END 
--> <a id="transformers.XmodForTokenClassification.forward" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XmodForTokenClassification.forward"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xmod/modeling_xmod.py#L1466" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">lang_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_type_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">position_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_mask<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">inputs_embeds<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">labels<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span> </span><span class="comma 
cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodForTokenClassification.forward.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodForTokenClassification.forward.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>input_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>) — Indices of input sequence tokens in the vocabulary.<p></p> <p>Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. 
See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p> <p><a href="../glossary#input-ids">What are input IDs?</a><!-- HTML_TAG_END --> </p></span></span></li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodForTokenClassification.forward.lang_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodForTokenClassification.forward.lang_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>lang_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Indices of the language adapters that should be activated for each sample, respectively. Default: the index that corresponds to <code>self.config.default_language</code>.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodForTokenClassification.forward.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodForTokenClassification.forward.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>attention_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a><!-- HTML_TAG_END --> </p></span></span></li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodForTokenClassification.forward.token_type_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodForTokenClassification.forward.token_type_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>token_type_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in <code>[0, 1]</code>:<p></p> <ul> <li>0 corresponds to a <em>sentence A</em> token,</li> <li>1 corresponds to a <em>sentence B</em> token.</li> </ul> <p><a href="../glossary#token-type-ids">What are token type IDs?</a><!-- HTML_TAG_END --> </p></span></span></li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodForTokenClassification.forward.position_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodForTokenClassification.forward.position_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>position_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Indices of positions of each input sequence tokens in the position embeddings. 
Selected in the range <code>[0, config.max_position_embeddings - 1]</code>.<p></p> <p><a href="../glossary#position-ids">What are position IDs?</a><!-- HTML_TAG_END --> </p></span></span></li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodForTokenClassification.forward.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodForTokenClassification.forward.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>head_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(num_heads,)</code> or <code>(num_layers, num_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the self-attention modules. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul><!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodForTokenClassification.forward.inputs_embeds" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodForTokenClassification.forward.inputs_embeds"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>inputs_embeds</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>, <em>optional</em>) — Optionally, instead of passing <code>input_ids</code> you can choose to directly pass an embedded representation. 
This is useful if you want more control over how to convert <code>input_ids</code> indices into associated vectors than the model’s internal embedding lookup matrix.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodForTokenClassification.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodForTokenClassification.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. See <code>attentions</code> under returned tensors for more detail.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodForTokenClassification.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodForTokenClassification.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. 
See <code>hidden_states</code> under returned tensors for more detail.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodForTokenClassification.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodForTokenClassification.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodForTokenClassification.forward.labels" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodForTokenClassification.forward.labels"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>labels</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Labels for computing the token classification loss. 
Indices should be in <code>[0, ..., config.num_labels - 1]</code>.<!-- HTML_TAG_END --> </span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1e6y1e9">The <a href="/docs/transformers/v4.34.0/en/model_doc/xmod#transformers.XmodForTokenClassification">XmodForTokenClassification</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div></div></div> <h2 class="relative group"><a id="transformers.XmodForQuestionAnswering" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodForQuestionAnswering"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-x04g8o">XmodForQuestionAnswering</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XmodForQuestionAnswering"><!-- HTML_TAG_START --><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">XmodForQuestionAnswering</span></span></h3><!-- HTML_TAG_END --> <a 
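Analogously, the sketch below runs a token-classification forward pass and shows where the `(batch_size, sequence_length)` labels fit in. The `facebook/xmod-base` checkpoint, `num_labels=5`, the `"en_XX"` adapter, and the all-zero labels are illustrative assumptions; the classification head is freshly initialised until fine-tuned.

```python
import torch
from transformers import AutoTokenizer, XmodForTokenClassification

# Assumed backbone checkpoint; the token-classification head is randomly initialised.
tokenizer = AutoTokenizer.from_pretrained("facebook/xmod-base")
model = XmodForTokenClassification.from_pretrained("facebook/xmod-base", num_labels=5)
model.set_default_language("en_XX")  # activate the English adapter instead of passing lang_ids

inputs = tokenizer("HuggingFace is based in New York City.", return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch_size, sequence_length, num_labels)
predicted_class_ids = logits.argmax(dim=-1)

# Passing labels of shape (batch_size, sequence_length) additionally returns the loss.
labels = torch.zeros_like(inputs["input_ids"])  # placeholder labels for illustration
loss = model(**inputs, labels=labels).loss
```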
id="transformers.XmodForQuestionAnswering" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XmodForQuestionAnswering"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xmod/modeling_xmod.py#L1552" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60"></span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodForQuestionAnswering.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodForQuestionAnswering.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/xmod#transformers.XmodConfig">XmodConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. 
Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.<!-- HTML_TAG_END --> </span></span> </li></ul> </div></div> <p data-svelte-h="svelte-pvcgqo">X-MOD Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute <code>span start logits</code> and <code>span end logits</code>).</p> <p data-svelte-h="svelte-hmtw9k">This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel">PreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)</p> <p data-svelte-h="svelte-hswkmf">This model is also a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XmodForQuestionAnswering.forward"><!-- HTML_TAG_START --><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>forward</span></h4><!-- HTML_TAG_END --> <a id="transformers.XmodForQuestionAnswering.forward" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XmodForQuestionAnswering.forward"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 
0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xmod/modeling_xmod.py#L1564" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">lang_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_type_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">position_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_mask<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">inputs_embeds<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">start_positions<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">end_positions<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div 
class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodForQuestionAnswering.forward.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodForQuestionAnswering.forward.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>input_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>) — Indices of input sequence tokens in the vocabulary.<p></p> <p>Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p> <p><a href="../glossary#input-ids">What are input IDs?</a><!-- HTML_TAG_END --> </p></span></span></li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodForQuestionAnswering.forward.lang_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodForQuestionAnswering.forward.lang_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>lang_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Indices of the language adapters that should be activated 
for each sample, respectively. Default: the index that corresponds to <code>self.config.default_language</code>.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodForQuestionAnswering.forward.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodForQuestionAnswering.forward.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>attention_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a><!-- HTML_TAG_END --> </p></span></span></li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodForQuestionAnswering.forward.token_type_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodForQuestionAnswering.forward.token_type_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>token_type_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Segment token indices to indicate first and second portions of the inputs. 
Indices are selected in <code>[0, 1]</code>:<p></p> <ul> <li>0 corresponds to a <em>sentence A</em> token,</li> <li>1 corresponds to a <em>sentence B</em> token.</li> </ul> <p><a href="../glossary#token-type-ids">What are token type IDs?</a><!-- HTML_TAG_END --> </p></span></span></li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodForQuestionAnswering.forward.position_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodForQuestionAnswering.forward.position_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>position_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range <code>[0, config.max_position_embeddings - 1]</code>.<p></p> <p><a href="../glossary#position-ids">What are position IDs?</a><!-- HTML_TAG_END --> </p></span></span></li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodForQuestionAnswering.forward.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodForQuestionAnswering.forward.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>head_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(num_heads,)</code> or <code>(num_layers, num_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the self-attention modules. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul><!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodForQuestionAnswering.forward.inputs_embeds" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodForQuestionAnswering.forward.inputs_embeds"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>inputs_embeds</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>, <em>optional</em>) — Optionally, instead of passing <code>input_ids</code> you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert <code>input_ids</code> indices into associated vectors than the model’s internal embedding lookup matrix.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodForQuestionAnswering.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodForQuestionAnswering.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. 
See <code>attentions</code> under returned tensors for more detail.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodForQuestionAnswering.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodForQuestionAnswering.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. See <code>hidden_states</code> under returned tensors for more detail.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodForQuestionAnswering.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodForQuestionAnswering.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodForQuestionAnswering.forward.start_positions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodForQuestionAnswering.forward.start_positions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" 
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>start_positions</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size,)</code>, <em>optional</em>) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (<code>sequence_length</code>). Position outside of the sequence are not taken into account for computing the loss.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XmodForQuestionAnswering.forward.end_positions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XmodForQuestionAnswering.forward.end_positions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>end_positions</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size,)</code>, <em>optional</em>) — Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (<code>sequence_length</code>). 
Position outside of the sequence are not taken into account for computing the loss.<!-- HTML_TAG_END --> </span></span> </li></ul> </div></div> <p data-svelte-h="svelte-5r54iv">The <a href="/docs/transformers/v4.34.0/en/model_doc/xmod#transformers.XmodForQuestionAnswering">XmodForQuestionAnswering</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div></div></div> <p></p> <script> { __sveltekit_1yybmhh = { assets: "/docs/transformers/v4.34.0/en", base: "/docs/transformers/v4.34.0/en", env: {} }; const element = document.currentScript.parentElement; const data = [null,null]; Promise.all([ import("/docs/transformers/v4.34.0/en/_app/immutable/entry/start.c2db227a.js"), import("/docs/transformers/v4.34.0/en/_app/immutable/entry/app.879d9b87.js") ]).then(([kit, app]) => { kit.start(app, element, { node_ids: [0, 285], data, form: null, error: null }); }); } </script> <!-- HTML_TAG_END --></div> <div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/umt5" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>UMT5</a> <a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/xglm" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">XGLM<span class="ml-2 translate-y-px">→</span></a></div></div></div> <div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" data-props="{&quot;chapter&quot;:{&quot;title&quot;:&quot;X-MOD&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;xmod&quot;,&quot;url&quot;:&quot;#xmod&quot;,&quot;sections&quot;:[{&quot;title&quot;:&quot;Overview&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;overview&quot;,&quot;url&quot;:&quot;#overview&quot;},{&quot;title&quot;:&quot;Adapter Usage&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;adapter-usage&quot;,&quot;url&quot;:&quot;#adapter-usage&quot;,&quot;sections&quot;:[{&quot;title&quot;:&quot;Input language&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;input-language&quot;,&quot;url&quot;:&quot;#input-language&quot;},{&quot;title&quot;:&quot;Fine-tuning&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;finetuning&quot;,&quot;url&quot;:&quot;#finetuning&quot;},{&quot;title&quot;:&quot;Cross-lingual 
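Example (a minimal extractive question-answering sketch; `facebook/xmod-base` is a base checkpoint that is not fine-tuned for QA and the question/context are made up, so the decoded span is purely illustrative):

```
>>> from transformers import AutoTokenizer, XmodForQuestionAnswering
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/xmod-base")
>>> model = XmodForQuestionAnswering.from_pretrained("facebook/xmod-base")
>>> # activate the English language adapter for all inputs
>>> model.set_default_language("en_XX")

>>> question, context = "Who wrote the report?", "The report was written by Jane Doe in 2021."
>>> inputs = tokenizer(question, context, return_tensors="pt")

>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # pick the most likely start/end token positions and decode the predicted span
>>> start_index = outputs.start_logits.argmax()
>>> end_index = outputs.end_logits.argmax()
>>> answer_tokens = inputs.input_ids[0, start_index : end_index + 1]
>>> print(tokenizer.decode(answer_tokens, skip_special_tokens=True))
```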
2023-10-05T13:33:32.930Z
XGLM
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/xglm
# XGLM ## Overview The XGLM model was proposed in [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O’Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li. The abstract from the paper is the following: _Large-scale autoregressive language models such as GPT-3 are few-shot learners that can perform a wide range of language tasks without fine-tuning. While these models are known to be able to jointly represent many different languages, their training data is dominated by English, potentially limiting their cross-lingual generalization. In this work, we train multilingual autoregressive language models on a balanced corpus covering a diverse set of languages, and study their few- and zero-shot learning capabilities in a wide range of tasks. Our largest model with 7.5 billion parameters sets new state of the art in few-shot learning in more than 20 representative languages, outperforming GPT-3 of comparable size in multilingual commonsense reasoning (with +7.4% absolute accuracy improvement in 0-shot settings and +9.4% in 4-shot settings) and natural language inference (+5.4% in each of 0-shot and 4-shot settings). On the FLORES-101 machine translation benchmark, our model outperforms GPT-3 on 171 out of 182 translation directions with 32 training examples, while surpassing the official supervised baseline in 45 directions. We present a detailed analysis of where the model succeeds and fails, showing in particular that it enables cross-lingual in-context learning on some tasks, while there is still room for improvement on surface form robustness and adaptation to tasks that do not have a natural cloze form. Finally, we evaluate our models in social value tasks such as hate speech detection in five languages and find it has limitations similar to comparable sized GPT-3 models._ This model was contributed by [Suraj](https://huggingface.co/valhalla). The original code can be found [here](https://github.com/pytorch/fairseq/tree/main/examples/xglm). ## Documentation resources - [Causal language modeling task guide](../tasks/language_modeling) ## XGLMConfig ### class transformers.XGLMConfig [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xglm/configuration_xglm.py#L29) ( vocab\_size = 256008 max\_position\_embeddings = 2048 d\_model = 1024 ffn\_dim = 4096 num\_layers = 24 attention\_heads = 16 activation\_function = 'gelu' dropout = 0.1 attention\_dropout = 0.1 activation\_dropout = 0.0 layerdrop = 0.0 init\_std = 0.02 scale\_embedding = True use\_cache = True decoder\_start\_token\_id = 2 pad\_token\_id = 1 bos\_token\_id = 0 eos\_token\_id = 2 \*\*kwargs ) Parameters - **vocab\_size** (`int`, _optional_, defaults to 256008) — Vocabulary size of the XGLM model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [XGLMModel](/docs/transformers/v4.34.0/en/model_doc/xglm#transformers.XGLMModel) or [FlaxXGLMModel](/docs/transformers/v4.34.0/en/model_doc/xglm#transformers.FlaxXGLMModel). - **max\_position\_embeddings** (`int`, _optional_, defaults to 2048) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). 
- **d\_model** (`int`, _optional_, defaults to 1024) — Dimension of the layers and the pooler layer. - **ffn\_dim** (`int`, _optional_, defaults to 4096) — Dimension of the “intermediate” (often named feed-forward) layer in the decoder. - **num\_layers** (`int`, _optional_, defaults to 24) — Number of hidden layers in the Transformer decoder. - **attention\_heads** (`int`, _optional_, defaults to 16) — Number of attention heads for each attention layer in the Transformer decoder. - **activation\_function** (`str` or `function`, _optional_, defaults to `"gelu"`) — The non-linear activation function (function or string) in the decoder. If string, `"gelu"`, `"relu"`, `"silu"` and `"gelu_new"` are supported. - **dropout** (`float`, _optional_, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings and decoder. - **attention\_dropout** (`float`, _optional_, defaults to 0.1) — The dropout ratio for the attention probabilities. - **activation\_dropout** (`float`, _optional_, defaults to 0.0) — The dropout ratio for activations inside the fully connected layer. - **layerdrop** (`float`, _optional_, defaults to 0.0) — The LayerDrop probability for the decoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more details. - **init\_std** (`float`, _optional_, defaults to 0.02) — The standard deviation of the truncated\_normal\_initializer for initializing all weight matrices. - **scale\_embedding** (`bool`, _optional_, defaults to `True`) — Scale embeddings by multiplying them by sqrt(d\_model). - **use\_cache** (`bool`, _optional_, defaults to `True`) — Whether or not the model should return the last key/values attentions (not used by all models). This is the configuration class to store the configuration of a [XGLMModel](/docs/transformers/v4.34.0/en/model_doc/xglm#transformers.XGLMModel). It is used to instantiate an XGLM model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the XGLM [facebook/xglm-564M](https://huggingface.co/facebook/xglm-564M) architecture. Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information. Example:

```
>>> from transformers import XGLMModel, XGLMConfig

>>> # Initializing an XGLM facebook/xglm-564M style configuration
>>> configuration = XGLMConfig()

>>> # Initializing a model from that configuration
>>> model = XGLMModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```

## XGLMTokenizer ### class transformers.XGLMTokenizer [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xglm/tokenization_xglm.py#L43) ( vocab\_file bos\_token = '<s>' eos\_token = '</s>' sep\_token = '</s>' cls\_token = '<s>' unk\_token = '<unk>' pad\_token = '<pad>' sp\_model\_kwargs: typing.Union\[typing.Dict\[str, typing.Any\], NoneType\] = None \*\*kwargs ) Parameters - **vocab\_file** (`str`) — Path to the vocabulary file. - **bos\_token** (`str`, _optional_, defaults to `"<s>"`) — The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token. When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the `cls_token`.
- **eos\_token** (`str`, _optional_, defaults to `"</s>"`) — The end of sequence token. When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the `sep_token`. - **sep\_token** (`str`, _optional_, defaults to `"</s>"`) — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. - **cls\_token** (`str`, _optional_, defaults to `"<s>"`) — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. - **unk\_token** (`str`, _optional_, defaults to `"<unk>"`) — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. - **pad\_token** (`str`, _optional_, defaults to `"<pad>"`) — The token used for padding, for example when batching sequences of different lengths. - **mask\_token** (`str`, _optional_, defaults to `"<mask>"`) — The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict. - **additional\_special\_tokens** (`List[str]`, _optional_, defaults to `["<s>NOTUSED", "</s>NOTUSED"]`) — Additional special tokens used by the tokenizer. - **sp\_model\_kwargs** (`dict`, _optional_) — Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things, to set: - `enable_sampling`: Enable subword regularization. - `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout. - `nbest_size = {0,1}`: No sampling is performed. - `nbest_size > 1`: samples from the nbest\_size results. - `nbest_size < 0`: assuming that nbest\_size is infinite and samples from the all hypothesis (lattice) using forward-filtering-and-backward-sampling algorithm. - `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for BPE-dropout. - **sp\_model** (`SentencePieceProcessor`) — The _SentencePiece_ processor that is used for every conversion (string, tokens and IDs). Adapted from [RobertaTokenizer](/docs/transformers/v4.34.0/en/model_doc/roberta#transformers.RobertaTokenizer) and [XLNetTokenizer](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetTokenizer). Based on [SentencePiece](https://github.com/google/sentencepiece). This tokenizer inherits from [PreTrainedTokenizer](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer) which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. #### build\_inputs\_with\_special\_tokens [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xglm/tokenization_xglm.py#L189) ( token\_ids\_0: typing.List\[int\] token\_ids\_1: typing.Optional\[typing.List\[int\]\] = None ) → `List[int]` Parameters - **token\_ids\_0** (`List[int]`) — List of IDs to which the special tokens will be added. - **token\_ids\_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs. List of [input IDs](../glossary#input-ids) with the appropriate special tokens. 
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. An XLM-RoBERTa sequence has the following format: - single sequence: `<s> X </s>` - pair of sequences: `<s> A </s></s> B </s>` #### get\_special\_tokens\_mask [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xglm/tokenization_xglm.py#L214) ( token\_ids\_0: typing.List\[int\] token\_ids\_1: typing.Optional\[typing.List\[int\]\] = None already\_has\_special\_tokens: bool = False ) → `List[int]` Parameters - **token\_ids\_0** (`List[int]`) — List of IDs. - **token\_ids\_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs. - **already\_has\_special\_tokens** (`bool`, _optional_, defaults to `False`) — Whether or not the token list is already formatted with special tokens for the model. A list of integers in the range \[0, 1\]: 1 for a special token, 0 for a sequence token. Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer `prepare_for_model` method. #### create\_token\_type\_ids\_from\_sequences [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xglm/tokenization_xglm.py#L242) ( token\_ids\_0: typing.List\[int\] token\_ids\_1: typing.Optional\[typing.List\[int\]\] = None ) → `List[int]` Parameters - **token\_ids\_0** (`List[int]`) — List of IDs. - **token\_ids\_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs. List of zeros. Create a mask from the two sequences passed to be used in a sequence-pair classification task. XLM-RoBERTa does not make use of token type ids, therefore a list of zeros is returned. #### save\_vocabulary [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xglm/tokenization_xglm.py#L298) ( save\_directory: str filename\_prefix: typing.Optional\[str\] = None ) ## XGLMTokenizerFast ### class transformers.XGLMTokenizerFast [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xglm/tokenization_xglm_fast.py#L49) ( vocab\_file = None tokenizer\_file = None bos\_token = '<s>' eos\_token = '</s>' sep\_token = '</s>' cls\_token = '<s>' unk\_token = '<unk>' pad\_token = '<pad>' \*\*kwargs ) Parameters - **vocab\_file** (`str`) — Path to the vocabulary file. - **bos\_token** (`str`, _optional_, defaults to `"<s>"`) — The beginning of sequence token that was used during pretraining. Can be used a sequence classifier token. When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the `cls_token`. - **eos\_token** (`str`, _optional_, defaults to `"</s>"`) — The end of sequence token. When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the `sep_token`. - **sep\_token** (`str`, _optional_, defaults to `"</s>"`) — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. - **cls\_token** (`str`, _optional_, defaults to `"<s>"`) — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). 
It is the first token of the sequence when built with special tokens. - **unk\_token** (`str`, _optional_, defaults to `"<unk>"`) — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. - **pad\_token** (`str`, _optional_, defaults to `"<pad>"`) — The token used for padding, for example when batching sequences of different lengths. - **additional\_special\_tokens** (`List[str]`, _optional_, defaults to `["<s>NOTUSED", "</s>NOTUSED"]`) — Additional special tokens used by the tokenizer. Construct a “fast” XGLM tokenizer (backed by HuggingFace’s _tokenizers_ library). Adapted from [RobertaTokenizer](/docs/transformers/v4.34.0/en/model_doc/roberta#transformers.RobertaTokenizer) and [XLNetTokenizer](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetTokenizer). Based on [BPE](https://huggingface.co/docs/tokenizers/python/latest/components.html?highlight=BPE#models). This tokenizer inherits from [PreTrainedTokenizerFast](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast) which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. #### build\_inputs\_with\_special\_tokens [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xglm/tokenization_xglm_fast.py#L142) ( token\_ids\_0: typing.List\[int\] token\_ids\_1: typing.Optional\[typing.List\[int\]\] = None ) → `List[int]` Parameters - **token\_ids\_0** (`List[int]`) — List of IDs to which the special tokens will be added. - **token\_ids\_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs. List of [input IDs](../glossary#input-ids) with the appropriate special tokens. Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. An XLM-RoBERTa sequence has the following format: - single sequence: `<s> X </s>` - pair of sequences: `<s> A </s></s> B </s>` #### create\_token\_type\_ids\_from\_sequences [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xglm/tokenization_xglm_fast.py#L167) ( token\_ids\_0: typing.List\[int\] token\_ids\_1: typing.Optional\[typing.List\[int\]\] = None ) → `List[int]` Parameters - **token\_ids\_0** (`List[int]`) — List of IDs. - **token\_ids\_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs. List of zeros. Create a mask from the two sequences passed to be used in a sequence-pair classification task. XLM-RoBERTa does not make use of token type ids, therefore a list of zeros is returned. ## XGLMModel ### class transformers.XGLMModel [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xglm/modeling_xglm.py#L515) ( config: XGLMConfig embed\_tokens: typing.Optional\[torch.nn.modules.sparse.Embedding\] = None ) Parameters - **config** ([XGLMConfig](/docs/transformers/v4.34.0/en/model_doc/xglm#transformers.XGLMConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. 
- **embed\_tokens** (`nn.Embedding`, _optional_) — output embedding The bare XGLM Model transformer outputting raw hidden-states without any specific head on top. This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.). This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Transformer decoder consisting of _config.num\_layers_ layers. Each layer is an `XGLMDecoderLayer`. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xglm/modeling_xglm.py#L576) ( input\_ids: typing.Optional\[torch.Tensor\] = None attention\_mask: typing.Optional\[torch.Tensor\] = None position\_ids: typing.Optional\[torch.Tensor\] = None encoder\_hidden\_states: typing.Optional\[torch.Tensor\] = None encoder\_attention\_mask: typing.Optional\[torch.Tensor\] = None head\_mask: typing.Optional\[torch.Tensor\] = None cross\_attn\_head\_mask: typing.Optional\[torch.Tensor\] = None past\_key\_values: typing.Optional\[typing.List\[torch.FloatTensor\]\] = None inputs\_embeds: typing.Optional\[torch.Tensor\] = None use\_cache: typing.Optional\[bool\] = None output\_attentions: typing.Optional\[bool\] = None output\_hidden\_states: typing.Optional\[bool\] = None return\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.BaseModelOutputWithPastAndCrossAttentions](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions) or `tuple(torch.FloatTensor)` Parameters - **input\_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.**call**()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details. [What are input IDs?](../glossary#input-ids) - **attention\_mask** (`torch.Tensor` of shape `(batch_size, sequence_length)`, _optional_) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - **position\_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, _optional_) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids) - **encoder\_hidden\_states** (`torch.FloatTensor` of shape `(batch_size, encoder_sequence_length, hidden_size)`, _optional_) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
- **encoder\_attention\_mask** (`torch.LongTensor` of shape `(batch_size, encoder_sequence_length)`, _optional_) — Mask to avoid performing cross-attention on padding tokens indices of encoder input\_ids. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - **head\_mask** (`torch.Tensor` of shape `(num_layers, attention_heads)`, _optional_) — Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. - **cross\_attn\_head\_mask** (`torch.Tensor` of shape `(num_layers, attention_heads)`, _optional_) — Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. - **past\_key\_values** (`tuple(tuple(torch.FloatTensor))`, _optional_, returned when `use_cache=True` is passed or when `config.use_cache=True`) — Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of shape `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`. Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding. If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that don’t have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)`. inputs\_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, _optional_): Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model’s internal embedding lookup matrix. - **output\_attentions** (`bool`, _optional_) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. - **output\_hidden\_states** (`bool`, _optional_) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. - **return\_dict** (`bool`, _optional_) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple. A [transformers.modeling\_outputs.BaseModelOutputWithPastAndCrossAttentions](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([XGLMConfig](/docs/transformers/v4.34.0/en/model_doc/xglm#transformers.XGLMConfig)) and inputs. - **last\_hidden\_state** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`) — Sequence of hidden-states at the output of the last layer of the model. If `past_key_values` is used only the last hidden-state of the sequences of shape `(batch_size, 1, hidden_size)` is output. 
- **past\_key\_values** (`tuple(tuple(torch.FloatTensor))`, _optional_, returned when `use_cache=True` is passed or when `config.use_cache=True`) — Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)`) and optionally if `config.is_encoder_decoder=True` 2 additional tensors of shape `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`. Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if `config.is_encoder_decoder=True` in the cross-attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding. - **hidden\_states** (`tuple(torch.FloatTensor)`, _optional_, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. - **attentions** (`tuple(torch.FloatTensor)`, _optional_, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. - **cross\_attentions** (`tuple(torch.FloatTensor)`, _optional_, returned when `output_attentions=True` and `config.add_cross_attention=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. The [XGLMModel](/docs/transformers/v4.34.0/en/model_doc/xglm#transformers.XGLMModel) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example:

```
>>> from transformers import AutoTokenizer, XGLMModel
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-564M")
>>> model = XGLMModel.from_pretrained("facebook/xglm-564M")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)

>>> last_hidden_states = outputs.last_hidden_state
```

## XGLMForCausalLM ### class transformers.XGLMForCausalLM [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xglm/modeling_xglm.py#L751) ( config ) Parameters - **config** ([XGLMConfig](/docs/transformers/v4.34.0/en/model_doc/xglm#transformers.XGLMConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. The XGLM Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings).
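Because it carries a language modeling head, the model can also be used for text generation with `generate()`. A minimal sketch (the prompt and the `max_new_tokens` setting are illustrative, and the default greedy decoding is used for brevity):

```
>>> from transformers import AutoTokenizer, XGLMForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-564M")
>>> model = XGLMForCausalLM.from_pretrained("facebook/xglm-564M")

>>> # encode a prompt and continue it for up to 20 new tokens
>>> inputs = tokenizer("I wanted to tell you", return_tensors="pt")
>>> generated_ids = model.generate(**inputs, max_new_tokens=20)
>>> print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```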
This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xglm/modeling_xglm.py#L775) ( input\_ids: typing.Optional\[torch.Tensor\] = None attention\_mask: typing.Optional\[torch.Tensor\] = None position\_ids: typing.Optional\[torch.Tensor\] = None encoder\_hidden\_states: typing.Optional\[torch.Tensor\] = None encoder\_attention\_mask: typing.Optional\[torch.Tensor\] = None head\_mask: typing.Optional\[torch.Tensor\] = None cross\_attn\_head\_mask: typing.Optional\[torch.Tensor\] = None past\_key\_values: typing.Optional\[typing.List\[torch.FloatTensor\]\] = None inputs\_embeds: typing.Optional\[torch.Tensor\] = None labels: typing.Optional\[torch.Tensor\] = None use\_cache: typing.Optional\[bool\] = None output\_attentions: typing.Optional\[bool\] = None output\_hidden\_states: typing.Optional\[bool\] = None return\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.CausalLMOutputWithCrossAttentions](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.CausalLMOutputWithCrossAttentions) or `tuple(torch.FloatTensor)` Parameters - **input\_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.**call**()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details. [What are input IDs?](../glossary#input-ids) - **attention\_mask** (`torch.Tensor` of shape `(batch_size, sequence_length)`, _optional_) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - **position\_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, _optional_) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids) - **encoder\_hidden\_states** (`torch.FloatTensor` of shape `(batch_size, encoder_sequence_length, hidden_size)`, _optional_) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. - **encoder\_attention\_mask** (`torch.LongTensor` of shape `(batch_size, encoder_sequence_length)`, _optional_) — Mask to avoid performing cross-attention on padding tokens indices of encoder input\_ids. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. 
[What are attention masks?](../glossary#attention-mask) - **head\_mask** (`torch.Tensor` of shape `(num_layers, attention_heads)`, _optional_) — Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. - **cross\_attn\_head\_mask** (`torch.Tensor` of shape `(num_layers, attention_heads)`, _optional_) — Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. - **past\_key\_values** (`tuple(tuple(torch.FloatTensor))`, _optional_, returned when `use_cache=True` is passed or when `config.use_cache=True`) — Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of shape `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`. Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding. If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that don’t have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)`. inputs\_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, _optional_): Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model’s internal embedding lookup matrix. - **output\_attentions** (`bool`, _optional_) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. - **output\_hidden\_states** (`bool`, _optional_) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. - **return\_dict** (`bool`, _optional_) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple. - **labels** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, _optional_) — Labels for computing the masked language modeling loss. Indices should either be in `[0, ..., config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`. A [transformers.modeling\_outputs.CausalLMOutputWithCrossAttentions](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.CausalLMOutputWithCrossAttentions) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([XGLMConfig](/docs/transformers/v4.34.0/en/model_doc/xglm#transformers.XGLMConfig)) and inputs. - **loss** (`torch.FloatTensor` of shape `(1,)`, _optional_, returned when `labels` is provided) — Language modeling loss (for next-token prediction). 
- **logits** (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). - **hidden\_states** (`tuple(torch.FloatTensor)`, _optional_, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. - **attentions** (`tuple(torch.FloatTensor)`, _optional_, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. - **cross\_attentions** (`tuple(torch.FloatTensor)`, _optional_, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Cross attentions weights after the attention softmax, used to compute the weighted average in the cross-attention heads. - **past\_key\_values** (`tuple(tuple(torch.FloatTensor))`, _optional_, returned when `use_cache=True` is passed or when `config.use_cache=True`) — Tuple of `torch.FloatTensor` tuples of length `config.n_layers`, with each tuple containing the cached key, value states of the self-attention and the cross-attention layers if model is used in encoder-decoder setting. Only relevant if `config.is_decoder = True`. Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding. The [XGLMForCausalLM](/docs/transformers/v4.34.0/en/model_doc/xglm#transformers.XGLMForCausalLM) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: ``` >>> import torch >>> from transformers import AutoTokenizer, XGLMForCausalLM >>> tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-564M") >>> model = XGLMForCausalLM.from_pretrained("facebook/xglm-564M") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs, labels=inputs["input_ids"]) >>> loss = outputs.loss >>> logits = outputs.logits ``` ## TFXGLMModel ### class transformers.TFXGLMModel [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xglm/modeling_tf_xglm.py#L736) ( \*args \*\*kwargs ) Parameters - **config** ([XGLMConfig](/docs/transformers/v4.34.0/en/model_doc/xglm#transformers.XGLMConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel.from_pretrained) method to load the model weights. 
config — XGLMConfig embed\_tokens — \[TFSharedEmbeddings\]: output embedding The bare XGLM Model transformer outputting raw hidden-states without any specific head on top. This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in `transformers` accept two formats as input: - having all inputs as keyword arguments (like PyTorch models), or - having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like `model.fit()` things should “just work” for you - just pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: - a single Tensor with `input_ids` only and nothing else: `model(input_ids)` - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])` - a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_ids": input_ids, "token_type_ids": token_type_ids})` Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! Transformer decoder consisting of _config.num\_layers_ layers. Each layer is a `TFXGLMDecoderLayer` #### call [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xglm/modeling_tf_xglm.py#L752) ( input\_ids: TFModelInputType | None = None attention\_mask: np.ndarray | tf.Tensor | None = None position\_ids: np.ndarray | tf.Tensor | None = None encoder\_hidden\_states: np.ndarray | tf.Tensor | None = None encoder\_attention\_mask: np.ndarray | tf.Tensor | None = None head\_mask: np.ndarray | tf.Tensor | None = None cross\_attn\_head\_mask: np.ndarray | tf.Tensor | None = None past\_key\_values: Optional\[Tuple\[Tuple\[Union\[np.ndarray, tf.Tensor\]\]\]\] = None inputs\_embeds: np.ndarray | tf.Tensor | None = None use\_cache: Optional\[bool\] = None output\_attentions: Optional\[bool\] = None output\_hidden\_states: Optional\[bool\] = None return\_dict: Optional\[bool\] = None training: Optional\[bool\] = False \*\*kwargs: Any ) → [transformers.modeling\_tf\_outputs.TFBaseModelOutputWithPastAndCrossAttentions](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFBaseModelOutputWithPastAndCrossAttentions) or `tuple(tf.Tensor)` Parameters - **input\_ids** (`tf.Tensor` of shape `({0})`) — Indices of input sequence tokens in the vocabulary. 
Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.**call**()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details. [What are input IDs?](../glossary#input-ids) - **attention\_mask** (`tf.Tensor` of shape `({0})`, _optional_) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - **position\_ids** (`tf.Tensor` or `Numpy array` of shape `(batch_size, sequence_length)`, _optional_) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids) - **encoder\_hidden\_states** (`tf.Tensor` of shape `(batch_size, encoder_sequence_length, hidden_size)`, _optional_) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. - **encoder\_attention\_mask** (`tf.Tensor` of shape `(batch_size, encoder_sequence_length)`, _optional_) — Mask to avoid performing cross-attention on padding tokens indices of encoder input\_ids. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - **head\_mask** (`tf.Tensor` of shape `(num_layers, attention_heads)`, _optional_) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. - **cross\_attn\_head\_mask** (`tf.Tensor` of shape `(num_layers, attention_heads)`, _optional_) — Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. - **past\_key\_values** (`Tuple[Tuple[tf.Tensor]]` of length `config.num_layers`) — contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that don’t have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)`. - **inputs\_embeds** (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, _optional_) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model’s internal embedding lookup matrix. - **use\_cache** (`bool`, _optional_, defaults to `True`) — If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`). Set to `False` during training, `True` during generation - **output\_attentions** (`bool`, _optional_) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. 
- **output\_hidden\_states** (`bool`, _optional_) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. - **return\_dict** (`bool`, _optional_) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. - **training** (`bool`, _optional_, defaults to `False`) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). A [transformers.modeling\_tf\_outputs.TFBaseModelOutputWithPastAndCrossAttentions](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFBaseModelOutputWithPastAndCrossAttentions) or a tuple of `tf.Tensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([XGLMConfig](/docs/transformers/v4.34.0/en/model_doc/xglm#transformers.XGLMConfig)) and inputs. - **last\_hidden\_state** (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`) — Sequence of hidden-states at the output of the last layer of the model. If `past_key_values` is used only the last hidden-state of the sequences of shape `(batch_size, 1, hidden_size)` is output. - **past\_key\_values** (`List[tf.Tensor]`, _optional_, returned when `use_cache=True` is passed or when `config.use_cache=True`) — List of `tf.Tensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size, num_heads, sequence_length, embed_size_per_head)`). Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding. - **hidden\_states** (`tuple(tf.FloatTensor)`, _optional_, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs. - **attentions** (`tuple(tf.Tensor)`, _optional_, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. - **cross\_attentions** (`tuple(tf.Tensor)`, _optional_, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. The [TFXGLMModel](/docs/transformers/v4.34.0/en/model_doc/xglm#transformers.TFXGLMModel) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
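As noted in the return description above, the `past_key_values` returned when `use_cache=True` can be passed back to a later call so that only the newest token has to be processed. A minimal sketch of this two-step pattern (the token fed in the second call is arbitrary and purely illustrative):

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFXGLMModel

tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-564M")
model = TFXGLMModel.from_pretrained("facebook/xglm-564M")

inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")

# first pass over the whole prompt, requesting the key/value cache
outputs = model(inputs["input_ids"], use_cache=True)
past = outputs.past_key_values

# second pass: feed only a single new token together with the cached
# states, so the earlier positions are not re-encoded
next_token = tf.constant([[tokenizer.eos_token_id]])  # any token id, for illustration only
outputs = model(next_token, past_key_values=past, use_cache=True)
```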
Example: ``` >>> from transformers import AutoTokenizer, TFXGLMModel >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-564M") >>> model = TFXGLMModel.from_pretrained("facebook/xglm-564M") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf") >>> outputs = model(inputs) >>> last_hidden_states = outputs.last_hidden_state ``` ## TFXGLMForCausalLM ### class transformers.TFXGLMForCausalLM [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xglm/modeling_tf_xglm.py#L803) ( \*args \*\*kwargs ) Parameters - **config** ([XGLMConfig](/docs/transformers/v4.34.0/en/model_doc/xglm#transformers.XGLMConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel.from_pretrained) method to load the model weights. The XGLM Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings). This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in `transformers` accept two formats as input: - having all inputs as keyword arguments (like PyTorch models), or - having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like `model.fit()` things should “just work” for you - just pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: - a single Tensor with `input_ids` only and nothing else: `model(input_ids)` - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])` - a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_ids": input_ids, "token_type_ids": token_type_ids})` Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! 
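A minimal sketch of the three possibilities listed above for gathering all inputs in the first positional argument (the checkpoint is used purely as an illustration):

```python
from transformers import AutoTokenizer, TFXGLMForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-564M")
model = TFXGLMForCausalLM.from_pretrained("facebook/xglm-564M")

enc = tokenizer("Hello, my dog is cute", return_tensors="tf")

# 1. a single tensor with input_ids only and nothing else
out_ids = model(enc["input_ids"])

# 2. a list of input tensors, in the order given in the docstring
out_list = model([enc["input_ids"], enc["attention_mask"]])

# 3. a dictionary mapping input names to tensors
out_dict = model({"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"]})
```

All three calls run the same forward pass; passing everything as keyword arguments, as in the PyTorch examples, works as well.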
#### call [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xglm/modeling_tf_xglm.py#L853) ( input\_ids: TFModelInputType | None = None attention\_mask: np.ndarray | tf.Tensor | None = None position\_ids: np.ndarray | tf.Tensor | None = None encoder\_hidden\_states: np.ndarray | tf.Tensor | None = None encoder\_attention\_mask: np.ndarray | tf.Tensor | None = None head\_mask: np.ndarray | tf.Tensor | None = None cross\_attn\_head\_mask: np.ndarray | tf.Tensor | None = None past\_key\_values: Optional\[Tuple\[Tuple\[Union\[np.ndarray, tf.Tensor\]\]\]\] = None inputs\_embeds: np.ndarray | tf.Tensor | None = None labels: np.ndarray | tf.Tensor | None = None use\_cache: Optional\[bool\] = None output\_attentions: Optional\[bool\] = None output\_hidden\_states: Optional\[bool\] = None return\_dict: Optional\[bool\] = None training: Optional\[bool\] = False \*\*kwargs: Any ) → [transformers.modeling\_tf\_outputs.TFCausalLMOutputWithCrossAttentions](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions) or `tuple(tf.Tensor)` Parameters - **input\_ids** (`tf.Tensor` of shape `({0})`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.**call**()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details. [What are input IDs?](../glossary#input-ids) - **attention\_mask** (`tf.Tensor` of shape `({0})`, _optional_) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - **position\_ids** (`tf.Tensor` or `Numpy array` of shape `(batch_size, sequence_length)`, _optional_) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids) - **encoder\_hidden\_states** (`tf.Tensor` of shape `(batch_size, encoder_sequence_length, hidden_size)`, _optional_) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. - **encoder\_attention\_mask** (`tf.Tensor` of shape `(batch_size, encoder_sequence_length)`, _optional_) — Mask to avoid performing cross-attention on padding tokens indices of encoder input\_ids. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - **head\_mask** (`tf.Tensor` of shape `(num_layers, attention_heads)`, _optional_) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. - **cross\_attn\_head\_mask** (`tf.Tensor` of shape `(num_layers, attention_heads)`, _optional_) — Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. 
- **past\_key\_values** (`Tuple[Tuple[tf.Tensor]]` of length `config.num_layers`) — contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that don’t have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)`. - **inputs\_embeds** (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, _optional_) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model’s internal embedding lookup matrix. - **use\_cache** (`bool`, _optional_, defaults to `True`) — If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`). Set to `False` during training, `True` during generation - **output\_attentions** (`bool`, _optional_) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. - **output\_hidden\_states** (`bool`, _optional_) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. - **return\_dict** (`bool`, _optional_) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. - **training** (`bool`, _optional_, defaults to `False`) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). - **labels** (`np.ndarray` or `tf.Tensor` of shape `(batch_size, sequence_length)`, _optional_) — Labels for language modeling. Note that the labels **are shifted** inside the model, i.e. you can set `labels = input_ids` Indices are selected in `[-100, 0, ..., config.vocab_size]` All labels set to `-100` are ignored (masked), the loss is only computed for labels in `[0, ..., config.vocab_size]` A [transformers.modeling\_tf\_outputs.TFCausalLMOutputWithCrossAttentions](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions) or a tuple of `tf.Tensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([XGLMConfig](/docs/transformers/v4.34.0/en/model_doc/xglm#transformers.XGLMConfig)) and inputs. - **loss** (`tf.Tensor` of shape `(n,)`, _optional_, where n is the number of non-masked labels, returned when `labels` is provided) — Language modeling loss (for next-token prediction). - **logits** (`tf.Tensor` of shape `(batch_size, sequence_length, config.vocab_size)`) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). 
- **hidden\_states** (`tuple(tf.Tensor)`, _optional_, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs. - **attentions** (`tuple(tf.Tensor)`, _optional_, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. - **cross\_attentions** (`tuple(tf.Tensor)`, _optional_, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. - **past\_key\_values** (`List[tf.Tensor]`, _optional_, returned when `use_cache=True` is passed or when `config.use_cache=True`) — List of `tf.Tensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size, num_heads, sequence_length, embed_size_per_head)`). Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
The [TFXGLMForCausalLM](/docs/transformers/v4.34.0/en/model_doc/xglm#transformers.TFXGLMForCausalLM) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: ``` >>> from transformers import AutoTokenizer, TFXGLMForCausalLM >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-564M") >>> model = TFXGLMForCausalLM.from_pretrained("facebook/xglm-564M") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf") >>> outputs = model(inputs) >>> logits = outputs.logits ``` ## FlaxXGLMModel ### class transformers.FlaxXGLMModel [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xglm/modeling_flax_xglm.py#L689) ( config: XGLMConfig input\_shape: typing.Tuple\[int\] = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> \_do\_init: bool = True \*\*kwargs ) Parameters - **config** ([XGLMConfig](/docs/transformers/v4.34.0/en/model_doc/xglm#transformers.XGLMConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained) method to load the model weights. - **dtype** (`jax.numpy.dtype`, _optional_, defaults to `jax.numpy.float32`) — The data type of the computation. Can be one of `jax.numpy.float32`, `jax.numpy.float16` (on GPUs) and `jax.numpy.bfloat16` (on TPUs). This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given `dtype`. **Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.** If you wish to change the dtype of the model parameters, see [to\_fp16()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_fp16) and [to\_bf16()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_bf16). The bare XGLM Model transformer outputting raw hidden-states without any specific head on top. This model inherits from [FlaxPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel). 
Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a Flax Linen [flax.nn.Module](https://flax.readthedocs.io/en/latest/_autosummary/flax.nn.module.html) subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matter related to general usage and behavior. Finally, this model supports inherent JAX features such as: - [Just-In-Time (JIT) compilation](https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit) - [Automatic Differentiation](https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation) - [Vectorization](https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap) - [Parallelization](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap) #### \_\_call\_\_ [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xglm/modeling_flax_xglm.py#L611) ( input\_ids: Array attention\_mask: typing.Optional\[jax.Array\] = None position\_ids: typing.Optional\[jax.Array\] = None encoder\_hidden\_states: typing.Optional\[jax.Array\] = None encoder\_attention\_mask: typing.Optional\[jax.Array\] = None output\_attentions: typing.Optional\[bool\] = None output\_hidden\_states: typing.Optional\[bool\] = None return\_dict: typing.Optional\[bool\] = None train: bool = False params: dict = None past\_key\_values: dict = None dropout\_rng: PRNGKey = None ) → [transformers.modeling\_flax\_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions) or `tuple(torch.FloatTensor)` Parameters - **input\_ids** (`jnp.ndarray` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.**call**()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details. [What are input IDs?](../glossary#input-ids) - **attention\_mask** (`jnp.ndarray` of shape `(batch_size, sequence_length)`, _optional_) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - **position\_ids** (`numpy.ndarray` of shape `(batch_size, sequence_length)`, _optional_) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. - **output\_attentions** (`bool`, _optional_) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. - **output\_hidden\_states** (`bool`, _optional_) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. - **return\_dict** (`bool`, _optional_) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple. 
A [transformers.modeling\_flax\_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([XGLMConfig](/docs/transformers/v4.34.0/en/model_doc/xglm#transformers.XGLMConfig)) and inputs. - **last\_hidden\_state** (`jnp.ndarray` of shape `(batch_size, sequence_length, hidden_size)`) — Sequence of hidden-states at the output of the last layer of the model. If `past_key_values` is used only the last hidden-state of the sequences of shape `(batch_size, 1, hidden_size)` is output. - **past\_key\_values** (`tuple(tuple(jnp.ndarray))`, _optional_, returned when `use_cache=True` is passed or when `config.use_cache=True`) — Tuple of `tuple(jnp.ndarray)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)`) and optionally if `config.is_encoder_decoder=True` 2 additional tensors of shape `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`. Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if `config.is_encoder_decoder=True` in the cross-attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding. - **hidden\_states** (`tuple(jnp.ndarray)`, _optional_, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `jnp.ndarray` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs. - **attentions** (`tuple(jnp.ndarray)`, _optional_, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `jnp.ndarray` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. - **cross\_attentions** (`tuple(jnp.ndarray)`, _optional_, returned when `output_attentions=True` and `config.add_cross_attention=True` is passed or when `config.output_attentions=True`) — Tuple of `jnp.ndarray` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. The `FlaxXGLMPreTrainedModel` forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
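Since the Flax forward pass is a pure function of its inputs and parameters, it can be wrapped in `jax.jit` for compilation, one of the JAX features listed above. A minimal sketch (the `forward` helper and the explicit `params` argument are illustrative choices, not requirements):

```python
import jax
import jax.numpy as jnp
from transformers import AutoTokenizer, FlaxXGLMModel

tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-564M")
model = FlaxXGLMModel.from_pretrained("facebook/xglm-564M")

inputs = tokenizer("Hello, my dog is cute", return_tensors="np")


@jax.jit
def forward(params, input_ids, attention_mask):
    # pass the parameters explicitly so the jitted function stays pure
    return model(input_ids, attention_mask=attention_mask, params=params).last_hidden_state


hidden = forward(model.params, jnp.asarray(inputs["input_ids"]), jnp.asarray(inputs["attention_mask"]))
```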
Example: ``` >>> from transformers import AutoTokenizer, FlaxXGLMModel >>> tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-564M") >>> model = FlaxXGLMModel.from_pretrained("facebook/xglm-564M") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="jax") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state ``` ## FlaxXGLMForCausalLM ### class transformers.FlaxXGLMForCausalLM [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xglm/modeling_flax_xglm.py#L766) ( config: XGLMConfig input\_shape: typing.Tuple\[int\] = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> \_do\_init: bool = True \*\*kwargs ) Parameters - **config** ([XGLMConfig](/docs/transformers/v4.34.0/en/model_doc/xglm#transformers.XGLMConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained) method to load the model weights. - **dtype** (`jax.numpy.dtype`, _optional_, defaults to `jax.numpy.float32`) — The data type of the computation. Can be one of `jax.numpy.float32`, `jax.numpy.float16` (on GPUs) and `jax.numpy.bfloat16` (on TPUs). This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given `dtype`. **Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.** If you wish to change the dtype of the model parameters, see [to\_fp16()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_fp16) and [to\_bf16()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_bf16). The XGLM Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings). This model inherits from [FlaxPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a Flax Linen [flax.nn.Module](https://flax.readthedocs.io/en/latest/_autosummary/flax.nn.module.html) subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matter related to general usage and behavior. 
Finally, this model supports inherent JAX features such as: - [Just-In-Time (JIT) compilation](https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit) - [Automatic Differentiation](https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation) - [Vectorization](https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap) - [Parallelization](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap) #### \_\_call\_\_ [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xglm/modeling_flax_xglm.py#L611) ( input\_ids: Array attention\_mask: typing.Optional\[jax.Array\] = None position\_ids: typing.Optional\[jax.Array\] = None encoder\_hidden\_states: typing.Optional\[jax.Array\] = None encoder\_attention\_mask: typing.Optional\[jax.Array\] = None output\_attentions: typing.Optional\[bool\] = None output\_hidden\_states: typing.Optional\[bool\] = None return\_dict: typing.Optional\[bool\] = None train: bool = False params: dict = None past\_key\_values: dict = None dropout\_rng: PRNGKey = None ) → [transformers.modeling\_flax\_outputs.FlaxCausalLMOutputWithCrossAttentions](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions) or `tuple(torch.FloatTensor)` Parameters - **input\_ids** (`jnp.ndarray` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.**call**()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details. [What are input IDs?](../glossary#input-ids) - **attention\_mask** (`jnp.ndarray` of shape `(batch_size, sequence_length)`, _optional_) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - **position\_ids** (`numpy.ndarray` of shape `(batch_size, sequence_length)`, _optional_) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. - **output\_attentions** (`bool`, _optional_) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. - **output\_hidden\_states** (`bool`, _optional_) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. - **return\_dict** (`bool`, _optional_) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple. A [transformers.modeling\_flax\_outputs.FlaxCausalLMOutputWithCrossAttentions](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([XGLMConfig](/docs/transformers/v4.34.0/en/model_doc/xglm#transformers.XGLMConfig)) and inputs. 
- **logits** (`jnp.ndarray` of shape `(batch_size, sequence_length, config.vocab_size)`) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). - **hidden\_states** (`tuple(jnp.ndarray)`, _optional_, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `jnp.ndarray` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs. - **attentions** (`tuple(jnp.ndarray)`, _optional_, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `jnp.ndarray` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. - **cross\_attentions** (`tuple(jnp.ndarray)`, _optional_, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `jnp.ndarray` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Cross attentions weights after the attention softmax, used to compute the weighted average in the cross-attention heads. - **past\_key\_values** (`tuple(tuple(jnp.ndarray))`, _optional_, returned when `use_cache=True` is passed or when `config.use_cache=True`) — Tuple of `jnp.ndarray` tuples of length `config.n_layers`, with each tuple containing the cached key, value states of the self-attention and the cross-attention layers if model is used in encoder-decoder setting. Only relevant if `config.is_decoder = True`. Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding. The `FlaxXGLMPreTrainedModel` forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: ``` >>> from transformers import AutoTokenizer, FlaxXGLMForCausalLM >>> tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-564M") >>> model = FlaxXGLMForCausalLM.from_pretrained("facebook/xglm-564M") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="np") >>> outputs = model(**inputs) >>> # retrieve logits for the next token >>> next_token_logits = outputs.logits[:, -1] ```
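Continuing the example above, a single greedy decoding step could be sketched as follows (an illustration only; for full text generation the model's `generate()` method is the usual entry point):

```python
import jax.numpy as jnp

# pick the highest-scoring token id for each sequence in the batch
next_token_id = jnp.argmax(next_token_logits, axis=-1)

# append it to the running sequence for the next forward pass
input_ids = jnp.concatenate([jnp.asarray(inputs["input_ids"]), next_token_id[:, None]], axis=-1)
```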
classification&quot;,&quot;id&quot;:&quot;tasks/audio_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/audio_classification&quot;},{&quot;title&quot;:&quot;Automatic speech recognition&quot;,&quot;id&quot;:&quot;tasks/asr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/asr&quot;}]},{&quot;title&quot;:&quot;Computer Vision&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Image classification&quot;,&quot;id&quot;:&quot;tasks/image_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/image_classification&quot;},{&quot;title&quot;:&quot;Semantic segmentation&quot;,&quot;id&quot;:&quot;tasks/semantic_segmentation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/semantic_segmentation&quot;},{&quot;title&quot;:&quot;Video classification&quot;,&quot;id&quot;:&quot;tasks/video_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/video_classification&quot;},{&quot;title&quot;:&quot;Object detection&quot;,&quot;id&quot;:&quot;tasks/object_detection&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/object_detection&quot;},{&quot;title&quot;:&quot;Zero-shot object detection&quot;,&quot;id&quot;:&quot;tasks/zero_shot_object_detection&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/zero_shot_object_detection&quot;},{&quot;title&quot;:&quot;Zero-shot image classification&quot;,&quot;id&quot;:&quot;tasks/zero_shot_image_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/zero_shot_image_classification&quot;},{&quot;title&quot;:&quot;Depth estimation&quot;,&quot;id&quot;:&quot;tasks/monocular_depth_estimation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/monocular_depth_estimation&quot;}]},{&quot;title&quot;:&quot;Multimodal&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Image captioning&quot;,&quot;id&quot;:&quot;tasks/image_captioning&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/image_captioning&quot;},{&quot;title&quot;:&quot;Document Question Answering&quot;,&quot;id&quot;:&quot;tasks/document_question_answering&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/document_question_answering&quot;},{&quot;title&quot;:&quot;Visual Question Answering&quot;,&quot;id&quot;:&quot;tasks/visual_question_answering&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/visual_question_answering&quot;},{&quot;title&quot;:&quot;Text to speech&quot;,&quot;id&quot;:&quot;tasks/text-to-speech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/text-to-speech&quot;}]},{&quot;title&quot;:&quot;Generation&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Customize the generation strategy&quot;,&quot;id&quot;:&quot;generation_strategies&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/generation_strategies&quot;}]},{&quot;title&quot;:&quot;Prompting&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Image tasks with IDEFICS&quot;,&quot;id&quot;:&quot;tasks/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/idefics&quot;}]}]},{&quot;title&quot;:&quot;Developer guides&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Use fast tokenizers from 🤗 
Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;fast_tokenizers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/fast_tokenizers&quot;},{&quot;title&quot;:&quot;Run inference with multilingual models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;multilingual&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/multilingual&quot;},{&quot;title&quot;:&quot;Use model-specific APIs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;create_a_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/create_a_model&quot;},{&quot;title&quot;:&quot;Share a custom model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;custom_models&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/custom_models&quot;},{&quot;title&quot;:&quot;Templates for chat models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;chat_templating&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/chat_templating&quot;},{&quot;title&quot;:&quot;Run training on Amazon SageMaker&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;sagemaker&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/sagemaker&quot;},{&quot;title&quot;:&quot;Export to ONNX&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;serialization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/serialization&quot;},{&quot;title&quot;:&quot;Export to TFLite&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tflite&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tflite&quot;},{&quot;title&quot;:&quot;Export to TorchScript&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;torchscript&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/torchscript&quot;},{&quot;title&quot;:&quot;Benchmarks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;benchmarks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/benchmarks&quot;},{&quot;title&quot;:&quot;Notebooks with examples&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;notebooks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/notebooks&quot;},{&quot;title&quot;:&quot;Community resources&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;community&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/community&quot;},{&quot;title&quot;:&quot;Custom Tools and Prompts&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;custom_tools&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/custom_tools&quot;},{&quot;title&quot;:&quot;Troubleshoot&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;troubleshooting&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/troubleshooting&quot;}]},{&quot;title&quot;:&quot;Performance and scalability&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Overview&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;performance&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/performance&quot;},{&quot;title&quot;:&quot;Efficient training techniques&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Methods and tools for efficient training on a single GPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_gpu_one&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_gpu_one&quot;},{&quot;title&quot;:&quot;Multiple GPUs and parallelism&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_gpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_gpu_many&quot;},{&quot;title&quot;:&quot;Efficient 
training on CPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_cpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_cpu&quot;},{&quot;title&quot;:&quot;Distributed CPU training&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_cpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_cpu_many&quot;},{&quot;title&quot;:&quot;Training on TPUs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_tpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_tpu&quot;},{&quot;title&quot;:&quot;Training on TPU with TensorFlow&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_tpu_tf&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_tpu_tf&quot;},{&quot;title&quot;:&quot;Training on Specialized Hardware&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_special&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_special&quot;},{&quot;title&quot;:&quot;Custom hardware for training&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_hardware&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_hardware&quot;},{&quot;title&quot;:&quot;Hyperparameter Search using Trainer API&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;hpo_train&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/hpo_train&quot;}]},{&quot;title&quot;:&quot;Optimizing inference&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Inference on CPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_cpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_cpu&quot;},{&quot;title&quot;:&quot;Inference on one GPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_gpu_one&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_gpu_one&quot;},{&quot;title&quot;:&quot;Inference on many GPUs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_gpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_gpu_many&quot;},{&quot;title&quot;:&quot;Inference on Specialized Hardware&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_special&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_special&quot;}]},{&quot;title&quot;:&quot;Instantiating a big model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;big_models&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/big_models&quot;},{&quot;title&quot;:&quot;Troubleshooting&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;debugging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/debugging&quot;},{&quot;title&quot;:&quot;XLA Integration for TensorFlow Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tf_xla&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tf_xla&quot;},{&quot;title&quot;:&quot;Optimize inference using `torch.compile()`&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_torch_compile&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_torch_compile&quot;}]},{&quot;title&quot;:&quot;Contribute&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;How to contribute to transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;contributing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/contributing&quot;},{&quot;title&quot;:&quot;How to add a model to 🤗 
Transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_new_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_new_model&quot;},{&quot;title&quot;:&quot;How to convert a 🤗 Transformers model to TensorFlow?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_tensorflow_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_tensorflow_model&quot;},{&quot;title&quot;:&quot;How to add a pipeline to 🤗 Transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_new_pipeline&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_new_pipeline&quot;},{&quot;title&quot;:&quot;Testing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;testing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/testing&quot;},{&quot;title&quot;:&quot;Checks on a Pull Request&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pr_checks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pr_checks&quot;}]},{&quot;title&quot;:&quot;Conceptual guides&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Philosophy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;philosophy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/philosophy&quot;},{&quot;title&quot;:&quot;Glossary&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;glossary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/glossary&quot;},{&quot;title&quot;:&quot;What 🤗 Transformers can do&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;task_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/task_summary&quot;},{&quot;title&quot;:&quot;How 🤗 Transformers solve tasks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tasks_explained&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks_explained&quot;},{&quot;title&quot;:&quot;The Transformer model family&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_summary&quot;},{&quot;title&quot;:&quot;Summary of the tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tokenizer_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tokenizer_summary&quot;},{&quot;title&quot;:&quot;Attention mechanisms&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;attention&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/attention&quot;},{&quot;title&quot;:&quot;Padding and truncation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pad_truncation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pad_truncation&quot;},{&quot;title&quot;:&quot;BERTology&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;bertology&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/bertology&quot;},{&quot;title&quot;:&quot;Perplexity of fixed-length models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perplexity&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perplexity&quot;},{&quot;title&quot;:&quot;Pipelines for webserver inference&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pipeline_webserver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pipeline_webserver&quot;},{&quot;title&quot;:&quot;Model training anatomy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_memory_anatomy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_memory_anatomy&quot;}]},{&quot;title&quot;:&quot;API&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Main 
Classes&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Agents and Tools&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/agent&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/agent&quot;},{&quot;title&quot;:&quot;Auto Classes&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/auto&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/auto&quot;},{&quot;title&quot;:&quot;Callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/callback&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/callback&quot;},{&quot;title&quot;:&quot;Configuration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/configuration&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/configuration&quot;},{&quot;title&quot;:&quot;Data Collator&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/data_collator&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/data_collator&quot;},{&quot;title&quot;:&quot;Keras callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/keras_callbacks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/keras_callbacks&quot;},{&quot;title&quot;:&quot;Logging&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/logging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/logging&quot;},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/model&quot;},{&quot;title&quot;:&quot;Text Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/text_generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/text_generation&quot;},{&quot;title&quot;:&quot;ONNX&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/onnx&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/onnx&quot;},{&quot;title&quot;:&quot;Optimization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/optimizer_schedules&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/optimizer_schedules&quot;},{&quot;title&quot;:&quot;Model outputs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/output&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/output&quot;},{&quot;title&quot;:&quot;Pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/pipelines&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/pipelines&quot;},{&quot;title&quot;:&quot;Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/processors&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/processors&quot;},{&quot;title&quot;:&quot;Quantization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/quantization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/quantization&quot;},{&quot;title&quot;:&quot;Tokenizer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/tokenizer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/tokenizer&quot;},{&quot;title&quot;:&quot;Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/trainer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/trainer&quot;},{&quot;title&quot;:&quot;DeepSpeed 
Integration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/deepspeed&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/deepspeed&quot;},{&quot;title&quot;:&quot;Feature Extractor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/feature_extractor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/feature_extractor&quot;},{&quot;title&quot;:&quot;Image Processor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/image_processor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/image_processor&quot;}]},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Text models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALBERT&quot;,&quot;id&quot;:&quot;model_doc/albert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/albert&quot;},{&quot;title&quot;:&quot;BART&quot;,&quot;id&quot;:&quot;model_doc/bart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bart&quot;},{&quot;title&quot;:&quot;BARThez&quot;,&quot;id&quot;:&quot;model_doc/barthez&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/barthez&quot;},{&quot;title&quot;:&quot;BARTpho&quot;,&quot;id&quot;:&quot;model_doc/bartpho&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bartpho&quot;},{&quot;title&quot;:&quot;BERT&quot;,&quot;id&quot;:&quot;model_doc/bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert&quot;},{&quot;title&quot;:&quot;BertGeneration&quot;,&quot;id&quot;:&quot;model_doc/bert-generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-generation&quot;},{&quot;title&quot;:&quot;BertJapanese&quot;,&quot;id&quot;:&quot;model_doc/bert-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-japanese&quot;},{&quot;title&quot;:&quot;Bertweet&quot;,&quot;id&quot;:&quot;model_doc/bertweet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bertweet&quot;},{&quot;title&quot;:&quot;BigBird&quot;,&quot;id&quot;:&quot;model_doc/big_bird&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/big_bird&quot;},{&quot;title&quot;:&quot;BigBirdPegasus&quot;,&quot;id&quot;:&quot;model_doc/bigbird_pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bigbird_pegasus&quot;},{&quot;title&quot;:&quot;BioGpt&quot;,&quot;id&quot;:&quot;model_doc/biogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/biogpt&quot;},{&quot;title&quot;:&quot;Blenderbot&quot;,&quot;id&quot;:&quot;model_doc/blenderbot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot&quot;},{&quot;title&quot;:&quot;Blenderbot 
Small&quot;,&quot;id&quot;:&quot;model_doc/blenderbot-small&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot-small&quot;},{&quot;title&quot;:&quot;BLOOM&quot;,&quot;id&quot;:&quot;model_doc/bloom&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bloom&quot;},{&quot;title&quot;:&quot;BORT&quot;,&quot;id&quot;:&quot;model_doc/bort&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bort&quot;},{&quot;title&quot;:&quot;ByT5&quot;,&quot;id&quot;:&quot;model_doc/byt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/byt5&quot;},{&quot;title&quot;:&quot;CamemBERT&quot;,&quot;id&quot;:&quot;model_doc/camembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/camembert&quot;},{&quot;title&quot;:&quot;CANINE&quot;,&quot;id&quot;:&quot;model_doc/canine&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/canine&quot;},{&quot;title&quot;:&quot;CodeGen&quot;,&quot;id&quot;:&quot;model_doc/codegen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/codegen&quot;},{&quot;title&quot;:&quot;CodeLlama&quot;,&quot;id&quot;:&quot;model_doc/code_llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/code_llama&quot;},{&quot;title&quot;:&quot;ConvBERT&quot;,&quot;id&quot;:&quot;model_doc/convbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convbert&quot;},{&quot;title&quot;:&quot;CPM&quot;,&quot;id&quot;:&quot;model_doc/cpm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpm&quot;},{&quot;title&quot;:&quot;CPMANT&quot;,&quot;id&quot;:&quot;model_doc/cpmant&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpmant&quot;},{&quot;title&quot;:&quot;CTRL&quot;,&quot;id&quot;:&quot;model_doc/ctrl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ctrl&quot;},{&quot;title&quot;:&quot;DeBERTa&quot;,&quot;id&quot;:&quot;model_doc/deberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta&quot;},{&quot;title&quot;:&quot;DeBERTa-v2&quot;,&quot;id&quot;:&quot;model_doc/deberta-v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta-v2&quot;},{&quot;title&quot;:&quot;DialoGPT&quot;,&quot;id&quot;:&quot;model_doc/dialogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dialogpt&quot;},{&quot;title&quot;:&quot;DistilBERT&quot;,&quot;id&quot;:&quot;model_doc/distilbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/distilbert&quot;},{&quot;title&quot;:&quot;DPR&quot;,&quot;id&quot;:&quot;model_doc/dpr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpr&quot;},{&quot;title&quot;:&quot;ELECTRA&quot;,&quot;id&quot;:&quot;model_doc/electra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/electra&quot;},{&quot;title&quot;:&quot;Encoder Decoder 
Models&quot;,&quot;id&quot;:&quot;model_doc/encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encoder-decoder&quot;},{&quot;title&quot;:&quot;ERNIE&quot;,&quot;id&quot;:&quot;model_doc/ernie&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie&quot;},{&quot;title&quot;:&quot;ErnieM&quot;,&quot;id&quot;:&quot;model_doc/ernie_m&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie_m&quot;},{&quot;title&quot;:&quot;ESM&quot;,&quot;id&quot;:&quot;model_doc/esm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/esm&quot;},{&quot;title&quot;:&quot;Falcon&quot;,&quot;id&quot;:&quot;model_doc/falcon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/falcon&quot;},{&quot;title&quot;:&quot;FLAN-T5&quot;,&quot;id&quot;:&quot;model_doc/flan-t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-t5&quot;},{&quot;title&quot;:&quot;FLAN-UL2&quot;,&quot;id&quot;:&quot;model_doc/flan-ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-ul2&quot;},{&quot;title&quot;:&quot;FlauBERT&quot;,&quot;id&quot;:&quot;model_doc/flaubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flaubert&quot;},{&quot;title&quot;:&quot;FNet&quot;,&quot;id&quot;:&quot;model_doc/fnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fnet&quot;},{&quot;title&quot;:&quot;FSMT&quot;,&quot;id&quot;:&quot;model_doc/fsmt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fsmt&quot;},{&quot;title&quot;:&quot;Funnel Transformer&quot;,&quot;id&quot;:&quot;model_doc/funnel&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/funnel&quot;},{&quot;title&quot;:&quot;GPT&quot;,&quot;id&quot;:&quot;model_doc/openai-gpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/openai-gpt&quot;},{&quot;title&quot;:&quot;GPT Neo&quot;,&quot;id&quot;:&quot;model_doc/gpt_neo&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neo&quot;},{&quot;title&quot;:&quot;GPT NeoX&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox&quot;},{&quot;title&quot;:&quot;GPT NeoX Japanese&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox_japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese&quot;},{&quot;title&quot;:&quot;GPT-J&quot;,&quot;id&quot;:&quot;model_doc/gptj&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptj&quot;},{&quot;title&quot;:&quot;GPT2&quot;,&quot;id&quot;:&quot;model_doc/gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt2&quot;},{&quot;title&quot;:&quot;GPTBigCode&quot;,&quot;id&quot;:&quot;model_doc/gpt_bigcode&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode&quot;},{&quot;title&quot;:&quot;GPTSAN 
Japanese&quot;,&quot;id&quot;:&quot;model_doc/gptsan-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese&quot;},{&quot;title&quot;:&quot;GPTSw3&quot;,&quot;id&quot;:&quot;model_doc/gpt-sw3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt-sw3&quot;},{&quot;title&quot;:&quot;HerBERT&quot;,&quot;id&quot;:&quot;model_doc/herbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/herbert&quot;},{&quot;title&quot;:&quot;I-BERT&quot;,&quot;id&quot;:&quot;model_doc/ibert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ibert&quot;},{&quot;title&quot;:&quot;Jukebox&quot;,&quot;id&quot;:&quot;model_doc/jukebox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/jukebox&quot;},{&quot;title&quot;:&quot;LED&quot;,&quot;id&quot;:&quot;model_doc/led&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/led&quot;},{&quot;title&quot;:&quot;LLaMA&quot;,&quot;id&quot;:&quot;model_doc/llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama&quot;},{&quot;title&quot;:&quot;Llama2&quot;,&quot;id&quot;:&quot;model_doc/llama2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama2&quot;},{&quot;title&quot;:&quot;Longformer&quot;,&quot;id&quot;:&quot;model_doc/longformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longformer&quot;},{&quot;title&quot;:&quot;LongT5&quot;,&quot;id&quot;:&quot;model_doc/longt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longt5&quot;},{&quot;title&quot;:&quot;LUKE&quot;,&quot;id&quot;:&quot;model_doc/luke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/luke&quot;},{&quot;title&quot;:&quot;M2M100&quot;,&quot;id&quot;:&quot;model_doc/m2m_100&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/m2m_100&quot;},{&quot;title&quot;:&quot;MarianMT&quot;,&quot;id&quot;:&quot;model_doc/marian&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/marian&quot;},{&quot;title&quot;:&quot;MarkupLM&quot;,&quot;id&quot;:&quot;model_doc/markuplm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/markuplm&quot;},{&quot;title&quot;:&quot;MBart and 
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot;PLBart&quot;,&quot;id&quot;
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/
hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/funnel"><!-- HTML_TAG_START -->Funnel Transformer<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/openai-gpt"><!-- HTML_TAG_START -->GPT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gpt_neo"><!-- HTML_TAG_START -->GPT Neo<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gpt_neox"><!-- HTML_TAG_START -->GPT NeoX<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese"><!-- HTML_TAG_START -->GPT NeoX Japanese<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gptj"><!-- HTML_TAG_START -->GPT-J<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gpt2"><!-- HTML_TAG_START -->GPT2<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode"><!-- HTML_TAG_START -->GPTBigCode<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese"><!-- HTML_TAG_START -->GPTSAN Japanese<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gpt-sw3"><!-- HTML_TAG_START -->GPTSw3<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/herbert"><!-- HTML_TAG_START -->HerBERT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/ibert"><!-- HTML_TAG_START -->I-BERT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/jukebox"><!-- HTML_TAG_START -->Jukebox<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" 
class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/led"><!-- HTML_TAG_START -->LED<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/llama"><!-- HTML_TAG_START -->LLaMA<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/llama2"><!-- HTML_TAG_START -->Llama2<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/longformer"><!-- HTML_TAG_START -->Longformer<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/longt5"><!-- HTML_TAG_START -->LongT5<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/luke"><!-- HTML_TAG_START -->LUKE<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/m2m_100"><!-- HTML_TAG_START -->M2M100<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/marian"><!-- HTML_TAG_START -->MarianMT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/markuplm"><!-- HTML_TAG_START -->MarkupLM<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mbart"><!-- HTML_TAG_START -->MBart and MBart-50<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mega"><!-- HTML_TAG_START -->MEGA<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/megatron-bert"><!-- HTML_TAG_START -->MegatronBERT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2"><!-- HTML_TAG_START -->MegatronGPT2<!-- HTML_TAG_END --> 
</a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mistral"><!-- HTML_TAG_START -->Mistral<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mluke"><!-- HTML_TAG_START -->mLUKE<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mobilebert"><!-- HTML_TAG_START -->MobileBERT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mpnet"><!-- HTML_TAG_START -->MPNet<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mpt"><!-- HTML_TAG_START -->MPT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mra"><!-- HTML_TAG_START -->MRA<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mt5"><!-- HTML_TAG_START -->MT5<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mvp"><!-- HTML_TAG_START -->MVP<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/nezha"><!-- HTML_TAG_START -->NEZHA<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/nllb"><!-- HTML_TAG_START -->NLLB<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/nllb-moe"><!-- HTML_TAG_START -->NLLB-MoE<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/nystromformer"><!-- HTML_TAG_START -->Nyströmformer<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/open-llama"><!-- HTML_TAG_START -->Open-Llama<!-- HTML_TAG_END --> 
</a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/opt"><!-- HTML_TAG_START -->OPT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/pegasus"><!-- HTML_TAG_START -->Pegasus<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/pegasus_x"><!-- HTML_TAG_START -->PEGASUS-X<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/persimmon"><!-- HTML_TAG_START -->Persimmon<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/phobert"><!-- HTML_TAG_START -->PhoBERT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/plbart"><!-- HTML_TAG_START -->PLBart<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/prophetnet"><!-- HTML_TAG_START -->ProphetNet<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/qdqbert"><!-- HTML_TAG_START -->QDQBert<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/rag"><!-- HTML_TAG_START -->RAG<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/realm"><!-- HTML_TAG_START -->REALM<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/reformer"><!-- HTML_TAG_START -->Reformer<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/rembert"><!-- HTML_TAG_START -->RemBERT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/retribert"><!-- HTML_TAG_START 
-->RetriBERT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/roberta"><!-- HTML_TAG_START -->RoBERTa<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm"><!-- HTML_TAG_START -->RoBERTa-PreLayerNorm<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/roc_bert"><!-- HTML_TAG_START -->RoCBert<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/roformer"><!-- HTML_TAG_START -->RoFormer<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/rwkv"><!-- HTML_TAG_START -->RWKV<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/splinter"><!-- HTML_TAG_START -->Splinter<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/squeezebert"><!-- HTML_TAG_START -->SqueezeBERT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/switch_transformers"><!-- HTML_TAG_START -->SwitchTransformers<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/t5"><!-- HTML_TAG_START -->T5<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/t5v1.1"><!-- HTML_TAG_START -->T5v1.1<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/tapex"><!-- HTML_TAG_START -->TAPEX<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/transfo-xl"><!-- HTML_TAG_START -->Transformer XL<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" 
href="/docs/transformers/v4.34.0/en/model_doc/ul2"><!-- HTML_TAG_START -->UL2<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/umt5"><!-- HTML_TAG_START -->UMT5<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xmod"><!-- HTML_TAG_START -->X-MOD<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="rounded-xl bg-gradient-to-br from-black to-gray-900 py-1 pr-2 pl-2 text-white first:mt-1 last:mb-4 dark:from-gray-800 dark:to-gray-900 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xglm"><!-- HTML_TAG_START -->XGLM<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm"><!-- HTML_TAG_START -->XLM<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet"><!-- HTML_TAG_START -->XLM-ProphetNet<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta"><!-- HTML_TAG_START -->XLM-RoBERTa<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl"><!-- HTML_TAG_START -->XLM-RoBERTa-XL<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm-v"><!-- HTML_TAG_START -->XLM-V<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlnet"><!-- HTML_TAG_START -->XLNet<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/yoso"><!-- HTML_TAG_START -->YOSO<!-- HTML_TAG_END --> </a> </div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Vision models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span 
class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Audio models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Multimodal models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Reinforcement learning models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Time series models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Graph models<!-- HTML_TAG_END --></span> </span></span> </div></div> </div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Internal Helpers<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/modeling_utils"><!-- HTML_TAG_START -->Custom Layers and Utilities<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/pipelines_utils"><!-- HTML_TAG_START -->Utilities for pipelines<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/tokenization_utils"><!-- HTML_TAG_START -->Utilities for Tokenizers<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/trainer_utils"><!-- HTML_TAG_START -->Utilities for Trainer<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 
first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/generation_utils"><!-- HTML_TAG_START -->Utilities for Generation<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/image_processing_utils"><!-- HTML_TAG_START -->Utilities for Image Processors<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/audio_utils"><!-- HTML_TAG_START -->Utilities for Audio processing<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/file_utils"><!-- HTML_TAG_START -->General Utilities<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/time_series_utils"><!-- HTML_TAG_START -->Utilities for Time Series<!-- HTML_TAG_END --> </a> </div> </div></nav></div></div></div> <div class="z-1 min-w-0 flex-1"> <div class="px-6 pt-6 md:px-12 md:pt-16 md:pb-16"><div class="max-w-4xl mx-auto mb-10"><div class="relative overflow-hidden rounded-xl bg-gradient-to-br from-orange-300/10 py-5 px-4 ring-1 ring-orange-100/70 md:px-6 md:py-8"><img alt="Hugging Face's logo" class="absolute -right-6 -bottom-6 w-28 -rotate-45 md:hidden" src="/front/assets/huggingface_logo-noborder.svg"> <div class="mb-2 text-2xl font-bold dark:text-gray-200 md:mb-0">Join the Hugging Face community</div> <p class="mb-4 text-lg text-gray-400 dark:text-gray-300 md:mb-8">and get access to the augmented documentation experience </p> <div class="mb-8 hidden space-y-4 md:block xl:flex xl:space-y-0 xl:space-x-6"><div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-indigo-100 to-indigo-100/20 dark:to-indigo-100"><svg class="text-indigo-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Collaborate on models, datasets and Spaces </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-orange-100 to-orange-100/20 dark:to-orange-50"><svg xmlns="http://www.w3.org/2000/svg" 
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" class="text-xl text-yellow-400" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M11 15H6l7-14v8h5l-7 14v-8z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Faster examples with accelerated inference </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-gray-500/10 to-gray-500/5"><svg class="text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M14.9804 3C14.9217 3.0002 14.8631 3.00555 14.8054 3.016C11.622 3.58252 8.76073 5.30669 6.77248 7.85653C4.78422 10.4064 3.80955 13.6016 4.03612 16.8271C4.26268 20.0525 5.67447 23.0801 7.99967 25.327C10.3249 27.5738 13.3991 28.8811 16.6304 28.997C16.7944 29.003 16.9584 28.997 17.1204 28.997C19.2193 28.9984 21.2877 28.4943 23.1507 27.5274C25.0137 26.5605 26.6164 25.1592 27.8234 23.442C27.9212 23.294 27.9783 23.1229 27.9889 22.9458C27.9995 22.7687 27.9633 22.592 27.884 22.4333C27.8046 22.2747 27.6848 22.1397 27.5367 22.0421C27.3887 21.9444 27.2175 21.8875 27.0404 21.877C25.0426 21.7017 23.112 21.0693 21.3976 20.0288C19.6832 18.9884 18.231 17.5676 17.1533 15.8764C16.0756 14.1852 15.4011 12.2688 15.1822 10.2754C14.9632 8.28193 15.2055 6.26484 15.8904 4.38C15.9486 4.22913 15.97 4.06652 15.9527 3.90572C15.9354 3.74492 15.8799 3.59059 15.7909 3.45557C15.7019 3.32055 15.5819 3.20877 15.4409 3.12952C15.2999 3.05028 15.142 3.00587 14.9804 3Z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Switch between documentation themes </div></div></div> <div class="flex items-center space-x-2.5"><a href="/join"><button class="rounded-lg bg-white bg-gradient-to-br from-gray-100/20 to-gray-200/60 py-1.5 px-5 font-semibold text-gray-700 shadow-sm ring-1 ring-gray-300/60 hover:to-gray-100/70 hover:ring-gray-300/30 active:shadow-inner">Sign Up</button></a> <p class="text-gray-500 dark:text-gray-300">to get started</p></div></div></div> <div class="prose-doc prose relative mx-auto max-w-4xl break-words"><!-- HTML_TAG_START --> <link href="/docs/transformers/v4.34.0/en/_app/immutable/assets/0.e3b0c442.css" rel="modulepreload"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/entry/start.c2db227a.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/scheduler.9bc65507.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/singletons.e3057404.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/index.3b203c72.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/paths.e7de6301.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/entry/app.879d9b87.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/index.78c82d43.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/nodes/0.242aaaff.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/each.e59479a4.js"> <link rel="modulepreload" 
href="/docs/transformers/v4.34.0/en/_app/immutable/nodes/276.1bba5537.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/Tip.87d55b76.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/Docstring.4e7352e2.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/globals.7f7f1b26.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/IconCopyLink.bedaa44d.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/CodeBlock.73e038be.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/ExampleCodeBlock.872b014d.js"><!-- HEAD_svelte-1phssyn_START --><meta name="hf:doc:metadata" content="{&quot;local&quot;:&quot;xglm&quot;,&quot;sections&quot;:[{&quot;local&quot;:&quot;overview&quot;,&quot;title&quot;:&quot;Overview&quot;},{&quot;local&quot;:&quot;documentation-resources&quot;,&quot;title&quot;:&quot;Documentation resources&quot;},{&quot;local&quot;:&quot;transformers.XGLMConfig&quot;,&quot;title&quot;:&quot;XGLMConfig&quot;},{&quot;local&quot;:&quot;transformers.XGLMTokenizer&quot;,&quot;title&quot;:&quot;XGLMTokenizer&quot;},{&quot;local&quot;:&quot;transformers.XGLMTokenizerFast&quot;,&quot;title&quot;:&quot;XGLMTokenizerFast&quot;},{&quot;local&quot;:&quot;transformers.XGLMModel&quot;,&quot;title&quot;:&quot;XGLMModel&quot;},{&quot;local&quot;:&quot;transformers.XGLMForCausalLM&quot;,&quot;title&quot;:&quot;XGLMForCausalLM&quot;},{&quot;local&quot;:&quot;transformers.TFXGLMModel&quot;,&quot;title&quot;:&quot;TFXGLMModel&quot;},{&quot;local&quot;:&quot;transformers.TFXGLMForCausalLM&quot;,&quot;title&quot;:&quot;TFXGLMForCausalLM&quot;},{&quot;local&quot;:&quot;transformers.FlaxXGLMModel&quot;,&quot;title&quot;:&quot;FlaxXGLMModel&quot;},{&quot;local&quot;:&quot;transformers.FlaxXGLMForCausalLM&quot;,&quot;title&quot;:&quot;FlaxXGLMForCausalLM&quot;}],&quot;title&quot;:&quot;XGLM&quot;}"><!-- HEAD_svelte-1phssyn_END --> <p></p> <h1 class="relative group"><a id="xglm" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#xglm"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-11khbq3">XGLM</span></h1> <h2 class="relative group"><a id="overview" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#overview"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 
256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1jsw1pg">Overview</span></h2> <p data-svelte-h="svelte-16pkwmo">The XGLM model was proposed in <a href="https://arxiv.org/abs/2112.10668" rel="nofollow">Few-shot Learning with Multilingual Language Models</a> by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O’Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li.</p> <p data-svelte-h="svelte-vfdo9a">The abstract from the paper is the following:</p> <p data-svelte-h="svelte-21oekg"><em>Large-scale autoregressive language models such as GPT-3 are few-shot learners that can perform a wide range of language tasks without fine-tuning. While these models are known to be able to jointly represent many different languages, their training data is dominated by English, potentially limiting their cross-lingual generalization. In this work, we train multilingual autoregressive language models on a balanced corpus covering a diverse set of languages, and study their few- and zero-shot learning capabilities in a wide range of tasks. Our largest model with 7.5 billion parameters sets new state of the art in few-shot learning in more than 20 representative languages, outperforming GPT-3 of comparable size in multilingual commonsense reasoning (with +7.4% absolute accuracy improvement in 0-shot settings and +9.4% in 4-shot settings) and natural language inference (+5.4% in each of 0-shot and 4-shot settings). On the FLORES-101 machine translation benchmark, our model outperforms GPT-3 on 171 out of 182 translation directions with 32 training examples, while surpassing the official supervised baseline in 45 directions. We present a detailed analysis of where the model succeeds and fails, showing in particular that it enables cross-lingual in-context learning on some tasks, while there is still room for improvement on surface form robustness and adaptation to tasks that do not have a natural cloze form. Finally, we evaluate our models in social value tasks such as hate speech detection in five languages and find it has limitations similar to comparable sized GPT-3 models.</em></p> <p data-svelte-h="svelte-1c16k7b">This model was contributed by <a href="https://huggingface.co/valhalla" rel="nofollow">Suraj</a>. 
## Documentation resources

- [Causal language modeling task guide](../tasks/language_modeling)
fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">XGLMConfig</span></span></h3><!-- HTML_TAG_END --> <a id="transformers.XGLMConfig" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XGLMConfig"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xglm/configuration_xglm.py#L29" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">vocab_size<span class="opacity-60"> = 256008</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">max_position_embeddings<span class="opacity-60"> = 2048</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">d_model<span class="opacity-60"> = 1024</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">ffn_dim<span class="opacity-60"> = 4096</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">num_layers<span class="opacity-60"> = 24</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_heads<span class="opacity-60"> = 16</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">activation_function<span class="opacity-60"> = 'gelu'</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">dropout<span class="opacity-60"> = 0.1</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_dropout<span class="opacity-60"> = 0.1</span></span> 
</span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">activation_dropout<span class="opacity-60"> = 0.0</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">layerdrop<span class="opacity-60"> = 0.0</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">init_std<span class="opacity-60"> = 0.02</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">scale_embedding<span class="opacity-60"> = True</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">use_cache<span class="opacity-60"> = True</span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_start_token_id<span class="opacity-60"> = 2</span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">pad_token_id<span class="opacity-60"> = 1</span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">bos_token_id<span class="opacity-60"> = 0</span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">eos_token_id<span class="opacity-60"> = 2</span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMConfig.vocab_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMConfig.vocab_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>vocab_size</strong> (<code>int</code>, <em>optional</em>, defaults to 256008) — Vocabulary size of the XGLM model. 
Defines the number of different tokens that can be represented by the <code>inputs_ids</code> passed when calling <a href="/docs/transformers/v4.34.0/en/model_doc/xglm#transformers.XGLMModel">XGLMModel</a> or <a href="/docs/transformers/v4.34.0/en/model_doc/xglm#transformers.FlaxXGLMModel">FlaxXGLMModel</a>.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMConfig.max_position_embeddings" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMConfig.max_position_embeddings"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>max_position_embeddings</strong> (<code>int</code>, <em>optional</em>, defaults to 2048) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMConfig.d_model" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMConfig.d_model"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>d_model</strong> (<code>int</code>, <em>optional</em>, defaults to 1024) — Dimension of the layers and the pooler layer.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMConfig.ffn_dim" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMConfig.ffn_dim"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" 
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>ffn_dim</strong> (<code>int</code>, <em>optional</em>, defaults to 4096) — Dimension of the “intermediate” (often named feed-forward) layer in decoder.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMConfig.num_layers" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMConfig.num_layers"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>num_layers</strong> (<code>int</code>, <em>optional</em>, defaults to 24) — Number of hidden layers Transformer decoder.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMConfig.attention_heads" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMConfig.attention_heads"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>attention_heads</strong> (<code>int</code>, <em>optional</em>, defaults to 16) — Number of attention heads for each 
attention layer in the Transformer decoder.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMConfig.activation_function" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMConfig.activation_function"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>activation_function</strong> (<code>str</code> or <code>function</code>, <em>optional</em>, defaults to <code>"gelu"</code>) — The non-linear activation function (function or string) in the encoder and pooler. If string, <code>"gelu"</code>, <code>"relu"</code>, <code>"silu"</code> and <code>"gelu_new"</code> are supported.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMConfig.dropout" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMConfig.dropout"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>dropout</strong> (<code>float</code>, <em>optional</em>, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, dencoder, and pooler.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMConfig.attention_dropout" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMConfig.attention_dropout"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid 
meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>attention_dropout</strong> (<code>float</code>, <em>optional</em>, defaults to 0.1) — The dropout ratio for the attention probabilities.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMConfig.activation_dropout" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMConfig.activation_dropout"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>activation_dropout</strong> (<code>float</code>, <em>optional</em>, defaults to 0.0) — The dropout ratio for activations inside the fully connected layer.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMConfig.layerdrop" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMConfig.layerdrop"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>layerdrop</strong> (<code>float</code>, <em>optional</em>, defaults to 0.0) — The LayerDrop probability for the encoder. 
See the [LayerDrop paper](see <a href="https://arxiv.org/abs/1909.11556" rel="nofollow">https://arxiv.org/abs/1909.11556</a>) for more details.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMConfig.init_std" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMConfig.init_std"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>init_std</strong> (<code>float</code>, <em>optional</em>, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMConfig.scale_embedding" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMConfig.scale_embedding"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>scale_embedding</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Scale embeddings by diving by sqrt(d_model).<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMConfig.use_cache" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMConfig.use_cache"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 
1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>use_cache</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether or not the model should return the last key/values attentions (not used by all models).<!-- HTML_TAG_END --> </span></span> </li></ul> </div></div> <p data-svelte-h="svelte-n31dn5">This is the configuration class to store the configuration of a <a href="/docs/transformers/v4.34.0/en/model_doc/xglm#transformers.XGLMModel">XGLMModel</a>. It is used to instantiate an XGLM model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the XGLM <a href="https://huggingface.co/facebook/xglm-564M" rel="nofollow">facebook/xglm-564M</a> architecture.</p> <p data-svelte-h="svelte-10kqkkl">Configuration objects inherit from <a href="/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig">PretrainedConfig</a> and can be used to control the model outputs. Read the documentation from <a href="/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig">PretrainedConfig</a> for more information.</p> <div class="relative group rounded-md"><a id="transformers.XGLMConfig.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMConfig.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div 
class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> XGLMModel, XGLMConfig <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># Initializing a XGLM facebook/xglm-564M style configuration</span> <span class="hljs-meta">&gt;&gt;&gt; </span>configuration = XGLMConfig() <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># Initializing a model from the facebook/xglm-564M style configuration</span> <span class="hljs-meta">&gt;&gt;&gt; </span>model = XGLMModel(configuration) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># Accessing the model configuration</span> <span class="hljs-meta">&gt;&gt;&gt; </span>configuration = model.config<!-- HTML_TAG_END --></pre></div></div></div> <h2 class="relative group"><a id="transformers.XGLMTokenizer" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMTokenizer"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-eqtub0">XGLMTokenizer</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XGLMTokenizer"><!-- HTML_TAG_START --><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 
## XGLMTokenizer

### class transformers.XGLMTokenizer

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xglm/tokenization_xglm.py#L43)

( vocab_file, bos_token = '&lt;s&gt;', eos_token = '&lt;/s&gt;', sep_token = '&lt;/s&gt;', cls_token = '&lt;s&gt;', unk_token = '&lt;unk&gt;', pad_token = '&lt;pad&gt;', sp_model_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None, **kwargs )
data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMTokenizer.vocab_file" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMTokenizer.vocab_file"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>vocab_file</strong> (<code>str</code>) — Path to the vocabulary file.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMTokenizer.bos_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMTokenizer.bos_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>bos_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;s&gt;"</code>) — The beginning of sequence token that was used during pretraining. Can be used a sequence classifier token.<p></p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"> <p>When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. 
The token used is the <code>cls_token</code>.</p> </div><!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMTokenizer.eos_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMTokenizer.eos_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>eos_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;/s&gt;"</code>) — The end of sequence token.<p></p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"> <p>When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the <code>sep_token</code>.</p> </div><!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMTokenizer.sep_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMTokenizer.sep_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>sep_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;/s&gt;"</code>) — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. 
It is also used as the last token of a sequence built with special tokens.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMTokenizer.cls_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMTokenizer.cls_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>cls_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;s&gt;"</code>) — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMTokenizer.unk_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMTokenizer.unk_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>unk_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;unk&gt;"</code>) — The unknown token. 
A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMTokenizer.pad_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMTokenizer.pad_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>pad_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;pad&gt;"</code>) — The token used for padding, for example when batching sequences of different lengths.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMTokenizer.mask_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMTokenizer.mask_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>mask_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;mask&gt;"</code>) — The token used for masking values. This is the token used when training this model with masked language modeling. 
This is the token which the model will try to predict.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMTokenizer.additional_special_tokens" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMTokenizer.additional_special_tokens"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>additional_special_tokens</strong> (<code>List[str]</code>, <em>optional</em>, defaults to <code>["&lt;s&gt;NOTUSED", "&lt;/s&gt;NOTUSED"]</code>) — Additional special tokens used by the tokenizer.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMTokenizer.sp_model_kwargs" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMTokenizer.sp_model_kwargs"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>sp_model_kwargs</strong> (<code>dict</code>, <em>optional</em>) — Will be passed to the <code>SentencePieceProcessor.__init__()</code> method. The <a href="https://github.com/google/sentencepiece/tree/master/python" rel="nofollow">Python wrapper for SentencePiece</a> can be used, among other things, to set:<p></p> <ul> <li> <p><code>enable_sampling</code>: Enable subword regularization.</p> </li> <li> <p><code>nbest_size</code>: Sampling parameters for unigram. 
Invalid for BPE-Dropout.</p> <ul> <li><code>nbest_size = {0,1}</code>: No sampling is performed.</li> <li><code>nbest_size &gt; 1</code>: samples from the nbest_size results.</li> <li><code>nbest_size &lt; 0</code>: assuming that nbest_size is infinite and samples from the all hypothesis (lattice) using forward-filtering-and-backward-sampling algorithm.</li> </ul> </li> <li> <p><code>alpha</code>: Smoothing parameter for unigram sampling, and dropout probability of merge operations for BPE-dropout.</p> </li> </ul><!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMTokenizer.sp_model" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMTokenizer.sp_model"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>sp_model</strong> (<code>SentencePieceProcessor</code>) — The <em>SentencePiece</em> processor that is used for every conversion (string, tokens and IDs).<!-- HTML_TAG_END --> </span></span> </li></ul> </div></div> <p data-svelte-h="svelte-z2kpmr">Adapted from <a href="/docs/transformers/v4.34.0/en/model_doc/roberta#transformers.RobertaTokenizer">RobertaTokenizer</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetTokenizer">XLNetTokenizer</a>. Based on <a href="https://github.com/google/sentencepiece" rel="nofollow">SentencePiece</a>.</p> <p data-svelte-h="svelte-1b0fouy">This tokenizer inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer">PreTrainedTokenizer</a> which contains most of the main methods. 
Users should refer to this superclass for more information regarding those methods.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XGLMTokenizer.build_inputs_with_special_tokens"><!-- HTML_TAG_START --><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>build_inputs_with_special_tokens</span></h4><!-- HTML_TAG_END --> <a id="transformers.XGLMTokenizer.build_inputs_with_special_tokens" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XGLMTokenizer.build_inputs_with_special_tokens"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xglm/tokenization_xglm.py#L189" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_0<span class="opacity-60">: typing.List[int]</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black 
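For orientation, a minimal usage sketch is shown below. It assumes the `facebook/xglm-564M` checkpoint and illustrates the `sp_model_kwargs` subword-regularization options listed above; the exact tokens produced depend on the checkpoint's vocabulary.

```python
>>> from transformers import XGLMTokenizer

>>> tokenizer = XGLMTokenizer.from_pretrained("facebook/xglm-564M")

>>> # Encode a sentence; special tokens are added automatically
>>> encoded = tokenizer("Hello world", return_tensors="pt")
>>> encoded["input_ids"].shape  # (batch_size, sequence_length)

>>> # Subword regularization can be enabled through sp_model_kwargs (illustrative values)
>>> sampling_tokenizer = XGLMTokenizer.from_pretrained(
...     "facebook/xglm-564M",
...     sp_model_kwargs={"enable_sampling": True, "nbest_size": -1, "alpha": 0.1},
... )
>>> sampling_tokenizer.tokenize("Hello world")  # tokenization may differ between calls
```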
hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_1<span class="opacity-60">: typing.Optional[typing.List[int]] = None</span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><!-- HTML_TAG_START --><span><code>List[int]</code></span><!-- HTML_TAG_END --></span></p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMTokenizer.build_inputs_with_special_tokens.token_ids_0" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMTokenizer.build_inputs_with_special_tokens.token_ids_0"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>token_ids_0</strong> (<code>List[int]</code>) — List of IDs to which the special tokens will be added.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMTokenizer.build_inputs_with_special_tokens.token_ids_1" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMTokenizer.build_inputs_with_special_tokens.token_ids_1"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>token_ids_1</strong> (<code>List[int]</code>, <em>optional</em>) — Optional second list of IDs for sequence pairs.<!-- HTML_TAG_END --> </span></span> </li></ul> <div id="transformers.XGLMTokenizer.build_inputs_with_special_tokens.returns" class="flex 
items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <!-- HTML_TAG_START --> <p><code>List[int]</code></p> <!-- HTML_TAG_END --> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"><!-- HTML_TAG_START --> </p><p>List of <a href="../glossary#input-ids">input IDs</a> with the appropriate special tokens.</p> <!-- HTML_TAG_END --><p></p> </div></div> <p data-svelte-h="svelte-1ooxl9e">Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. An XLM-RoBERTa sequence has the following format:</p> <ul data-svelte-h="svelte-rq8uot"><li>single sequence: <code>&lt;s&gt; X &lt;/s&gt;</code></li> <li>pair of sequences: <code>&lt;s&gt; A &lt;/s&gt;&lt;/s&gt; B &lt;/s&gt;</code></li></ul></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XGLMTokenizer.get_special_tokens_mask"><!-- HTML_TAG_START --><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>get_special_tokens_mask</span></h4><!-- HTML_TAG_END --> <a id="transformers.XGLMTokenizer.get_special_tokens_mask" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XGLMTokenizer.get_special_tokens_mask"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 
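As a quick way to inspect the formats above, the method can be called directly on encoded IDs. A minimal sketch (the `facebook/xglm-564M` checkpoint is assumed; the exact IDs depend on the checkpoint's vocabulary):

```python
>>> from transformers import XGLMTokenizer

>>> tokenizer = XGLMTokenizer.from_pretrained("facebook/xglm-564M")

>>> # IDs without special tokens
>>> ids_a = tokenizer.encode("Hello world", add_special_tokens=False)
>>> ids_b = tokenizer.encode("How are you?", add_special_tokens=False)

>>> # Single sequence and sequence pair with special tokens added
>>> single = tokenizer.build_inputs_with_special_tokens(ids_a)
>>> pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)

>>> # Inspect which tokens were inserted
>>> tokenizer.convert_ids_to_tokens(single)
```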
#### get_special_tokens_mask

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xglm/tokenization_xglm.py#L214)

( token_ids_0: typing.List[int], token_ids_1: typing.Optional[typing.List[int]] = None, already_has_special_tokens: bool = False ) → `List[int]`

Parameters:

- **token_ids_0** (`List[int]`) — List of IDs.
- **token_ids_1** (`List[int]`, *optional*) — Optional second list of IDs for sequence pairs.
- **already_has_special_tokens** (`bool`, *optional*, defaults to `False`) — Whether or not the token list is already formatted with special tokens for the model.

Returns: `List[int]` — A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.

Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer `prepare_for_model` method.
This method is called when adding special tokens using the tokenizer <code>prepare_for_model</code> method.</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XGLMTokenizer.create_token_type_ids_from_sequences"><!-- HTML_TAG_START --><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>create_token_type_ids_from_sequences</span></h4><!-- HTML_TAG_END --> <a id="transformers.XGLMTokenizer.create_token_type_ids_from_sequences" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XGLMTokenizer.create_token_type_ids_from_sequences"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xglm/tokenization_xglm.py#L242" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_0<span class="opacity-60">: typing.List[int]</span></span> </span><span class="comma 
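As a quick illustration of the mask format, the sketch below encodes a sentence with special tokens and asks which positions hold them. The `facebook/xglm-564M` checkpoint is used here only as an assumed example; any XGLM checkpoint should behave the same way.

```python
from transformers import XGLMTokenizer

# Assumed example checkpoint
tokenizer = XGLMTokenizer.from_pretrained("facebook/xglm-564M")

# Encode with special tokens, then mark which positions contain them
ids = tokenizer.encode("Hello world", add_special_tokens=True)
mask = tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True)

print(mask)  # 1 marks a special token, 0 marks a regular sequence token
```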
cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_1<span class="opacity-60">: typing.Optional[typing.List[int]] = None</span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><!-- HTML_TAG_START --><span><code>List[int]</code></span><!-- HTML_TAG_END --></span></p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMTokenizer.create_token_type_ids_from_sequences.token_ids_0" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMTokenizer.create_token_type_ids_from_sequences.token_ids_0"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>token_ids_0</strong> (<code>List[int]</code>) — List of IDs.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMTokenizer.create_token_type_ids_from_sequences.token_ids_1" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMTokenizer.create_token_type_ids_from_sequences.token_ids_1"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>token_ids_1</strong> (<code>List[int]</code>, <em>optional</em>) — Optional second list of IDs for sequence pairs.<!-- HTML_TAG_END --> </span></span> </li></ul> <div 
id="transformers.XGLMTokenizer.create_token_type_ids_from_sequences.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <!-- HTML_TAG_START --> <p><code>List[int]</code></p> <!-- HTML_TAG_END --> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"><!-- HTML_TAG_START --> </p><p>List of zeros.</p> <!-- HTML_TAG_END --><p></p> </div></div> <p data-svelte-h="svelte-bub0ru">Create a mask from the two sequences passed to be used in a sequence-pair classification task. XLM-RoBERTa does not make use of token type ids, therefore a list of zeros is returned.</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XGLMTokenizer.save_vocabulary"><!-- HTML_TAG_START --><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>save_vocabulary</span></h4><!-- HTML_TAG_END --> <a id="transformers.XGLMTokenizer.save_vocabulary" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XGLMTokenizer.save_vocabulary"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xglm/tokenization_xglm.py#L298" target="_blank"><span 
data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">save_directory<span class="opacity-60">: str</span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">filename_prefix<span class="opacity-60">: typing.Optional[str] = None</span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> </div></div></div></div> <h2 class="relative group"><a id="transformers.XGLMTokenizerFast" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMTokenizerFast"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-usjjg6">XGLMTokenizerFast</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XGLMTokenizerFast"><!-- HTML_TAG_START --><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">XGLMTokenizerFast</span></span></h3><!-- HTML_TAG_END --> <a id="transformers.XGLMTokenizerFast" class="header-link invisible with-hover:group-hover:visible pr-2" 
href="#transformers.XGLMTokenizerFast"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xglm/tokenization_xglm_fast.py#L49" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">vocab_file<span class="opacity-60"> = None</span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">tokenizer_file<span class="opacity-60"> = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">bos_token<span class="opacity-60"> = '&lt;s&gt;'</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">eos_token<span class="opacity-60"> = '&lt;/s&gt;'</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">sep_token<span class="opacity-60"> = '&lt;/s&gt;'</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">cls_token<span class="opacity-60"> = '&lt;s&gt;'</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">unk_token<span class="opacity-60"> = '&lt;unk&gt;'</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">pad_token<span class="opacity-60"> = '&lt;pad&gt;'</span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a 
id="transformers.XGLMTokenizerFast.vocab_file" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMTokenizerFast.vocab_file"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>vocab_file</strong> (<code>str</code>) — Path to the vocabulary file.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMTokenizerFast.bos_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMTokenizerFast.bos_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>bos_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;s&gt;"</code>) — The beginning of sequence token that was used during pretraining. Can be used a sequence classifier token.<p></p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"> <p>When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. 
The token used is the <code>cls_token</code>.</p> </div><!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMTokenizerFast.eos_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMTokenizerFast.eos_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>eos_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;/s&gt;"</code>) — The end of sequence token.<p></p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"> <p>When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the <code>sep_token</code>.</p> </div><!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMTokenizerFast.sep_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMTokenizerFast.sep_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>sep_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;/s&gt;"</code>) — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. 
It is also used as the last token of a sequence built with special tokens.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMTokenizerFast.cls_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMTokenizerFast.cls_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>cls_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;s&gt;"</code>) — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMTokenizerFast.unk_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMTokenizerFast.unk_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>unk_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;unk&gt;"</code>) — The unknown token. 
A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMTokenizerFast.pad_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMTokenizerFast.pad_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>pad_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;pad&gt;"</code>) — The token used for padding, for example when batching sequences of different lengths.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMTokenizerFast.additional_special_tokens" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMTokenizerFast.additional_special_tokens"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>additional_special_tokens</strong> (<code>List[str]</code>, <em>optional</em>, defaults to <code>["&lt;s&gt;NOTUSED", "&lt;/s&gt;NOTUSED"]</code>) — Additional special tokens used by the tokenizer.<!-- HTML_TAG_END --> </span></span> </li></ul> </div></div> <p data-svelte-h="svelte-18aqt3z">Construct a “fast” XGLM tokenizer (backed by HuggingFace’s <em>tokenizers</em> library). Adapted from <a href="/docs/transformers/v4.34.0/en/model_doc/roberta#transformers.RobertaTokenizer">RobertaTokenizer</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetTokenizer">XLNetTokenizer</a>. 
Based on <a href="https://huggingface.co/docs/tokenizers/python/latest/components.html?highlight=BPE#models" rel="nofollow">BPE</a>.</p> <p data-svelte-h="svelte-ttxvs6">This tokenizer inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast">PreTrainedTokenizerFast</a> which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XGLMTokenizerFast.build_inputs_with_special_tokens"><!-- HTML_TAG_START --><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>build_inputs_with_special_tokens</span></h4><!-- HTML_TAG_END --> <a id="transformers.XGLMTokenizerFast.build_inputs_with_special_tokens" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XGLMTokenizerFast.build_inputs_with_special_tokens"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xglm/tokenization_xglm_fast.py#L142" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> 
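The fast tokenizer is a drop-in replacement for the slow one, and `AutoTokenizer` typically returns it by default when the *tokenizers* library is installed. A minimal usage sketch, assuming the `facebook/xglm-564M` checkpoint:

```python
from transformers import XGLMTokenizerFast

tokenizer = XGLMTokenizerFast.from_pretrained("facebook/xglm-564M")

# Batch-encode two sentences with padding and return PyTorch tensors
batch = tokenizer(
    ["XGLM is a multilingual language model.", "Bonjour le monde !"],
    padding=True,
    return_tensors="pt",
)
print(batch["input_ids"].shape)  # (batch_size, max_sequence_length)
```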
<p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_0<span class="opacity-60">: typing.List[int]</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_1<span class="opacity-60">: typing.Optional[typing.List[int]] = None</span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><!-- HTML_TAG_START --><span><code>List[int]</code></span><!-- HTML_TAG_END --></span></p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMTokenizerFast.build_inputs_with_special_tokens.token_ids_0" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMTokenizerFast.build_inputs_with_special_tokens.token_ids_0"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>token_ids_0</strong> (<code>List[int]</code>) — List of IDs to which the special tokens will be added.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMTokenizerFast.build_inputs_with_special_tokens.token_ids_1" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMTokenizerFast.build_inputs_with_special_tokens.token_ids_1"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 
0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>token_ids_1</strong> (<code>List[int]</code>, <em>optional</em>) — Optional second list of IDs for sequence pairs.<!-- HTML_TAG_END --> </span></span> </li></ul> <div id="transformers.XGLMTokenizerFast.build_inputs_with_special_tokens.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <!-- HTML_TAG_START --> <p><code>List[int]</code></p> <!-- HTML_TAG_END --> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"><!-- HTML_TAG_START --> </p><p>List of <a href="../glossary#input-ids">input IDs</a> with the appropriate special tokens.</p> <!-- HTML_TAG_END --><p></p> </div></div> <p data-svelte-h="svelte-1ooxl9e">Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. An XLM-RoBERTa sequence has the following format:</p> <ul data-svelte-h="svelte-rq8uot"><li>single sequence: <code>&lt;s&gt; X &lt;/s&gt;</code></li> <li>pair of sequences: <code>&lt;s&gt; A &lt;/s&gt;&lt;/s&gt; B &lt;/s&gt;</code></li></ul></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XGLMTokenizerFast.create_token_type_ids_from_sequences"><!-- HTML_TAG_START --><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>create_token_type_ids_from_sequences</span></h4><!-- HTML_TAG_END --> <a id="transformers.XGLMTokenizerFast.create_token_type_ids_from_sequences" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XGLMTokenizerFast.create_token_type_ids_from_sequences"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 
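To see where the special tokens land in practice, the sketch below (assuming the `facebook/xglm-564M` checkpoint) builds inputs for a single sequence and for a pair, then decodes them back to text:

```python
from transformers import XGLMTokenizerFast

tokenizer = XGLMTokenizerFast.from_pretrained("facebook/xglm-564M")

ids_a = tokenizer.encode("Hello", add_special_tokens=False)
ids_b = tokenizer.encode("world", add_special_tokens=False)

single = tokenizer.build_inputs_with_special_tokens(ids_a)
pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)

# Decoding shows where the special tokens were inserted around each segment
print(tokenizer.decode(single))
print(tokenizer.decode(pair))
```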
#### create_token_type_ids_from_sequences

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xglm/tokenization_xglm_fast.py#L167)

( token_ids_0: typing.List[int], token_ids_1: typing.Optional[typing.List[int]] = None ) → `List[int]`

Parameters

- **token_ids_0** (`List[int]`) — List of IDs.
- **token_ids_1** (`List[int]`, *optional*) — Optional second list of IDs for sequence pairs.

Returns

`List[int]`

List of zeros.

Create a mask from the two sequences passed to be used in a sequence-pair classification task. XLM-RoBERTa does not make use of token type ids, therefore a list of zeros is returned.
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">XGLMModel</span></span></h3><!-- HTML_TAG_END --> <a id="transformers.XGLMModel" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XGLMModel"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xglm/modeling_xglm.py#L515" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60">: XGLMConfig</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">embed_tokens<span class="opacity-60">: typing.Optional[torch.nn.modules.sparse.Embedding] = None</span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMModel.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMModel.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" 
preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/xglm#transformers.XGLMConfig">XGLMConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights. config — XGLMConfig<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMModel.embed_tokens" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMModel.embed_tokens"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>embed_tokens</strong> (nn.Embedding) — output embedding<!-- HTML_TAG_END --> </span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1mu2ulj">The bare XGLM Model transformer outputting raw hidden-states without any specific head on top. This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel">PreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)</p> <p data-svelte-h="svelte-hswkmf">This model is also a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.</p> <p data-svelte-h="svelte-1tiprja">Transformer decoder consisting of <em>config.num_layers</em> layers. 
#### forward

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xglm/modeling_xglm.py#L576)

( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None encoder_hidden_states: typing.Optional[torch.Tensor] = None encoder_attention_mask: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None cross_attn_head_mask: typing.Optional[torch.Tensor] = None past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None inputs_embeds: typing.Optional[torch.Tensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → [transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions) or `tuple(torch.FloatTensor)`

Parameters
- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.

  Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.__call__()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details.

  [What are input IDs?](../glossary#input-ids)
- **attention_mask** (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:

  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.

  [What are attention masks?](../glossary#attention-mask)
- **position_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`.

  [What are position IDs?](../glossary#position-ids)
- **encoder_hidden_states** (`torch.FloatTensor` of shape `(batch_size, encoder_sequence_length, hidden_size)`, *optional*) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
- **encoder_attention_mask** (`torch.LongTensor` of shape `(batch_size, encoder_sequence_length)`, *optional*) — Mask to avoid performing cross-attention on padding token indices of encoder input_ids. Mask values selected in `[0, 1]`:

  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.

  [What are attention masks?](../glossary#attention-mask)
- **head_mask** (`torch.Tensor` of shape `(num_layers, attention_heads)`, *optional*) — Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`:

  - 1 indicates the head is **not masked**,
  - 0 indicates the head is **masked**.
- **cross_attn_head_mask** (`torch.Tensor` of shape `(num_layers, attention_heads)`, *optional*) — Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`:

  - 1 indicates the head is **not masked**,
  - 0 indicates the head is **masked**.
- **past_key_values** (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`) — Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)` and 2 additional tensors of shape `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`.

  Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.

  If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)`. A minimal caching sketch is shown after the usage example below.
- **inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.

Returns

[transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions) or `tuple(torch.FloatTensor)`
dark:border-gray-700"></span></div> <p class="text-base"><!-- HTML_TAG_START --> </p><p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions">transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/xglm#transformers.XGLMConfig">XGLMConfig</a>) and inputs.</p> <ul> <li> <p><strong>last_hidden_state</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>) — Sequence of hidden-states at the output of the last layer of the model.</p> <p>If <code>past_key_values</code> is used only the last hidden-state of the sequences of shape <code>(batch_size, 1, hidden_size)</code> is output.</p> </li> <li> <p><strong>past_key_values</strong> (<code>tuple(tuple(torch.FloatTensor))</code>, <em>optional</em>, returned when <code>use_cache=True</code> is passed or when <code>config.use_cache=True</code>) — Tuple of <code>tuple(torch.FloatTensor)</code> of length <code>config.n_layers</code>, with each tuple having 2 tensors of shape <code>(batch_size, num_heads, sequence_length, embed_size_per_head)</code>) and optionally if <code>config.is_encoder_decoder=True</code> 2 additional tensors of shape <code>(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)</code>.</p> <p>Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if <code>config.is_encoder_decoder=True</code> in the cross-attention blocks) that can be used (see <code>past_key_values</code> input) to speed up sequential decoding.</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> <li> <p><strong>cross_attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> and <code>config.add_cross_attention=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.</p> </li> </ul> <!-- HTML_TAG_END --><p></p> </div></div> <p 
data-svelte-h="svelte-1mps0kn">The <a href="/docs/transformers/v4.34.0/en/model_doc/xglm#transformers.XGLMModel">XGLMModel</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.XGLMModel.forward.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMModel.forward.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, XGLMModel <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"facebook/xglm-564M"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = XGLMModel.from_pretrained(<span 
class="hljs-string">"facebook/xglm-564M"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(<span class="hljs-string">"Hello, my dog is cute"</span>, return_tensors=<span class="hljs-string">"pt"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model(**inputs) <span class="hljs-meta">&gt;&gt;&gt; </span>last_hidden_states = outputs.last_hidden_state<!-- HTML_TAG_END --></pre></div></div></div></div> <h2 class="relative group"><a id="transformers.XGLMForCausalLM" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMForCausalLM"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-26ym4">XGLMForCausalLM</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XGLMForCausalLM"><!-- HTML_TAG_START --><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">XGLMForCausalLM</span></span></h3><!-- HTML_TAG_END --> <a id="transformers.XGLMForCausalLM" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XGLMForCausalLM"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 
## XGLMForCausalLM

### class transformers.XGLMForCausalLM

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xglm/modeling_xglm.py#L751)

( config )

Parameters

- **config** ([XGLMConfig](/docs/transformers/v4.34.0/en/model_doc/xglm#transformers.XGLMConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

The XGLM Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings).

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
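Because the language modeling head makes XGLMForCausalLM suitable for text generation, it can be driven through the standard `generate()` API. The snippet below is an illustrative sketch rather than part of the original documentation; the prompt and the `max_new_tokens` value are arbitrary choices.

```python
>>> from transformers import AutoTokenizer, XGLMForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-564M")
>>> model = XGLMForCausalLM.from_pretrained("facebook/xglm-564M")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

>>> # Greedy continuation of the prompt; generate() handles past_key_values caching internally
>>> generated_ids = model.generate(**inputs, max_new_tokens=20)
>>> print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0])
```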
data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">position_ids<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_hidden_states<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_attention_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">cross_attn_head_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">past_key_values<span class="opacity-60">: typing.Optional[typing.List[torch.FloatTensor]] = None</span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">inputs_embeds<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">labels<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">use_cache<span class="opacity-60">: typing.Optional[bool] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><!-- HTML_TAG_START --><span><a 
href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.CausalLMOutputWithCrossAttentions">transformers.modeling_outputs.CausalLMOutputWithCrossAttentions</a> or <code>tuple(torch.FloatTensor)</code></span><!-- HTML_TAG_END --></span></p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMForCausalLM.forward.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMForCausalLM.forward.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>input_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.<p></p> <p>Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. 
See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p> <p><a href="../glossary#input-ids">What are input IDs?</a><!-- HTML_TAG_END --> </p></span></span></li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMForCausalLM.forward.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMForCausalLM.forward.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>attention_mask</strong> (<code>torch.Tensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a><!-- HTML_TAG_END --> </p></span></span></li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMForCausalLM.forward.position_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMForCausalLM.forward.position_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>position_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Indices of positions of each input sequence tokens in the position embeddings. 
Selected in the range <code>[0, config.max_position_embeddings - 1]</code>.<p></p> <p><a href="../glossary#position-ids">What are position IDs?</a><!-- HTML_TAG_END --> </p></span></span></li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMForCausalLM.forward.encoder_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMForCausalLM.forward.encoder_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>encoder_hidden_states</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, encoder_sequence_length, hidden_size)</code>, <em>optional</em>) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMForCausalLM.forward.encoder_attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMForCausalLM.forward.encoder_attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>encoder_attention_mask</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, encoder_sequence_length)</code>, <em>optional</em>) — Mask to avoid performing cross-attention on padding tokens indices of encoder input_ids. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a><!-- HTML_TAG_END --> </p></span></span></li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMForCausalLM.forward.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMForCausalLM.forward.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>head_mask</strong> (<code>torch.Tensor</code> of shape <code>(num_layers, attention_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the attention modules. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul><!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMForCausalLM.forward.cross_attn_head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMForCausalLM.forward.cross_attn_head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>cross_attn_head_mask</strong> (<code>torch.Tensor</code> of shape <code>(num_layers, attention_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the cross-attention modules. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul><!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMForCausalLM.forward.past_key_values" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMForCausalLM.forward.past_key_values"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>past_key_values</strong> (<code>tuple(tuple(torch.FloatTensor))</code>, <em>optional</em>, returned when <code>use_cache=True</code> is passed or when <code>config.use_cache=True</code>) — Tuple of <code>tuple(torch.FloatTensor)</code> of length <code>config.n_layers</code>, with each tuple having 2 tensors of shape <code>(batch_size, num_heads, sequence_length, embed_size_per_head)</code>) and 2 additional tensors of shape <code>(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)</code>.<p></p> <p>Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see <code>past_key_values</code> input) to speed up sequential decoding.</p> <p>If <code>past_key_values</code> are used, the user can optionally input only the last <code>decoder_input_ids</code> (those that don’t have their past key value states given to this model) of shape <code>(batch_size, 1)</code> instead of all <code>decoder_input_ids</code> of shape <code>(batch_size, sequence_length)</code>. inputs_embeds (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>, <em>optional</em>): Optionally, instead of passing <code>input_ids</code> you can choose to directly pass an embedded representation. 
This is useful if you want more control over how to convert <code>input_ids</code> indices into associated vectors than the model’s internal embedding lookup matrix.<!-- HTML_TAG_END --> </p></span></span></li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMForCausalLM.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMForCausalLM.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. See <code>attentions</code> under returned tensors for more detail.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMForCausalLM.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMForCausalLM.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. 
See <code>hidden_states</code> under returned tensors for more detail.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMForCausalLM.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMForCausalLM.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XGLMForCausalLM.forward.labels" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMForCausalLM.forward.labels"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>labels</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Labels for computing the masked language modeling loss. Indices should either be in <code>[0, ..., config.vocab_size]</code> or -100 (see <code>input_ids</code> docstring). 
Tokens with indices set to <code>-100</code> are ignored (masked), the loss is only computed for the tokens with labels in <code>[0, ..., config.vocab_size]</code>.<!-- HTML_TAG_END --> </span></span> </li></ul> <div id="transformers.XGLMForCausalLM.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <!-- HTML_TAG_START --> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.CausalLMOutputWithCrossAttentions">transformers.modeling_outputs.CausalLMOutputWithCrossAttentions</a> or <code>tuple(torch.FloatTensor)</code></p> <!-- HTML_TAG_END --> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"><!-- HTML_TAG_START --> </p><p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.CausalLMOutputWithCrossAttentions">transformers.modeling_outputs.CausalLMOutputWithCrossAttentions</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/xglm#transformers.XGLMConfig">XGLMConfig</a>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<code>torch.FloatTensor</code> of shape <code>(1,)</code>, <em>optional</em>, returned when <code>labels</code> is provided) — Language modeling loss (for next-token prediction).</p> </li> <li> <p><strong>logits</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, config.vocab_size)</code>) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> <li> <p><strong>cross_attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Cross attentions weights after the attention softmax, used to compute the weighted average in the cross-attention heads.</p> </li> <li> <p><strong>past_key_values</strong> (<code>tuple(tuple(torch.FloatTensor))</code>, <em>optional</em>, returned when <code>use_cache=True</code> is passed or when <code>config.use_cache=True</code>) — Tuple of 
<code>torch.FloatTensor</code> tuples of length <code>config.n_layers</code>, with each tuple containing the cached key, value states of the self-attention and the cross-attention layers if model is used in encoder-decoder setting. Only relevant if <code>config.is_decoder = True</code>.</p> <p>Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see <code>past_key_values</code> input) to speed up sequential decoding.</p> </li> </ul> <!-- HTML_TAG_END --><p></p> </div></div> <p data-svelte-h="svelte-gpzp2r">The <a href="/docs/transformers/v4.34.0/en/model_doc/xglm#transformers.XGLMForCausalLM">XGLMForCausalLM</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.XGLMForCausalLM.forward.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XGLMForCausalLM.forward.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; 
"></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, XGLMForCausalLM <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"facebook/xglm-564M"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = XGLMForCausalLM.from_pretrained(<span class="hljs-string">"facebook/xglm-564M"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(<span class="hljs-string">"Hello, my dog is cute"</span>, return_tensors=<span class="hljs-string">"pt"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model(**inputs, labels=inputs[<span class="hljs-string">"input_ids"</span>]) <span class="hljs-meta">&gt;&gt;&gt; </span>loss = outputs.loss <span class="hljs-meta">&gt;&gt;&gt; </span>logits = outputs.logits<!-- HTML_TAG_END --></pre></div></div></div></div> <h2 class="relative group"><a id="transformers.TFXGLMModel" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXGLMModel"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1nd2ljm">TFXGLMModel</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.TFXGLMModel"><!-- HTML_TAG_START --><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span 
class="font-medium">transformers.</span><span class="font-semibold">TFXGLMModel</span></span></h3><!-- HTML_TAG_END --> <a id="transformers.TFXGLMModel" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.TFXGLMModel"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xglm/modeling_tf_xglm.py#L736" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">*args<span class="opacity-60"></span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXGLMModel.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXGLMModel.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/xglm#transformers.XGLMConfig">XGLMConfig</a>) — Model configuration class with all the parameters of the model. 
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights. config — XGLMConfig embed_tokens — [TFSharedEmbeddings]: output embedding<!-- HTML_TAG_END --> </span></span> </li></ul> </div></div> <p data-svelte-h="svelte-czrhaf">The bare XGLM Model transformer outputting raw hidden-states without any specific head on top. This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel">TFPreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)</p> <p data-svelte-h="svelte-1ivrf8m">This model is also a <a href="https://www.tensorflow.org/api_docs/python/tf/keras/Model" rel="nofollow">tf.keras.Model</a> subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-1ajbfxg">TensorFlow models and layers in <code>transformers</code> accept two formats as input:</p> <ul data-svelte-h="svelte-qm1t26"><li>having all inputs as keyword arguments (like PyTorch models), or</li> <li>having all inputs as a list, tuple or dict in the first positional argument.</li></ul> <p data-svelte-h="svelte-1v9qsc5">The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like <code>model.fit()</code> things should “just work” for you - just pass your inputs and labels in any format that <code>model.fit()</code> supports! If, however, you want to use the second format outside of Keras methods like <code>fit()</code> and <code>predict()</code>, such as when creating your own layers or models with the Keras <code>Functional</code> API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:</p> <ul data-svelte-h="svelte-15scerc"><li>a single Tensor with <code>input_ids</code> only and nothing else: <code>model(input_ids)</code></li> <li>a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: <code>model([input_ids, attention_mask])</code> or <code>model([input_ids, attention_mask, token_type_ids])</code></li> <li>a dictionary with one or several input Tensors associated to the input names given in the docstring: <code>model({"input_ids": input_ids, "token_type_ids": token_type_ids})</code></li></ul> <p data-svelte-h="svelte-1an3odd">Note that when creating models and layers with <a href="https://keras.io/guides/making_new_layers_and_models_via_subclassing/" rel="nofollow">subclassing</a> then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function!</p></div> <p data-svelte-h="svelte-f8ft2s">Transformer decoder consisting of <em>config.num_layers</em> layers. 
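As a concrete illustration of the call formats listed above, here is a minimal sketch using `TFXGLMModel` (XGLM takes no `token_type_ids`, so only `input_ids` and `attention_mask` are shown; this sketch is not part of the original reference):

```python
>>> from transformers import AutoTokenizer, TFXGLMModel

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-564M")
>>> model = TFXGLMModel.from_pretrained("facebook/xglm-564M")
>>> encoded = tokenizer("Hello, my dog is cute", return_tensors="tf")

>>> # 1. keyword arguments (PyTorch-style)
>>> out_kwargs = model(input_ids=encoded["input_ids"], attention_mask=encoded["attention_mask"])

>>> # 2. a list of tensors, in the order given in the docstring
>>> out_list = model([encoded["input_ids"], encoded["attention_mask"]])

>>> # 3. a dictionary mapping input names to tensors
>>> out_dict = model({"input_ids": encoded["input_ids"], "attention_mask": encoded["attention_mask"]})
```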
Transformer decoder consisting of *config.num_layers* layers. Each layer is a `TFXGLMDecoderLayer`.

#### call

[\< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xglm/modeling_tf_xglm.py#L752)

`( input_ids: TFModelInputType | None = None, attention_mask: np.ndarray | tf.Tensor | None = None, position_ids: np.ndarray | tf.Tensor | None = None, encoder_hidden_states: np.ndarray | tf.Tensor | None = None, encoder_attention_mask: np.ndarray | tf.Tensor | None = None, head_mask: np.ndarray | tf.Tensor | None = None, cross_attn_head_mask: np.ndarray | tf.Tensor | None = None, past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None, inputs_embeds: np.ndarray | tf.Tensor | None = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, training: Optional[bool] = False, **kwargs: Any )` → [transformers.modeling_tf_outputs.TFBaseModelOutputWithPastAndCrossAttentions](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFBaseModelOutputWithPastAndCrossAttentions) or `tuple(tf.Tensor)`
data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXGLMModel.call.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXGLMModel.call.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>input_ids</strong> (<code>tf.Tensor</code> of shape <code>({0})</code>) — Indices of input sequence tokens in the vocabulary.<p></p> <p>Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p> <p><a href="../glossary#input-ids">What are input IDs?</a><!-- HTML_TAG_END --> </p></span></span></li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXGLMModel.call.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXGLMModel.call.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>attention_mask</strong> (<code>tf.Tensor</code> of shape <code>({0})</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a><!-- HTML_TAG_END --> </p></span></span></li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXGLMModel.call.position_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXGLMModel.call.position_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>position_ids</strong> (<code>tf.Tensor</code> or <code>Numpy array</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range <code>[0, config.max_position_embeddings - 1]</code>.<p></p> <p><a href="../glossary#position-ids">What are position IDs?</a><!-- HTML_TAG_END --> </p></span></span></li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXGLMModel.call.encoder_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXGLMModel.call.encoder_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>encoder_hidden_states</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, encoder_sequence_length, hidden_size)</code>, <em>optional</em>) — Sequence of hidden-states at the output of the last layer of the encoder. 
Used in the cross-attention of the decoder.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXGLMModel.call.encoder_attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXGLMModel.call.encoder_attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>encoder_attention_mask</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, encoder_sequence_length)</code>, <em>optional</em>) — Mask to avoid performing cross-attention on padding tokens indices of encoder input_ids. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a><!-- HTML_TAG_END --> </p></span></span></li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXGLMModel.call.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXGLMModel.call.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>head_mask</strong> (<code>tf.Tensor</code> of shape <code>(num_layers, attention_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the attention modules in the encoder. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul><!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXGLMModel.call.cross_attn_head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXGLMModel.call.cross_attn_head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>cross_attn_head_mask</strong> (<code>tf.Tensor</code> of shape <code>(num_layers, attention_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the cross-attention modules. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul><!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXGLMModel.call.past_key_values" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXGLMModel.call.past_key_values"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>past_key_values</strong> (<code>Tuple[Tuple[tf.Tensor]]</code> of length <code>config.num_layers</code>) — contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. 
If <code>past_key_values</code> are used, the user can optionally input only the last <code>decoder_input_ids</code> (those that don’t have their past key value states given to this model) of shape <code>(batch_size, 1)</code> instead of all <code>decoder_input_ids</code> of shape <code>(batch_size, sequence_length)</code>.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXGLMModel.call.inputs_embeds" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXGLMModel.call.inputs_embeds"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>inputs_embeds</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>, <em>optional</em>) — Optionally, instead of passing <code>input_ids</code> you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert <code>input_ids</code> indices into associated vectors than the model’s internal embedding lookup matrix.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXGLMModel.call.use_cache" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXGLMModel.call.use_cache"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>use_cache</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — If set to <code>True</code>, <code>past_key_values</code> key value states are returned and can be used to speed up decoding (see <code>past_key_values</code>). 
Set to <code>False</code> during training, <code>True</code> during generation<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXGLMModel.call.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXGLMModel.call.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. See <code>attentions</code> under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXGLMModel.call.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXGLMModel.call.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. See <code>hidden_states</code> under returned tensors for more detail. 
This argument can be used only in eager mode, in graph mode the value in the config will be used instead.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXGLMModel.call.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXGLMModel.call.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXGLMModel.call.training" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXGLMModel.call.training"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>training</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).<!-- HTML_TAG_END --> </span></span> </li></ul> <div id="transformers.TFXGLMModel.call.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <!-- HTML_TAG_START --> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFBaseModelOutputWithPastAndCrossAttentions">transformers.modeling_tf_outputs.TFBaseModelOutputWithPastAndCrossAttentions</a> or 
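The caching behaviour described for `use_cache` and `past_key_values` can be exercised directly; a minimal sketch under the assumption that the follow-up token is chosen arbitrarily for illustration (this example is not part of the original reference):

```python
>>> import tensorflow as tf
>>> from transformers import AutoTokenizer, TFXGLMModel

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-564M")
>>> model = TFXGLMModel.from_pretrained("facebook/xglm-564M")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")

>>> # First pass over the full prompt; request the key/value cache
>>> outputs = model(**inputs, use_cache=True)
>>> past = outputs.past_key_values

>>> # Second pass: feed only the newest token together with the cache.
>>> # The attention mask must cover the cached tokens plus the new one.
>>> next_token = tf.constant([[tokenizer.eos_token_id]])  # arbitrary, purely illustrative
>>> full_mask = tf.concat(
...     [inputs["attention_mask"], tf.ones((1, 1), dtype=inputs["attention_mask"].dtype)], axis=-1
... )
>>> outputs = model(
...     input_ids=next_token, attention_mask=full_mask, past_key_values=past, use_cache=True
... )
>>> # Only the hidden state of the newly fed token is returned: (batch_size, 1, hidden_size)
>>> new_state = outputs.last_hidden_state
```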
Returns

[transformers.modeling_tf_outputs.TFBaseModelOutputWithPastAndCrossAttentions](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFBaseModelOutputWithPastAndCrossAttentions) or `tuple(tf.Tensor)`

A [transformers.modeling_tf_outputs.TFBaseModelOutputWithPastAndCrossAttentions](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFBaseModelOutputWithPastAndCrossAttentions) or a tuple of `tf.Tensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([XGLMConfig](/docs/transformers/v4.34.0/en/model_doc/xglm#transformers.XGLMConfig)) and inputs.

- **last_hidden_state** (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`) — Sequence of hidden-states at the output of the last layer of the model.

  If `past_key_values` is used, only the last hidden-state of the sequences of shape `(batch_size, 1, hidden_size)` is output.
- **past_key_values** (`List[tf.Tensor]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`) — List of `tf.Tensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size, num_heads, sequence_length, embed_size_per_head)`.

  Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
- **hidden_states** (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.

  Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- **attentions** (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`.

  Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
- **cross_attentions** (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`.

  Attentions weights of the decoder's cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.

The [TFXGLMModel](/docs/transformers/v4.34.0/en/model_doc/xglm#transformers.TFXGLMModel) forward method overrides the `__call__` special method.
Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example:

```python
>>> from transformers import AutoTokenizer, TFXGLMModel
>>> import tensorflow as tf

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-564M")
>>> model = TFXGLMModel.from_pretrained("facebook/xglm-564M")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> outputs = model(inputs)

>>> last_hidden_states = outputs.last_hidden_state
```
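Building on the example above, the following is a minimal, illustrative sketch (not part of the original reference) of the optional outputs and the `past_key_values` cache described in the return documentation. It reuses the same `facebook/xglm-564M` checkpoint; the choice of the extra token id fed back with the cache is arbitrary and only serves to show the incremental-decoding pattern.

```python
>>> import tensorflow as tf
>>> from transformers import AutoTokenizer, TFXGLMModel

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-564M")
>>> model = TFXGLMModel.from_pretrained("facebook/xglm-564M")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")

>>> # Request the optional outputs documented above (eager mode only)
>>> # and keep the key/value cache around.
>>> outputs = model(
...     **inputs, output_hidden_states=True, output_attentions=True, use_cache=True
... )
>>> num_hidden_states = len(outputs.hidden_states)  # embeddings + one entry per layer
>>> attn_shape = outputs.attentions[0].shape        # (batch_size, num_heads, seq_len, seq_len)

>>> # Feed the cache back for one step of incremental decoding: only the new
>>> # token id needs to be passed together with `past_key_values`.
>>> next_token = tf.constant([[tokenizer.eos_token_id]])  # arbitrary token id, for illustration
>>> step = model(input_ids=next_token, past_key_values=outputs.past_key_values)
>>> step_shape = step.last_hidden_state.shape       # (batch_size, 1, hidden_size)
```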
## TFXGLMForCausalLM

### class transformers.TFXGLMForCausalLM
fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xglm/modeling_tf_xglm.py#L803" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">*args<span class="opacity-60"></span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXGLMForCausalLM.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXGLMForCausalLM.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/xglm#transformers.XGLMConfig">XGLMConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.<!-- HTML_TAG_END --> </span></span> </li></ul> </div></div> <p data-svelte-h="svelte-11br4fd">The XGLM Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings).</p> <p data-svelte-h="svelte-1i0vt4o">This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel">TFPreTrainedModel</a>. 
Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

TensorFlow models and layers in `transformers` accept two formats as input:

- having all inputs as keyword arguments (like PyTorch models), or
- having all inputs as a list, tuple or dict in the first positional argument.

The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first positional argument (a short sketch follows below):

- a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
- a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
- a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`

Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry about any of this, as you can just pass inputs like you would to any other Python function!
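As a purely illustrative sketch (not part of the original reference), the input formats listed above could be exercised with TFXGLMForCausalLM as follows; the checkpoint name is reused from the examples in this section, and XGLM takes `attention_mask` rather than `token_type_ids` as its second input.

```python
>>> from transformers import AutoTokenizer, TFXGLMForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-564M")
>>> model = TFXGLMForCausalLM.from_pretrained("facebook/xglm-564M")
>>> encoded = tokenizer("Hello, my dog is cute", return_tensors="tf")

>>> # 1. All inputs as keyword arguments (PyTorch-style):
>>> out_kwargs = model(input_ids=encoded["input_ids"], attention_mask=encoded["attention_mask"])

>>> # 2. All inputs as a list in the first positional argument, in the order given in the docstring:
>>> out_list = model([encoded["input_ids"], encoded["attention_mask"]])

>>> # 3. All inputs as a dict in the first positional argument, keyed by input name:
>>> out_dict = model({"input_ids": encoded["input_ids"], "attention_mask": encoded["attention_mask"]})
```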
fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>call</span></h4><!-- HTML_TAG_END --> <a id="transformers.TFXGLMForCausalLM.call" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.TFXGLMForCausalLM.call"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xglm/modeling_tf_xglm.py#L853" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60">: TFModelInputType | None = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">position_ids<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_hidden_states<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_attention_mask<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_mask<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span> </span><span class="comma 
cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">cross_attn_head_mask<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">past_key_values<span class="opacity-60">: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">inputs_embeds<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">labels<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">use_cache<span class="opacity-60">: Optional[bool] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: Optional[bool] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: Optional[bool] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: Optional[bool] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">training<span class="opacity-60">: Optional[bool] = False</span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60">: Any</span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><!-- HTML_TAG_START --><span><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions">transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions</a> or <code>tuple(tf.Tensor)</code></span><!-- HTML_TAG_END --></span></p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXGLMForCausalLM.call.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXGLMForCausalLM.call.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 
- **input_ids** (`tf.Tensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and `PreTrainedTokenizer.__call__()` for details. [What are input IDs?](../glossary#input-ids)
- **attention_mask** (`tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.
  [What are attention masks?](../glossary#attention-mask)
- **position_ids** (`tf.Tensor` or `np.ndarray` of shape `(batch_size, sequence_length)`, *optional*) — Indices of the position of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids)
- **encoder_hidden_states** (`tf.Tensor` of shape `(batch_size, encoder_sequence_length, hidden_size)`, *optional*) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
- **encoder_attention_mask** (`tf.Tensor` of shape `(batch_size, encoder_sequence_length)`, *optional*) — Mask to avoid performing cross-attention on padding token indices of the encoder `input_ids`. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.
  [What are attention masks?](../glossary#attention-mask)
- **head_mask** (`tf.Tensor` of shape `(num_layers, attention_heads)`, *optional*) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in `[0, 1]`:
  - 1 indicates the head is **not masked**,
  - 0 indicates the head is **masked**.
- **cross_attn_head_mask** (`tf.Tensor` of shape `(num_layers, attention_heads)`, *optional*) — Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`:
  - 1 indicates the head is **not masked**,
  - 0 indicates the head is **masked**.
- **past_key_values** (`Tuple[Tuple[tf.Tensor]]` of length `config.num_layers`) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)`.
- **inputs_embeds** (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **use_cache** (`bool`, *optional*, defaults to `True`) — If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`). Set to `False` during training, `True` during generation.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attention tensors of all attention layers. See `attentions` under returned tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the config will be used instead.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the config will be used instead.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple. This argument can be used in eager mode; in graph mode the value will always be set to `True`.
- **training** (`bool`, *optional*, defaults to `False`) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).
xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>labels</strong> (<code>np.ndarray</code> or <code>tf.Tensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Labels for language modeling. Note that the labels <strong>are shifted</strong> inside the model, i.e. you can set <code>labels = input_ids</code> Indices are selected in <code>[-100, 0, ..., config.vocab_size]</code> All labels set to <code>-100</code> are ignored (masked), the loss is only computed for labels in <code>[0, ..., config.vocab_size]</code><!-- HTML_TAG_END --> </span></span> </li></ul> <div id="transformers.TFXGLMForCausalLM.call.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <!-- HTML_TAG_START --> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions">transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions</a> or <code>tuple(tf.Tensor)</code></p> <!-- HTML_TAG_END --> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"><!-- HTML_TAG_START --> </p><p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions">transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions</a> or a tuple of <code>tf.Tensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/xglm#transformers.XGLMConfig">XGLMConfig</a>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<code>tf.Tensor</code> of shape <code>(n,)</code>, <em>optional</em>, where n is the number of non-masked labels, returned when <code>labels</code> is provided) — Language modeling loss (for next-token prediction).</p> </li> <li> <p><strong>logits</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, sequence_length, config.vocab_size)</code>) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>tf.Tensor</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when 
<code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>tf.Tensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> <li> <p><strong>cross_attentions</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>tf.Tensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.</p> </li> <li> <p><strong>past_key_values</strong> (<code>List[tf.Tensor]</code>, <em>optional</em>, returned when <code>use_cache=True</code> is passed or when <code>config.use_cache=True</code>) — List of <code>tf.Tensor</code> of length <code>config.n_layers</code>, with each tensor of shape <code>(2, batch_size, num_heads, sequence_length, embed_size_per_head)</code>).</p> <p>Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see <code>past_key_values</code> input) to speed up sequential decoding.</p> </li> </ul> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions">transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions</a> or <code>tuple(tf.Tensor)</code>: A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions">transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions</a> or a tuple of <code>tf.Tensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/xglm#transformers.XGLMConfig">XGLMConfig</a>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<code>tf.Tensor</code> of shape <code>(n,)</code>, <em>optional</em>, where n is the number of non-masked labels, returned when <code>labels</code> is provided) — Language modeling loss (for next-token prediction).</p> </li> <li> <p><strong>logits</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, sequence_length, config.vocab_size)</code>) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>tf.Tensor</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>tf.Tensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention 
softmax, used to compute the weighted average in the self-attention heads.</p> </li> <li> <p><strong>cross_attentions</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>tf.Tensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.</p> </li> <li> <p><strong>past_key_values</strong> (<code>List[tf.Tensor]</code>, <em>optional</em>, returned when <code>use_cache=True</code> is passed or when <code>config.use_cache=True</code>) — List of <code>tf.Tensor</code> of length <code>config.n_layers</code>, with each tensor of shape <code>(2, batch_size, num_heads, sequence_length, embed_size_per_head)</code>).</p> <p>Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see <code>past_key_values</code> input) to speed up sequential decoding.</p> </li> </ul> <!-- HTML_TAG_END --><p></p> </div></div> <p data-svelte-h="svelte-m1c2nj">The <a href="/docs/transformers/v4.34.0/en/model_doc/xglm#transformers.TFXGLMForCausalLM">TFXGLMForCausalLM</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.TFXGLMForCausalLM.call.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXGLMForCausalLM.call.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid 
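To complement the example above, the following is an illustrative sketch (not part of the original reference) of the `labels` argument and of greedy generation with the same checkpoint; the prompt text and the generation length are arbitrary choices.

```python
>>> import tensorflow as tf
>>> from transformers import AutoTokenizer, TFXGLMForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-564M")
>>> model = TFXGLMForCausalLM.from_pretrained("facebook/xglm-564M")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")

>>> # Language-modeling loss: labels are shifted inside the model, so the
>>> # input ids can be reused directly as labels.
>>> outputs = model(input_ids=inputs["input_ids"], labels=inputs["input_ids"])
>>> loss = outputs.loss  # shape (n,): one value per non-masked label, as documented above

>>> # Greedy generation from the same prompt.
>>> generated = model.generate(**inputs, max_new_tokens=20, do_sample=False)
>>> text = tokenizer.batch_decode(generated, skip_special_tokens=True)
```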
meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, TFXGLMForCausalLM <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> tensorflow <span class="hljs-keyword">as</span> tf <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"facebook/xglm-564M"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = TFXGLMForCausalLM.from_pretrained(<span class="hljs-string">"facebook/xglm-564M"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(<span class="hljs-string">"Hello, my dog is cute"</span>, return_tensors=<span class="hljs-string">"tf"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model(inputs) <span class="hljs-meta">&gt;&gt;&gt; </span>logits = outputs.logits<!-- HTML_TAG_END --></pre></div></div></div></div> <h2 class="relative group"><a id="transformers.FlaxXGLMModel" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXGLMModel"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-f9tztd">FlaxXGLMModel</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.FlaxXGLMModel"><!-- HTML_TAG_START --><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path 
class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">FlaxXGLMModel</span></span></h3><!-- HTML_TAG_END --> <a id="transformers.FlaxXGLMModel" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.FlaxXGLMModel"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xglm/modeling_flax_xglm.py#L689" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60">: XGLMConfig</span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_shape<span class="opacity-60">: typing.Tuple[int] = (1, 1)</span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">seed<span class="opacity-60">: int = 0</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">dtype<span class="opacity-60">: dtype = &lt;class 'jax.numpy.float32'&gt;</span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">_do_init<span class="opacity-60">: bool = True</span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span 
class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXGLMModel.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXGLMModel.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/xglm#transformers.XGLMConfig">XGLMConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXGLMModel.dtype" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXGLMModel.dtype"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>dtype</strong> (<code>jax.numpy.dtype</code>, <em>optional</em>, defaults to <code>jax.numpy.float32</code>) — The data type of the computation. Can be one of <code>jax.numpy.float32</code>, <code>jax.numpy.float16</code> (on GPUs) and <code>jax.numpy.bfloat16</code> (on TPUs).<p></p> <p>This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. 
The bare XGLM Model transformer outputting raw hidden-states without any specific head on top. This model inherits from [FlaxPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.).

This model is also a Flax Linen [flax.nn.Module](https://flax.readthedocs.io/en/latest/_autosummary/flax.nn.module.html) subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.

Finally, this model supports inherent JAX features such as:

- [Just-In-Time (JIT) compilation](https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit)
- [Automatic Differentiation](https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation)
- [Vectorization](https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap)
- [Parallelization](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap)
#### `__call__`

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xglm/modeling_flax_xglm.py#L611)

`( input_ids: Array, attention_mask: typing.Optional[jax.Array] = None, position_ids: typing.Optional[jax.Array] = None, encoder_hidden_states: typing.Optional[jax.Array] = None, encoder_attention_mask: typing.Optional[jax.Array] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None, train: bool = False, params: dict = None, past_key_values: dict = None, dropout_rng: PRNGKey = None )` → [transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions) or `tuple(torch.FloatTensor)`

Parameters
See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p> <p><a href="../glossary#input-ids">What are input IDs?</a><!-- HTML_TAG_END --> </p></span></span></li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXGLMModel.__call__.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXGLMModel.__call__.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>attention_mask</strong> (<code>jnp.ndarray</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a><!-- HTML_TAG_END --> </p></span></span></li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXGLMModel.__call__.position_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXGLMModel.__call__.position_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>position_ids</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Indices of positions of each input sequence tokens in the position embeddings. 
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.

Returns

[transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions) or `tuple(torch.FloatTensor)`

A [transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([XGLMConfig](/docs/transformers/v4.34.0/en/model_doc/xglm#transformers.XGLMConfig)) and inputs.

- **last_hidden_state** (`jnp.ndarray` of shape `(batch_size, sequence_length, hidden_size)`) — Sequence of hidden-states at the output of the last layer of the model. If `past_key_values` is used, only the last hidden-state of the sequences of shape `(batch_size, 1, hidden_size)` is output.
- **past_key_values** (`tuple(tuple(jnp.ndarray))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`) — Tuple of `tuple(jnp.ndarray)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)` and, optionally if `config.is_encoder_decoder=True`, 2 additional tensors of shape `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`. Contains pre-computed hidden-states (key and values in the self-attention blocks and, optionally if `config.is_encoder_decoder=True`, in the cross-attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
- **hidden_states** (`tuple(jnp.ndarray)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `jnp.ndarray` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- **attentions** (`tuple(jnp.ndarray)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `jnp.ndarray` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
- **cross_attentions** (`tuple(jnp.ndarray)`, *optional*, returned when `output_attentions=True` and `config.add_cross_attention=True` is passed or when `config.output_attentions=True`) — Tuple of `jnp.ndarray` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights of the decoder's cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.

The `FlaxXGLMPreTrainedModel` forward method, overrides the `__call__` special method.

Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example:

```python
>>> from transformers import AutoTokenizer, FlaxXGLMModel

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-564M")
>>> model = FlaxXGLMModel.from_pretrained("facebook/xglm-564M")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
>>> outputs = model(**inputs)

>>> last_hidden_states = outputs.last_hidden_state
```
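Building on that example, the `output_hidden_states` and `output_attentions` flags described above populate the corresponding output fields (an illustrative sketch, not part of the original docstring):

```python
>>> outputs = model(**inputs, output_hidden_states=True, output_attentions=True)
>>> len(outputs.hidden_states)  # embedding output + one entry per layer
>>> outputs.attentions[0].shape  # (batch_size, num_heads, sequence_length, sequence_length)
```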
rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.FlaxXGLMForCausalLM"><!-- HTML_TAG_START --><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">FlaxXGLMForCausalLM</span></span></h3><!-- HTML_TAG_END --> <a id="transformers.FlaxXGLMForCausalLM" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.FlaxXGLMForCausalLM"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xglm/modeling_flax_xglm.py#L766" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60">: XGLMConfig</span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_shape<span class="opacity-60">: typing.Tuple[int] = (1, 1)</span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">seed<span class="opacity-60">: int = 0</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white 
dark:hover:text-black">dtype<span class="opacity-60">: dtype = &lt;class 'jax.numpy.float32'&gt;</span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">_do_init<span class="opacity-60">: bool = True</span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXGLMForCausalLM.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXGLMForCausalLM.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/xglm#transformers.XGLMConfig">XGLMConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. 
- **dtype** (`jax.numpy.dtype`, *optional*, defaults to `jax.numpy.float32`) — The data type of the computation. Can be one of `jax.numpy.float32`, `jax.numpy.float16` (on GPUs) and `jax.numpy.bfloat16` (on TPUs). This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified, all the computation will be performed with the given `dtype`. **Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.** If you wish to change the dtype of the model parameters, see [to_fp16()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_fp16) and [to_bf16()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_bf16).

The XGLM Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings).

This model inherits from [FlaxPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.).

This model is also a Flax Linen [flax.nn.Module](https://flax.readthedocs.io/en/latest/_autosummary/flax.nn.module.html) subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:

- [Just-In-Time (JIT) compilation](https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit)
- [Automatic Differentiation](https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation)
- [Vectorization](https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap)
- [Parallelization](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap)
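As one illustration of the JIT point, the forward pass can be wrapped in `jax.jit` so that repeated calls with the same input shapes reuse a single compiled computation. This is a sketch assuming the `facebook/xglm-564M` checkpoint; the explicit `params` argument is the one documented in the `__call__` signature below:

```python
import jax
from transformers import AutoTokenizer, FlaxXGLMForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-564M")
model = FlaxXGLMForCausalLM.from_pretrained("facebook/xglm-564M")
inputs = tokenizer("Hello, my dog is cute", return_tensors="np")


@jax.jit
def forward(params, input_ids, attention_mask):
    # passing the parameters explicitly keeps the compiled function pure
    return model(input_ids, attention_mask=attention_mask, params=params).logits


logits = forward(model.params, inputs["input_ids"], inputs["attention_mask"])
```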
href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xglm/modeling_flax_xglm.py#L611" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60">: Array</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[jax.Array] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">position_ids<span class="opacity-60">: typing.Optional[jax.Array] = None</span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_hidden_states<span class="opacity-60">: typing.Optional[jax.Array] = None</span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_attention_mask<span class="opacity-60">: typing.Optional[jax.Array] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">train<span class="opacity-60">: bool = False</span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">params<span class="opacity-60">: dict = None</span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">past_key_values<span class="opacity-60">: dict = None</span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">dropout_rng<span class="opacity-60">: PRNGKey = None</span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><!-- HTML_TAG_START --><span><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions">transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions</a> or <code>tuple(torch.FloatTensor)</code></span><!-- HTML_TAG_END --></span></p> <div class="!mb-10 
relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXGLMForCausalLM.__call__.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXGLMForCausalLM.__call__.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>input_ids</strong> (<code>jnp.ndarray</code> of shape <code>(batch_size, sequence_length)</code>) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.<p></p> <p>Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. 
See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p> <p><a href="../glossary#input-ids">What are input IDs?</a><!-- HTML_TAG_END --> </p></span></span></li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXGLMForCausalLM.__call__.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXGLMForCausalLM.__call__.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>attention_mask</strong> (<code>jnp.ndarray</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a><!-- HTML_TAG_END --> </p></span></span></li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXGLMForCausalLM.__call__.position_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXGLMForCausalLM.__call__.position_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>position_ids</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Indices of positions of each input sequence tokens in the position embeddings. 
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
See <code>hidden_states</code> under returned tensors for more detail.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXGLMForCausalLM.__call__.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXGLMForCausalLM.__call__.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.<!-- HTML_TAG_END --> </span></span> </li></ul> <div id="transformers.FlaxXGLMForCausalLM.__call__.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <!-- HTML_TAG_START --> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions">transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions</a> or <code>tuple(torch.FloatTensor)</code></p> <!-- HTML_TAG_END --> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"><!-- HTML_TAG_START --> </p><p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions">transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/xglm#transformers.XGLMConfig">XGLMConfig</a>) and inputs.</p> <ul> <li> <p><strong>logits</strong> (<code>jnp.ndarray</code> of shape <code>(batch_size, sequence_length, config.vocab_size)</code>) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(jnp.ndarray)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>jnp.ndarray</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(jnp.ndarray)</code>, 
<em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>jnp.ndarray</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> <li> <p><strong>cross_attentions</strong> (<code>tuple(jnp.ndarray)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>jnp.ndarray</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Cross attentions weights after the attention softmax, used to compute the weighted average in the cross-attention heads.</p> </li> <li> <p><strong>past_key_values</strong> (<code>tuple(tuple(jnp.ndarray))</code>, <em>optional</em>, returned when <code>use_cache=True</code> is passed or when <code>config.use_cache=True</code>) — Tuple of <code>jnp.ndarray</code> tuples of length <code>config.n_layers</code>, with each tuple containing the cached key, value states of the self-attention and the cross-attention layers if model is used in encoder-decoder setting. Only relevant if <code>config.is_decoder = True</code>.</p> <p>Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see <code>past_key_values</code> input) to speed up sequential decoding.</p> </li> </ul> <!-- HTML_TAG_END --><p></p> </div></div> <p data-svelte-h="svelte-17axqe5">The <code>FlaxXGLMPreTrainedModel</code> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.FlaxXGLMForCausalLM.__call__.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXGLMForCausalLM.__call__.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex 
items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, FlaxXGLMForCausalLM <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"facebook/xglm-564M"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = FlaxXGLMForCausalLM.from_pretrained(<span class="hljs-string">"facebook/xglm-564M"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(<span class="hljs-string">"Hello, my dog is cute"</span>, return_tensors=<span class="hljs-string">"np"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model(**inputs) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># retrieve logts for next token</span> <span class="hljs-meta">&gt;&gt;&gt; </span>next_token_logits = outputs.logits[:, -<span class="hljs-number">1</span>]<!-- HTML_TAG_END --></pre></div></div></div></div> <p></p> <script> { __sveltekit_1yybmhh = { assets: "/docs/transformers/v4.34.0/en", base: "/docs/transformers/v4.34.0/en", env: {} }; const element = document.currentScript.parentElement; const data = [null,null]; Promise.all([ import("/docs/transformers/v4.34.0/en/_app/immutable/entry/start.c2db227a.js"), import("/docs/transformers/v4.34.0/en/_app/immutable/entry/app.879d9b87.js") ]).then(([kit, app]) => { kit.start(app, element, { node_ids: [0, 276], data, form: null, error: null }); }); } </script> <!-- HTML_TAG_END --></div> <div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/xmod" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>X-MOD</a> <a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/xlm" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">XLM<span class="ml-2 translate-y-px">→</span></a></div></div></div> <div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" 
data-props="{&quot;chapter&quot;:{&quot;title&quot;:&quot;XGLM&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;xglm&quot;,&quot;url&quot;:&quot;#xglm&quot;,&quot;sections&quot;:[{&quot;title&quot;:&quot;Overview&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;overview&quot;,&quot;url&quot;:&quot;#overview&quot;},{&quot;title&quot;:&quot;Documentation resources&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;documentation-resources&quot;,&quot;url&quot;:&quot;#documentation-resources&quot;},{&quot;title&quot;:&quot;XGLMConfig&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.XGLMConfig&quot;,&quot;url&quot;:&quot;#transformers.XGLMConfig&quot;},{&quot;title&quot;:&quot;XGLMTokenizer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.XGLMTokenizer&quot;,&quot;url&quot;:&quot;#transformers.XGLMTokenizer&quot;},{&quot;title&quot;:&quot;XGLMTokenizerFast&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.XGLMTokenizerFast&quot;,&quot;url&quot;:&quot;#transformers.XGLMTokenizerFast&quot;},{&quot;title&quot;:&quot;XGLMModel&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.XGLMModel&quot;,&quot;url&quot;:&quot;#transformers.XGLMModel&quot;},{&quot;title&quot;:&quot;XGLMForCausalLM&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.XGLMForCausalLM&quot;,&quot;url&quot;:&quot;#transformers.XGLMForCausalLM&quot;},{&quot;title&quot;:&quot;TFXGLMModel&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.TFXGLMModel&quot;,&quot;url&quot;:&quot;#transformers.TFXGLMModel&quot;},{&quot;title&quot;:&quot;TFXGLMForCausalLM&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.TFXGLMForCausalLM&quot;,&quot;url&quot;:&quot;#transformers.TFXGLMForCausalLM&quot;},{&quot;title&quot;:&quot;FlaxXGLMModel&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.FlaxXGLMModel&quot;,&quot;url&quot;:&quot;#transformers.FlaxXGLMModel&quot;},{&quot;title&quot;:&quot;FlaxXGLMForCausalLM&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.FlaxXGLMForCausalLM&quot;,&quot;url&quot;:&quot;#transformers.FlaxXGLMForCausalLM&quot;}]}}" data-target="SubSideMenu"> <nav class="hidden h-screen w-[270px] flex-none flex-col space-y-3 overflow-y-auto break-words border-l pt-24 pl-6 pr-10 pb-16 text-sm lg:flex 2xl:w-[305px]"><a href="#xglm" class=" text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-xglm"><!-- HTML_TAG_START -->XGLM<!-- HTML_TAG_END --></a> <a href="#overview" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-overview"><!-- HTML_TAG_START --><wbr>Overview<!-- HTML_TAG_END --></a> <a href="#documentation-resources" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-documentation-resources"><!-- HTML_TAG_START --><wbr>Documentation resources<!-- HTML_TAG_END --></a> <a href="#transformers.XGLMConfig" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.XGLMConfig"><!-- HTML_TAG_START -->XGLM<wbr>Config<!-- HTML_TAG_END --></a> <a href="#transformers.XGLMTokenizer" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.XGLMTokenizer"><!-- HTML_TAG_START -->XGLM<wbr>Tokenizer<!-- HTML_TAG_END --></a> <a href="#transformers.XGLMTokenizerFast" class="pl-4 
text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.XGLMTokenizerFast"><!-- HTML_TAG_START -->XGLM<wbr>Tokenizer<wbr>Fast<!-- HTML_TAG_END --></a> <a href="#transformers.XGLMModel" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.XGLMModel"><!-- HTML_TAG_START -->XGLM<wbr>Model<!-- HTML_TAG_END --></a> <a href="#transformers.XGLMForCausalLM" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.XGLMForCausalLM"><!-- HTML_TAG_START -->XGLM<wbr>For<wbr>CausalLM<!-- HTML_TAG_END --></a> <a href="#transformers.TFXGLMModel" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.TFXGLMModel"><!-- HTML_TAG_START -->TFXGLM<wbr>Model<!-- HTML_TAG_END --></a> <a href="#transformers.TFXGLMForCausalLM" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.TFXGLMForCausalLM"><!-- HTML_TAG_START -->TFXGLM<wbr>For<wbr>CausalLM<!-- HTML_TAG_END --></a> <a href="#transformers.FlaxXGLMModel" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.FlaxXGLMModel"><!-- HTML_TAG_START --><wbr>FlaxXGLM<wbr>Model<!-- HTML_TAG_END --></a> <a href="#transformers.FlaxXGLMForCausalLM" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.FlaxXGLMForCausalLM"><!-- HTML_TAG_START --><wbr>FlaxXGLM<wbr>For<wbr>CausalLM<!-- HTML_TAG_END --></a> </nav></div></div></div> <div id="doc-footer"></div></main> </div> <script> import("/front/build/kube-b0520c1/index.js"); window.moonSha = "kube-b0520c1/"; window.hubConfig = JSON.parse(`{"features":{"signupDisabled":false},"sshGitUrl":"git@hf.co","moonHttpUrl":"https://huggingface.co","captchaApiKey":"bd5f2066-93dc-4bdd-a64b-a24646ca3859","stripePublicKey":"pk_live_x2tdjFXBCvXo2FFmMybezpeM00J6gPCAAc","environment":"production","userAgent":"HuggingFace (production)"}`); </script> <!-- Stripe --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://js.stripe.com/v3/"; script.async = true; document.head.appendChild(script); } </script> <!-- Google analytics v4 --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL"; script.async = true; document.head.appendChild(script); window.dataLayer = window.dataLayer || []; function gtag() { if (window.dataLayer !== undefined) { window.dataLayer.push(arguments); } } gtag("js", new Date()); gtag("config", "G-8Q63TH4CSL", { page_path: "/docs/transformers/v4.34.0/en/model_doc/xglm" }); /// ^ See https://developers.google.com/analytics/devguides/collection/gtagjs/pages gtag("consent", "default", { ad_storage: "denied", analytics_storage: "denied" }); /// ^ See https://developers.google.com/tag-platform/gtagjs/reference#consent /// TODO: ask the user for their consent and update this with gtag('consent', 'update') } </script> <!-- Google Analytics v3 (deprecated) --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { (function (i, s, o, g, r, a, m) { i["GoogleAnalyticsObject"] = r; (i[r] = i[r] || function () 
{ (i[r].q = i[r].q || []).push(arguments); }), (i[r].l = 1 * new Date()); (a = s.createElement(o)), (m = s.getElementsByTagName(o)[0]); a.async = 1; a.src = g; m.parentNode.insertBefore(a, m); })(window, document, "script", "https://www.google-analytics.com/analytics.js", "ganalytics"); ganalytics("create", "UA-83738774-2", "auto"); ganalytics("send", "pageview", "/docs/transformers/v4.34.0/en/model_doc/xglm"); } </script> <iframe name="__privateStripeMetricsController3130" frameborder="0" allowtransparency="true" scrolling="no" role="presentation" allow="payment *" src="https://js.stripe.com/v3/m-outer-27c67c0d52761104439bb051c7856ab1.html#url=https%3A%2F%2Fhuggingface.co%2Fdocs%2Ftransformers%2Fv4.34.0%2Fen%2Fmodel_doc%2Fxglm&amp;title=XGLM&amp;referrer=&amp;muid=NA&amp;sid=NA&amp;version=6&amp;preview=false" aria-hidden="true" tabindex="-1" style="border: none !important; margin: 0px !important; padding: 0px !important; width: 1px !important; min-width: 100% !important; overflow: hidden !important; display: block !important; visibility: hidden !important; position: fixed !important; height: 1px !important; pointer-events: none !important; user-select: none !important;"></iframe></body></html>
2023-10-05T13:33:33.478Z
XLM
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/xlm
# XLM

[![Models](https://img.shields.io/badge/All_model_pages-xlm-blueviolet)](https://huggingface.co/models?filter=xlm) [![Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/docs-demos/xlm-mlm-en-2048)

## Overview

The XLM model was proposed in [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample, Alexis Conneau. It’s a transformer pretrained using one of the following objectives:

- a causal language modeling (CLM) objective (next token prediction),
- a masked language modeling (MLM) objective (BERT-like), or
- a Translation Language Modeling (TLM) objective (an extension of BERT’s MLM to multiple language inputs)

The abstract from the paper is the following:

_Recent studies have demonstrated the efficiency of generative pretraining for English natural language understanding. In this work, we extend this approach to multiple languages and show the effectiveness of cross-lingual pretraining. We propose two methods to learn cross-lingual language models (XLMs): one unsupervised that only relies on monolingual data, and one supervised that leverages parallel data with a new cross-lingual language model objective. We obtain state-of-the-art results on cross-lingual classification, unsupervised and supervised machine translation. On XNLI, our approach pushes the state of the art by an absolute gain of 4.9% accuracy. On unsupervised machine translation, we obtain 34.3 BLEU on WMT’16 German-English, improving the previous state of the art by more than 9 BLEU. On supervised machine translation, we obtain a new state of the art of 38.5 BLEU on WMT’16 Romanian-English, outperforming the previous best approach by more than 4 BLEU. Our code and pretrained models will be made publicly available._

Tips:

- XLM has many different checkpoints, which were trained using different objectives: CLM, MLM or TLM. Make sure to select the correct objective for your task (e.g. MLM checkpoints are not suitable for generation).
- XLM has multilingual checkpoints which leverage a specific `lang` parameter. Check out the [multi-lingual](../multilingual) page for more information. A minimal example of passing language embeddings to a CLM checkpoint is sketched below.
- A transformer model trained on several languages. There are three different types of training for this model and the library provides checkpoints for all of them:
  - Causal language modeling (CLM), which is the traditional autoregressive training (so this model could be in the previous section as well). One of the languages is selected for each training sample, and the model input is a sentence of 256 tokens that may span over several documents in one of those languages.
  - Masked language modeling (MLM), which is like RoBERTa. One of the languages is selected for each training sample, and the model input is a sentence of 256 tokens that may span over several documents in one of those languages, with dynamic masking of the tokens.
  - A combination of MLM and translation language modeling (TLM). This consists of concatenating a sentence in two different languages, with random masking. To predict one of the masked tokens, the model can use both the surrounding context in language 1 and the context given by language 2.

This model was contributed by [thomwolf](https://huggingface.co/thomwolf). The original code can be found [here](https://github.com/facebookresearch/XLM/).
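Multilingual checkpoints typically take an optional `langs` tensor with one language ID per token alongside `input_ids`; the IDs come from the tokenizer’s `lang2id` mapping. The snippet below is a minimal sketch rather than an official recipe: it assumes the `xlm-clm-enfr-1024` CLM checkpoint, and the [multi-lingual](../multilingual) page remains the reference for the full workflow.

```
>>> import torch
>>> from transformers import XLMTokenizer, XLMWithLMHeadModel

>>> tokenizer = XLMTokenizer.from_pretrained("xlm-clm-enfr-1024")
>>> model = XLMWithLMHeadModel.from_pretrained("xlm-clm-enfr-1024")

>>> # lang2id maps language codes to the IDs used by the language embeddings
>>> english_id = tokenizer.lang2id["en"]

>>> inputs = tokenizer("Wikipedia was used to", return_tensors="pt")
>>> # one language ID per token, same shape as input_ids
>>> langs = torch.full_like(inputs["input_ids"], english_id)

>>> outputs = model(**inputs, langs=langs)
>>> next_token_logits = outputs.logits[:, -1]
```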
## Documentation resources

- [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)
- [Causal language modeling task guide](../tasks/language_modeling)
- [Masked language modeling task guide](../tasks/masked_language_modeling)
- [Multiple choice task guide](../tasks/multiple_choice)

## XLMConfig

### class transformers.XLMConfig

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/configuration_xlm.py#L40)

( vocab\_size = 30145, emb\_dim = 2048, n\_layers = 12, n\_heads = 16, dropout = 0.1, attention\_dropout = 0.1, gelu\_activation = True, sinusoidal\_embeddings = False, causal = False, asm = False, n\_langs = 1, use\_lang\_emb = True, max\_position\_embeddings = 512, embed\_init\_std = 0.02209708691207961, layer\_norm\_eps = 1e-12, init\_std = 0.02, bos\_index = 0, eos\_index = 1, pad\_index = 2, unk\_index = 3, mask\_index = 5, is\_encoder = True, summary\_type = 'first', summary\_use\_proj = True, summary\_activation = None, summary\_proj\_to\_labels = True, summary\_first\_dropout = 0.1, start\_n\_top = 5, end\_n\_top = 5, mask\_token\_id = 0, lang\_id = 0, pad\_token\_id = 2, bos\_token\_id = 0, \*\*kwargs )

This is the configuration class to store the configuration of an [XLMModel](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMModel) or a [TFXLMModel](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.TFXLMModel). It is used to instantiate an XLM model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a configuration similar to that of the [xlm-mlm-en-2048](https://huggingface.co/xlm-mlm-en-2048) architecture.

Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information.

Examples:

```
>>> from transformers import XLMConfig, XLMModel

>>> # Initializing an XLM configuration
>>> configuration = XLMConfig()

>>> # Initializing a model (with random weights) from the configuration
>>> model = XLMModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```

## XLMTokenizer

### class transformers.XLMTokenizer

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/tokenization_xlm.py#L528)

( vocab\_file, merges\_file, unk\_token = '<unk>', bos\_token = '<s>', sep\_token = '</s>', pad\_token = '<pad>', cls\_token = '</s>', mask\_token = '<special1>', additional\_special\_tokens = \['<special0>', '<special1>', '<special2>', '<special3>', '<special4>', '<special5>', '<special6>', '<special7>', '<special8>', '<special9>'\], lang2id = None, id2lang = None, do\_lowercase\_and\_remove\_accent = True, \*\*kwargs )

Construct an XLM tokenizer. Based on Byte-Pair Encoding. The tokenization process is the following:

- Moses preprocessing and tokenization for most supported languages.
- Language specific tokenization for Chinese (Jieba), Japanese (KyTea) and Thai (PyThaiNLP).
- Optionally lowercases and normalizes all input text.
- The argument `special_tokens` and the function `set_special_tokens` can be used to add additional symbols (like "**classify**") to a vocabulary.
- The `lang2id` attribute maps the languages supported by the model with their IDs if provided (automatically set for pretrained vocabularies).
- The `id2lang` attribute does the reverse mapping if provided (automatically set for pretrained vocabularies).

This tokenizer inherits from [PreTrainedTokenizer](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer) which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.

#### build\_inputs\_with\_special\_tokens

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/tokenization_xlm.py#L870)

( token\_ids\_0: typing.List\[int\], token\_ids\_1: typing.Optional\[typing.List\[int\]\] = None ) → `List[int]`

Parameters

- **token\_ids\_0** (`List[int]`) — List of IDs to which the special tokens will be added.
- **token\_ids\_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs.

List of [input IDs](../glossary#input-ids) with the appropriate special tokens.

Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. An XLM sequence has the following format:

- single sequence: `<s> X </s>`
- pair of sequences: `<s> A </s> B </s>`

#### get\_special\_tokens\_mask

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/tokenization_xlm.py#L897)

( token\_ids\_0: typing.List\[int\], token\_ids\_1: typing.Optional\[typing.List\[int\]\] = None, already\_has\_special\_tokens: bool = False ) → `List[int]`

Parameters

- **token\_ids\_0** (`List[int]`) — List of IDs.
- **token\_ids\_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs.
- **already\_has\_special\_tokens** (`bool`, _optional_, defaults to `False`) — Whether or not the token list is already formatted with special tokens for the model.

A list of integers in the range \[0, 1\]: 1 for a special token, 0 for a sequence token.

Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer `prepare_for_model` method.

#### create\_token\_type\_ids\_from\_sequences

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/tokenization_xlm.py#L925)

( token\_ids\_0: typing.List\[int\], token\_ids\_1: typing.Optional\[typing.List\[int\]\] = None ) → `List[int]`

Parameters

- **token\_ids\_0** (`List[int]`) — List of IDs.
- **token\_ids\_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs.

List of [token type IDs](../glossary#token-type-ids) according to the given sequence(s).

Create a mask from the two sequences passed to be used in a sequence-pair classification task. An XLM sequence pair mask has the following format:

```
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence    | second sequence |
```

If `token_ids_1` is `None`, this method only returns the first portion of the mask (0s).
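The snippet below is a minimal sketch of how these helpers fit together for a sequence pair, assuming the `xlm-mlm-en-2048` checkpoint used in the other examples; in everyday use, calling the tokenizer directly on the two texts builds on the same helpers and returns the corresponding `input_ids` and `token_type_ids`.

```
>>> from transformers import XLMTokenizer

>>> tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-en-2048")

>>> ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("hello world"))
>>> ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("how are you"))

>>> # <s> A </s> B </s>
>>> input_ids = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)

>>> # 0s over the first segment and its special tokens, 1s over the second
>>> token_type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)

>>> # 1 marks the positions that hold special tokens once they are added
>>> special_tokens_mask = tokenizer.get_special_tokens_mask(ids_a, ids_b)
```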
#### save\_vocabulary [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/tokenization_xlm.py#L954) ( save\_directory: strfilename\_prefix: typing.Optional\[str\] = None ) ## XLM specific outputs ### class transformers.models.xlm.modeling\_xlm.XLMForQuestionAnsweringOutput [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_xlm.py#L262) ( loss: typing.Optional\[torch.FloatTensor\] = Nonestart\_top\_log\_probs: typing.Optional\[torch.FloatTensor\] = Nonestart\_top\_index: typing.Optional\[torch.LongTensor\] = Noneend\_top\_log\_probs: typing.Optional\[torch.FloatTensor\] = Noneend\_top\_index: typing.Optional\[torch.LongTensor\] = Nonecls\_logits: typing.Optional\[torch.FloatTensor\] = Nonehidden\_states: typing.Optional\[typing.Tuple\[torch.FloatTensor\]\] = Noneattentions: typing.Optional\[typing.Tuple\[torch.FloatTensor\]\] = None ) Base class for outputs of question answering models using a `SquadHead`. ## XLMModel ### class transformers.XLMModel [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_xlm.py#L393) ( config ) Parameters - **config** ([XLMConfig](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. The bare XLM Model transformer outputting raw hidden-states without any specific head on top. This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_xlm.py#L480) ( input\_ids: typing.Optional\[torch.Tensor\] = Noneattention\_mask: typing.Optional\[torch.Tensor\] = Nonelangs: typing.Optional\[torch.Tensor\] = Nonetoken\_type\_ids: typing.Optional\[torch.Tensor\] = Noneposition\_ids: typing.Optional\[torch.Tensor\] = Nonelengths: typing.Optional\[torch.Tensor\] = Nonecache: typing.Union\[typing.Dict\[str, torch.Tensor\], NoneType\] = Nonehead\_mask: typing.Optional\[torch.Tensor\] = Noneinputs\_embeds: typing.Optional\[torch.Tensor\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.BaseModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutput) or `tuple(torch.FloatTensor)` The [XLMModel](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMModel) forward method, overrides the `__call__` special method. 
Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: ``` >>> from transformers import AutoTokenizer, XLMModel >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048") >>> model = XLMModel.from_pretrained("xlm-mlm-en-2048") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state ``` ## XLMWithLMHeadModel ### class transformers.XLMWithLMHeadModel [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_xlm.py#L672) ( config ) Parameters - **config** ([XLMConfig](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. The XLM Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings). This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_xlm.py#L702) ( input\_ids: typing.Optional\[torch.Tensor\] = Noneattention\_mask: typing.Optional\[torch.Tensor\] = Nonelangs: typing.Optional\[torch.Tensor\] = Nonetoken\_type\_ids: typing.Optional\[torch.Tensor\] = Noneposition\_ids: typing.Optional\[torch.Tensor\] = Nonelengths: typing.Optional\[torch.Tensor\] = Nonecache: typing.Union\[typing.Dict\[str, torch.Tensor\], NoneType\] = Nonehead\_mask: typing.Optional\[torch.Tensor\] = Noneinputs\_embeds: typing.Optional\[torch.Tensor\] = Nonelabels: typing.Optional\[torch.Tensor\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.MaskedLMOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.MaskedLMOutput) or `tuple(torch.FloatTensor)` The [XLMWithLMHeadModel](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMWithLMHeadModel) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example: ``` >>> from transformers import AutoTokenizer, XLMWithLMHeadModel >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048") >>> model = XLMWithLMHeadModel.from_pretrained("xlm-mlm-en-2048") >>> inputs = tokenizer("The capital of France is <special1>.", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> >>> mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0] >>> predicted_token_id = logits[0, mask_token_index].argmax(axis=-1) >>> labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"] >>> >>> labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100) >>> outputs = model(**inputs, labels=labels) ``` ## XLMForSequenceClassification ### class transformers.XLMForSequenceClassification [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_xlm.py#L769) ( config ) Parameters - **config** ([XLMConfig](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. XLM Model with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_xlm.py#L781) ( input\_ids: typing.Optional\[torch.Tensor\] = Noneattention\_mask: typing.Optional\[torch.Tensor\] = Nonelangs: typing.Optional\[torch.Tensor\] = Nonetoken\_type\_ids: typing.Optional\[torch.Tensor\] = Noneposition\_ids: typing.Optional\[torch.Tensor\] = Nonelengths: typing.Optional\[torch.Tensor\] = Nonecache: typing.Union\[typing.Dict\[str, torch.Tensor\], NoneType\] = Nonehead\_mask: typing.Optional\[torch.Tensor\] = Noneinputs\_embeds: typing.Optional\[torch.Tensor\] = Nonelabels: typing.Optional\[torch.Tensor\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.SequenceClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput) or `tuple(torch.FloatTensor)` The [XLMForSequenceClassification](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMForSequenceClassification) forward method, overrides the `__call__` special method. 
Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example of single-label classification: ``` >>> import torch >>> from transformers import AutoTokenizer, XLMForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048") >>> model = XLMForSequenceClassification.from_pretrained("xlm-mlm-en-2048") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_class_id = logits.argmax().item() >>> >>> num_labels = len(model.config.id2label) >>> model = XLMForSequenceClassification.from_pretrained("xlm-mlm-en-2048", num_labels=num_labels) >>> labels = torch.tensor([1]) >>> loss = model(**inputs, labels=labels).loss ``` Example of multi-label classification: ``` >>> import torch >>> from transformers import AutoTokenizer, XLMForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048") >>> model = XLMForSequenceClassification.from_pretrained("xlm-mlm-en-2048", problem_type="multi_label_classification") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5] >>> >>> num_labels = len(model.config.id2label) >>> model = XLMForSequenceClassification.from_pretrained( ... "xlm-mlm-en-2048", num_labels=num_labels, problem_type="multi_label_classification" ... ) >>> labels = torch.sum( ... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1 ... ).to(torch.float) >>> loss = model(**inputs, labels=labels).loss ``` ## XLMForMultipleChoice ### class transformers.XLMForMultipleChoice [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_xlm.py#L1180) ( config\*inputs\*\*kwargs ) Parameters - **config** ([XLMConfig](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. XLM Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
#### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_xlm.py#L1191) ( input\_ids: typing.Optional\[torch.Tensor\] = Noneattention\_mask: typing.Optional\[torch.Tensor\] = Nonelangs: typing.Optional\[torch.Tensor\] = Nonetoken\_type\_ids: typing.Optional\[torch.Tensor\] = Noneposition\_ids: typing.Optional\[torch.Tensor\] = Nonelengths: typing.Optional\[torch.Tensor\] = Nonecache: typing.Union\[typing.Dict\[str, torch.Tensor\], NoneType\] = Nonehead\_mask: typing.Optional\[torch.Tensor\] = Noneinputs\_embeds: typing.Optional\[torch.Tensor\] = Nonelabels: typing.Optional\[torch.Tensor\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.MultipleChoiceModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.MultipleChoiceModelOutput) or `tuple(torch.FloatTensor)` The [XLMForMultipleChoice](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMForMultipleChoice) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: ``` >>> from transformers import AutoTokenizer, XLMForMultipleChoice >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048") >>> model = XLMForMultipleChoice.from_pretrained("xlm-mlm-en-2048") >>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced." >>> choice0 = "It is eaten with a fork and a knife." >>> choice1 = "It is eaten while held in the hand." >>> labels = torch.tensor(0).unsqueeze(0) >>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True) >>> outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) >>> >>> loss = outputs.loss >>> logits = outputs.logits ``` ## XLMForTokenClassification ### class transformers.XLMForTokenClassification [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_xlm.py#L1096) ( config ) Parameters - **config** ([XLMConfig](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. XLM Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
#### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_xlm.py#L1108) ( input\_ids: typing.Optional\[torch.Tensor\] = Noneattention\_mask: typing.Optional\[torch.Tensor\] = Nonelangs: typing.Optional\[torch.Tensor\] = Nonetoken\_type\_ids: typing.Optional\[torch.Tensor\] = Noneposition\_ids: typing.Optional\[torch.Tensor\] = Nonelengths: typing.Optional\[torch.Tensor\] = Nonecache: typing.Union\[typing.Dict\[str, torch.Tensor\], NoneType\] = Nonehead\_mask: typing.Optional\[torch.Tensor\] = Noneinputs\_embeds: typing.Optional\[torch.Tensor\] = Nonelabels: typing.Optional\[torch.Tensor\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.TokenClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.TokenClassifierOutput) or `tuple(torch.FloatTensor)` The [XLMForTokenClassification](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMForTokenClassification) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: ``` >>> from transformers import AutoTokenizer, XLMForTokenClassification >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048") >>> model = XLMForTokenClassification.from_pretrained("xlm-mlm-en-2048") >>> inputs = tokenizer( ... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt" ... ) >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_token_class_ids = logits.argmax(-1) >>> >>> >>> >>> predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]] >>> labels = predicted_token_class_ids >>> loss = model(**inputs, labels=labels).loss ``` ## XLMForQuestionAnsweringSimple ### class transformers.XLMForQuestionAnsweringSimple [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_xlm.py#L871) ( config ) Parameters - **config** ([XLMConfig](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. XLM Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute `span start logits` and `span end logits`). This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
#### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_xlm.py#L881) ( input\_ids: typing.Optional\[torch.Tensor\] = Noneattention\_mask: typing.Optional\[torch.Tensor\] = Nonelangs: typing.Optional\[torch.Tensor\] = Nonetoken\_type\_ids: typing.Optional\[torch.Tensor\] = Noneposition\_ids: typing.Optional\[torch.Tensor\] = Nonelengths: typing.Optional\[torch.Tensor\] = Nonecache: typing.Union\[typing.Dict\[str, torch.Tensor\], NoneType\] = Nonehead\_mask: typing.Optional\[torch.Tensor\] = Noneinputs\_embeds: typing.Optional\[torch.Tensor\] = Nonestart\_positions: typing.Optional\[torch.Tensor\] = Noneend\_positions: typing.Optional\[torch.Tensor\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.QuestionAnsweringModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.QuestionAnsweringModelOutput) or `tuple(torch.FloatTensor)` The [XLMForQuestionAnsweringSimple](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMForQuestionAnsweringSimple) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: ``` >>> from transformers import AutoTokenizer, XLMForQuestionAnsweringSimple >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048") >>> model = XLMForQuestionAnsweringSimple.from_pretrained("xlm-mlm-en-2048") >>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" >>> inputs = tokenizer(question, text, return_tensors="pt") >>> with torch.no_grad(): ... outputs = model(**inputs) >>> answer_start_index = outputs.start_logits.argmax() >>> answer_end_index = outputs.end_logits.argmax() >>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1] >>> >>> target_start_index = torch.tensor([14]) >>> target_end_index = torch.tensor([15]) >>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index) >>> loss = outputs.loss ``` ## XLMForQuestionAnswering ### class transformers.XLMForQuestionAnswering [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_xlm.py#L975) ( config ) Parameters - **config** ([XLMConfig](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. XLM Model with a beam-search span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute `span start logits` and `span end logits`). This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) 
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_xlm.py#L985) ( input\_ids: typing.Optional\[torch.Tensor\] = Noneattention\_mask: typing.Optional\[torch.Tensor\] = Nonelangs: typing.Optional\[torch.Tensor\] = Nonetoken\_type\_ids: typing.Optional\[torch.Tensor\] = Noneposition\_ids: typing.Optional\[torch.Tensor\] = Nonelengths: typing.Optional\[torch.Tensor\] = Nonecache: typing.Union\[typing.Dict\[str, torch.Tensor\], NoneType\] = Nonehead\_mask: typing.Optional\[torch.Tensor\] = Noneinputs\_embeds: typing.Optional\[torch.Tensor\] = Nonestart\_positions: typing.Optional\[torch.Tensor\] = Noneend\_positions: typing.Optional\[torch.Tensor\] = Noneis\_impossible: typing.Optional\[torch.Tensor\] = Nonecls\_index: typing.Optional\[torch.Tensor\] = Nonep\_mask: typing.Optional\[torch.Tensor\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = None ) → [transformers.models.xlm.modeling\_xlm.XLMForQuestionAnsweringOutput](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.models.xlm.modeling_xlm.XLMForQuestionAnsweringOutput) or `tuple(torch.FloatTensor)` The [XLMForQuestionAnswering](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMForQuestionAnswering) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: ``` >>> from transformers import AutoTokenizer, XLMForQuestionAnswering >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048") >>> model = XLMForQuestionAnswering.from_pretrained("xlm-mlm-en-2048") >>> input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze( ... 0 ... ) >>> start_positions = torch.tensor([1]) >>> end_positions = torch.tensor([3]) >>> outputs = model(input_ids, start_positions=start_positions, end_positions=end_positions) >>> loss = outputs.loss ``` ## TFXLMModel ### class transformers.TFXLMModel [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_tf_xlm.py#L688) ( \*args\*\*kwargs ) Parameters - **config** ([XLMConfig](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. The bare XLM Model transformer outputting raw hidden-states without any specific head on top. This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) 
This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in `transformers` accept two formats as input: - having all inputs as keyword arguments (like PyTorch models), or - having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like `model.fit()` things should “just work” for you - just pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: - a single Tensor with `input_ids` only and nothing else: `model(input_ids)` - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])` - a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_ids": input_ids, "token_type_ids": token_type_ids})` Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! #### call [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_tf_xlm.py#L693) ( input\_ids: TFModelInputType | None = Noneattention\_mask: tf.Tensor | None = Nonelangs: tf.Tensor | None = Nonetoken\_type\_ids: tf.Tensor | None = Noneposition\_ids: tf.Tensor | None = Nonelengths: tf.Tensor | None = Nonecache: Dict\[str, tf.Tensor\] | None = Nonehead\_mask: tf.Tensor | None = Noneinputs\_embeds: tf.Tensor | None = Noneoutput\_attentions: bool | None = Noneoutput\_hidden\_states: bool | None = Nonereturn\_dict: bool | None = Nonetraining: bool = False ) → [transformers.modeling\_tf\_outputs.TFBaseModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFBaseModelOutput) or `tuple(tf.Tensor)` The [TFXLMModel](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.TFXLMModel) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example:

```
>>> from transformers import AutoTokenizer, TFXLMModel
>>> import tensorflow as tf

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048")
>>> model = TFXLMModel.from_pretrained("xlm-mlm-en-2048")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> outputs = model(inputs)

>>> last_hidden_states = outputs.last_hidden_state
```

## TFXLMWithLMHeadModel

### class transformers.TFXLMWithLMHeadModel

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_tf_xlm.py#L793)

( \*args, \*\*kwargs )

Parameters

- **config** ([XLMConfig](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

The XLM Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings).

This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)

This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

TensorFlow models and layers in `transformers` accept two formats as input:

- having all inputs as keyword arguments (like PyTorch models), or
- having all inputs as a list, tuple or dict in the first positional argument.

The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like `model.fit()` things should “just work” for you - just pass your inputs and labels in any format that `model.fit()` supports!

If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first positional argument (see the sketch below):

- a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
- a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
- a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`

Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function!
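As a quick illustration of the input formats described above, the following sketch (not part of the generated reference examples; it simply reuses the `xlm-mlm-en-2048` checkpoint used elsewhere on this page) passes the same encoded sentence to `TFXLMWithLMHeadModel` in all three ways:

```
from transformers import AutoTokenizer, TFXLMWithLMHeadModel

tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048")
model = TFXLMWithLMHeadModel.from_pretrained("xlm-mlm-en-2048")
encoding = tokenizer("Hello, my dog is cute", return_tensors="tf")

# 1. keyword arguments, like a PyTorch model
out_kwargs = model(input_ids=encoding["input_ids"], attention_mask=encoding["attention_mask"])

# 2. a list with the tensors in the order given in the docstring
out_list = model([encoding["input_ids"], encoding["attention_mask"]])

# 3. a dictionary keyed by the input names
out_dict = model({"input_ids": encoding["input_ids"], "attention_mask": encoding["attention_mask"]})

# all three calls run the same forward pass and return logits of the same shape
assert out_kwargs.logits.shape == out_list.logits.shape == out_dict.logits.shape
```

The keyword-argument form is usually the most readable; the list and dictionary forms mainly matter when wiring the model into Keras `Functional`-API graphs or `model.fit()` pipelines.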
#### call

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_tf_xlm.py#L822)

( input\_ids: TFModelInputType | None = None, attention\_mask: np.ndarray | tf.Tensor | None = None, langs: np.ndarray | tf.Tensor | None = None, token\_type\_ids: np.ndarray | tf.Tensor | None = None, position\_ids: np.ndarray | tf.Tensor | None = None, lengths: np.ndarray | tf.Tensor | None = None, cache: Optional\[Dict\[str, tf.Tensor\]\] = None, head\_mask: np.ndarray | tf.Tensor | None = None, inputs\_embeds: np.ndarray | tf.Tensor | None = None, output\_attentions: Optional\[bool\] = None, output\_hidden\_states: Optional\[bool\] = None, return\_dict: Optional\[bool\] = None, training: bool = False ) → `transformers.models.xlm.modeling_tf_xlm.TFXLMWithLMHeadModelOutput` or `tuple(tf.Tensor)`

The [TFXLMWithLMHeadModel](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.TFXLMWithLMHeadModel) forward method overrides the `__call__` special method. Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

```
>>> from transformers import AutoTokenizer, TFXLMWithLMHeadModel
>>> import tensorflow as tf

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048")
>>> model = TFXLMWithLMHeadModel.from_pretrained("xlm-mlm-en-2048")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> outputs = model(inputs)

>>> logits = outputs.logits
```

## TFXLMForSequenceClassification

### class transformers.TFXLMForSequenceClassification

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_tf_xlm.py#L879)

( \*args, \*\*kwargs )

Parameters

- **config** ([XLMConfig](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

XLM Model with a sequence classification/regression head on top (a linear layer on top of the pooled output), e.g. for GLUE tasks.

This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)

This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

TensorFlow models and layers in `transformers` accept two formats as input:

- having all inputs as keyword arguments (like PyTorch models), or
- having all inputs as a list, tuple or dict in the first positional argument.

The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like `model.fit()` things should “just work” for you - just pass your inputs and labels in any format that `model.fit()` supports!
If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

- a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
- a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
- a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`

Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function!

#### call

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_tf_xlm.py#L887)

( input\_ids: TFModelInputType | None = None, attention\_mask: np.ndarray | tf.Tensor | None = None, langs: np.ndarray | tf.Tensor | None = None, token\_type\_ids: np.ndarray | tf.Tensor | None = None, position\_ids: np.ndarray | tf.Tensor | None = None, lengths: np.ndarray | tf.Tensor | None = None, cache: Optional\[Dict\[str, tf.Tensor\]\] = None, head\_mask: np.ndarray | tf.Tensor | None = None, inputs\_embeds: np.ndarray | tf.Tensor | None = None, output\_attentions: Optional\[bool\] = None, output\_hidden\_states: Optional\[bool\] = None, return\_dict: Optional\[bool\] = None, labels: np.ndarray | tf.Tensor | None = None, training: bool = False ) → [transformers.modeling\_tf\_outputs.TFSequenceClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFSequenceClassifierOutput) or `tuple(tf.Tensor)`

The [TFXLMForSequenceClassification](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.TFXLMForSequenceClassification) forward method overrides the `__call__` special method. Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

```
>>> from transformers import AutoTokenizer, TFXLMForSequenceClassification
>>> import tensorflow as tf

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048")
>>> model = TFXLMForSequenceClassification.from_pretrained("xlm-mlm-en-2048")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")

>>> logits = model(**inputs).logits

>>> predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
```

```
>>> # to train on your own labels, reload the model with the number of classes in your dataset
>>> num_labels = len(model.config.id2label)
>>> model = TFXLMForSequenceClassification.from_pretrained("xlm-mlm-en-2048", num_labels=num_labels)

>>> labels = tf.constant(1)
>>> loss = model(**inputs, labels=labels).loss
```

## TFXLMForMultipleChoice

### class transformers.TFXLMForMultipleChoice

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_tf_xlm.py#L957)

( \*args, \*\*kwargs )

Parameters

- **config** ([XLMConfig](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

XLM Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax), e.g. for RocStories/SWAG tasks.

This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)

This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

TensorFlow models and layers in `transformers` accept two formats as input:

- having all inputs as keyword arguments (like PyTorch models), or
- having all inputs as a list, tuple or dict in the first positional argument.

The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like `model.fit()` things should “just work” for you - just pass your inputs and labels in any format that `model.fit()` supports!

If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

- a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
- a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
- a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`

Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function!

#### call

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_tf_xlm.py#L986)

( input\_ids: TFModelInputType | None = None, attention\_mask: np.ndarray | tf.Tensor | None = None, langs: np.ndarray | tf.Tensor | None = None, token\_type\_ids: np.ndarray | tf.Tensor | None = None, position\_ids: np.ndarray | tf.Tensor | None = None, lengths: np.ndarray | tf.Tensor | None = None, cache: Optional\[Dict\[str, tf.Tensor\]\] = None, head\_mask: np.ndarray | tf.Tensor | None = None, inputs\_embeds: np.ndarray | tf.Tensor | None = None, output\_attentions: Optional\[bool\] = None, output\_hidden\_states: Optional\[bool\] = None, return\_dict: Optional\[bool\] = None, labels: np.ndarray | tf.Tensor | None = None, training: bool = False ) → [transformers.modeling\_tf\_outputs.TFMultipleChoiceModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput) or `tuple(tf.Tensor)`

The [TFXLMForMultipleChoice](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.TFXLMForMultipleChoice) forward method overrides the `__call__` special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

```
>>> from transformers import AutoTokenizer, TFXLMForMultipleChoice
>>> import tensorflow as tf

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048")
>>> model = TFXLMForMultipleChoice.from_pretrained("xlm-mlm-en-2048")

>>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
>>> choice0 = "It is eaten with a fork and a knife."
>>> choice1 = "It is eaten while held in the hand."

>>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="tf", padding=True)
>>> inputs = {k: tf.expand_dims(v, 0) for k, v in encoding.items()}  # add a batch dimension of 1
>>> outputs = model(inputs)

>>> # the logits contain one score per choice
>>> logits = outputs.logits
```

## TFXLMForTokenClassification

### class transformers.TFXLMForTokenClassification

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_tf_xlm.py#L1076)

( \*args, \*\*kwargs )

Parameters

- **config** ([XLMConfig](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

XLM Model with a token classification head on top (a linear layer on top of the hidden-states output), e.g. for Named-Entity-Recognition (NER) tasks.

This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)

This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

TensorFlow models and layers in `transformers` accept two formats as input:

- having all inputs as keyword arguments (like PyTorch models), or
- having all inputs as a list, tuple or dict in the first positional argument.

The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like `model.fit()` things should “just work” for you - just pass your inputs and labels in any format that `model.fit()` supports!
If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

- a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
- a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
- a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`

Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function!

#### call

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_tf_xlm.py#L1087)

( input\_ids: TFModelInputType | None = None, attention\_mask: np.ndarray | tf.Tensor | None = None, langs: np.ndarray | tf.Tensor | None = None, token\_type\_ids: np.ndarray | tf.Tensor | None = None, position\_ids: np.ndarray | tf.Tensor | None = None, lengths: np.ndarray | tf.Tensor | None = None, cache: Optional\[Dict\[str, tf.Tensor\]\] = None, head\_mask: np.ndarray | tf.Tensor | None = None, inputs\_embeds: np.ndarray | tf.Tensor | None = None, output\_attentions: Optional\[bool\] = None, output\_hidden\_states: Optional\[bool\] = None, return\_dict: Optional\[bool\] = None, labels: np.ndarray | tf.Tensor | None = None, training: bool = False ) → [transformers.modeling\_tf\_outputs.TFTokenClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFTokenClassifierOutput) or `tuple(tf.Tensor)`

The [TFXLMForTokenClassification](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.TFXLMForTokenClassification) forward method overrides the `__call__` special method. Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

```
>>> from transformers import AutoTokenizer, TFXLMForTokenClassification
>>> import tensorflow as tf

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048")
>>> model = TFXLMForTokenClassification.from_pretrained("xlm-mlm-en-2048")

>>> inputs = tokenizer(
...     "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="tf"
... )

>>> logits = model(**inputs).logits
>>> predicted_token_class_ids = tf.math.argmax(logits, axis=-1)

>>> # note that the model classifies tokens rather than words, so there may be
>>> # more predicted token classes than input words
>>> predicted_tokens_classes = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()]
```

```
>>> labels = predicted_token_class_ids
>>> loss = tf.math.reduce_mean(model(**inputs, labels=labels).loss)
```

## TFXLMForQuestionAnsweringSimple

### class transformers.TFXLMForQuestionAnsweringSimple

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_tf_xlm.py#L1156)

( \*args, \*\*kwargs )

Parameters

- **config** ([XLMConfig](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

XLM Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute `span start logits` and `span end logits`).

This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)

This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

TensorFlow models and layers in `transformers` accept two formats as input:

- having all inputs as keyword arguments (like PyTorch models), or
- having all inputs as a list, tuple or dict in the first positional argument.

The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like `model.fit()` things should “just work” for you - just pass your inputs and labels in any format that `model.fit()` supports!

If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

- a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
- a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
- a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`

Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function (see the sketch below).
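To make the subclassing remark concrete, here is a minimal, hypothetical sketch (the wrapper class `XLMSpanExtractor` is not part of `transformers`) that embeds `TFXLMForQuestionAnsweringSimple` in a custom `tf.keras.Model` and simply forwards the tokenizer's dictionary output as keyword arguments:

```
import tensorflow as tf
from transformers import TFXLMForQuestionAnsweringSimple


class XLMSpanExtractor(tf.keras.Model):
    """Hypothetical wrapper: a custom Keras model around the XLM QA head."""

    def __init__(self):
        super().__init__()
        self.qa_model = TFXLMForQuestionAnsweringSimple.from_pretrained("xlm-mlm-en-2048")

    def call(self, inputs, training=False):
        # `inputs` is the dict returned by the tokenizer; no special packing is needed,
        # the wrapped model is called like any other Python function
        outputs = self.qa_model(**inputs, training=training)
        return outputs.start_logits, outputs.end_logits
```

Inside `call()` the wrapped model receives ordinary keyword arguments, which is exactly the “pass inputs like you would to any other Python function” case described above.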
#### call

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_tf_xlm.py#L1164)

( input\_ids: TFModelInputType | None = None, attention\_mask: np.ndarray | tf.Tensor | None = None, langs: np.ndarray | tf.Tensor | None = None, token\_type\_ids: np.ndarray | tf.Tensor | None = None, position\_ids: np.ndarray | tf.Tensor | None = None, lengths: np.ndarray | tf.Tensor | None = None, cache: Optional\[Dict\[str, tf.Tensor\]\] = None, head\_mask: np.ndarray | tf.Tensor | None = None, inputs\_embeds: np.ndarray | tf.Tensor | None = None, output\_attentions: Optional\[bool\] = None, output\_hidden\_states: Optional\[bool\] = None, return\_dict: Optional\[bool\] = None, start\_positions: np.ndarray | tf.Tensor | None = None, end\_positions: np.ndarray | tf.Tensor | None = None, training: bool = False ) → [transformers.modeling\_tf\_outputs.TFQuestionAnsweringModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput) or `tuple(tf.Tensor)`

The [TFXLMForQuestionAnsweringSimple](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.TFXLMForQuestionAnsweringSimple) forward method overrides the `__call__` special method. Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

```
>>> from transformers import AutoTokenizer, TFXLMForQuestionAnsweringSimple
>>> import tensorflow as tf

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048")
>>> model = TFXLMForQuestionAnsweringSimple.from_pretrained("xlm-mlm-en-2048")

>>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"

>>> inputs = tokenizer(question, text, return_tensors="tf")
>>> outputs = model(**inputs)

>>> answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
>>> answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])

>>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
```

```
>>> # example target span, used here only to compute a loss
>>> target_start_index = tf.constant([14])
>>> target_end_index = tf.constant([15])

>>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
>>> loss = tf.math.reduce_mean(outputs.loss)
```
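As a small follow-up (not part of the generated example above), the predicted token span can be turned back into a readable answer string with the tokenizer:

```
>>> # decode the predicted span back into text
>>> tokenizer.decode(predict_answer_tokens, skip_special_tokens=True)
```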
Integration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/deepspeed&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/deepspeed&quot;},{&quot;title&quot;:&quot;Feature Extractor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/feature_extractor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/feature_extractor&quot;},{&quot;title&quot;:&quot;Image Processor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/image_processor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/image_processor&quot;}]},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Text models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALBERT&quot;,&quot;id&quot;:&quot;model_doc/albert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/albert&quot;},{&quot;title&quot;:&quot;BART&quot;,&quot;id&quot;:&quot;model_doc/bart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bart&quot;},{&quot;title&quot;:&quot;BARThez&quot;,&quot;id&quot;:&quot;model_doc/barthez&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/barthez&quot;},{&quot;title&quot;:&quot;BARTpho&quot;,&quot;id&quot;:&quot;model_doc/bartpho&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bartpho&quot;},{&quot;title&quot;:&quot;BERT&quot;,&quot;id&quot;:&quot;model_doc/bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert&quot;},{&quot;title&quot;:&quot;BertGeneration&quot;,&quot;id&quot;:&quot;model_doc/bert-generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-generation&quot;},{&quot;title&quot;:&quot;BertJapanese&quot;,&quot;id&quot;:&quot;model_doc/bert-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-japanese&quot;},{&quot;title&quot;:&quot;Bertweet&quot;,&quot;id&quot;:&quot;model_doc/bertweet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bertweet&quot;},{&quot;title&quot;:&quot;BigBird&quot;,&quot;id&quot;:&quot;model_doc/big_bird&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/big_bird&quot;},{&quot;title&quot;:&quot;BigBirdPegasus&quot;,&quot;id&quot;:&quot;model_doc/bigbird_pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bigbird_pegasus&quot;},{&quot;title&quot;:&quot;BioGpt&quot;,&quot;id&quot;:&quot;model_doc/biogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/biogpt&quot;},{&quot;title&quot;:&quot;Blenderbot&quot;,&quot;id&quot;:&quot;model_doc/blenderbot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot&quot;},{&quot;title&quot;:&quot;Blenderbot 
Small&quot;,&quot;id&quot;:&quot;model_doc/blenderbot-small&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot-small&quot;},{&quot;title&quot;:&quot;BLOOM&quot;,&quot;id&quot;:&quot;model_doc/bloom&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bloom&quot;},{&quot;title&quot;:&quot;BORT&quot;,&quot;id&quot;:&quot;model_doc/bort&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bort&quot;},{&quot;title&quot;:&quot;ByT5&quot;,&quot;id&quot;:&quot;model_doc/byt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/byt5&quot;},{&quot;title&quot;:&quot;CamemBERT&quot;,&quot;id&quot;:&quot;model_doc/camembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/camembert&quot;},{&quot;title&quot;:&quot;CANINE&quot;,&quot;id&quot;:&quot;model_doc/canine&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/canine&quot;},{&quot;title&quot;:&quot;CodeGen&quot;,&quot;id&quot;:&quot;model_doc/codegen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/codegen&quot;},{&quot;title&quot;:&quot;CodeLlama&quot;,&quot;id&quot;:&quot;model_doc/code_llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/code_llama&quot;},{&quot;title&quot;:&quot;ConvBERT&quot;,&quot;id&quot;:&quot;model_doc/convbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convbert&quot;},{&quot;title&quot;:&quot;CPM&quot;,&quot;id&quot;:&quot;model_doc/cpm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpm&quot;},{&quot;title&quot;:&quot;CPMANT&quot;,&quot;id&quot;:&quot;model_doc/cpmant&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpmant&quot;},{&quot;title&quot;:&quot;CTRL&quot;,&quot;id&quot;:&quot;model_doc/ctrl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ctrl&quot;},{&quot;title&quot;:&quot;DeBERTa&quot;,&quot;id&quot;:&quot;model_doc/deberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta&quot;},{&quot;title&quot;:&quot;DeBERTa-v2&quot;,&quot;id&quot;:&quot;model_doc/deberta-v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta-v2&quot;},{&quot;title&quot;:&quot;DialoGPT&quot;,&quot;id&quot;:&quot;model_doc/dialogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dialogpt&quot;},{&quot;title&quot;:&quot;DistilBERT&quot;,&quot;id&quot;:&quot;model_doc/distilbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/distilbert&quot;},{&quot;title&quot;:&quot;DPR&quot;,&quot;id&quot;:&quot;model_doc/dpr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpr&quot;},{&quot;title&quot;:&quot;ELECTRA&quot;,&quot;id&quot;:&quot;model_doc/electra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/electra&quot;},{&quot;title&quot;:&quot;Encoder Decoder 
Models&quot;,&quot;id&quot;:&quot;model_doc/encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encoder-decoder&quot;},{&quot;title&quot;:&quot;ERNIE&quot;,&quot;id&quot;:&quot;model_doc/ernie&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie&quot;},{&quot;title&quot;:&quot;ErnieM&quot;,&quot;id&quot;:&quot;model_doc/ernie_m&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie_m&quot;},{&quot;title&quot;:&quot;ESM&quot;,&quot;id&quot;:&quot;model_doc/esm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/esm&quot;},{&quot;title&quot;:&quot;Falcon&quot;,&quot;id&quot;:&quot;model_doc/falcon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/falcon&quot;},{&quot;title&quot;:&quot;FLAN-T5&quot;,&quot;id&quot;:&quot;model_doc/flan-t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-t5&quot;},{&quot;title&quot;:&quot;FLAN-UL2&quot;,&quot;id&quot;:&quot;model_doc/flan-ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-ul2&quot;},{&quot;title&quot;:&quot;FlauBERT&quot;,&quot;id&quot;:&quot;model_doc/flaubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flaubert&quot;},{&quot;title&quot;:&quot;FNet&quot;,&quot;id&quot;:&quot;model_doc/fnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fnet&quot;},{&quot;title&quot;:&quot;FSMT&quot;,&quot;id&quot;:&quot;model_doc/fsmt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fsmt&quot;},{&quot;title&quot;:&quot;Funnel Transformer&quot;,&quot;id&quot;:&quot;model_doc/funnel&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/funnel&quot;},{&quot;title&quot;:&quot;GPT&quot;,&quot;id&quot;:&quot;model_doc/openai-gpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/openai-gpt&quot;},{&quot;title&quot;:&quot;GPT Neo&quot;,&quot;id&quot;:&quot;model_doc/gpt_neo&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neo&quot;},{&quot;title&quot;:&quot;GPT NeoX&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox&quot;},{&quot;title&quot;:&quot;GPT NeoX Japanese&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox_japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese&quot;},{&quot;title&quot;:&quot;GPT-J&quot;,&quot;id&quot;:&quot;model_doc/gptj&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptj&quot;},{&quot;title&quot;:&quot;GPT2&quot;,&quot;id&quot;:&quot;model_doc/gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt2&quot;},{&quot;title&quot;:&quot;GPTBigCode&quot;,&quot;id&quot;:&quot;model_doc/gpt_bigcode&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode&quot;},{&quot;title&quot;:&quot;GPTSAN 
Japanese&quot;,&quot;id&quot;:&quot;model_doc/gptsan-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese&quot;},{&quot;title&quot;:&quot;GPTSw3&quot;,&quot;id&quot;:&quot;model_doc/gpt-sw3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt-sw3&quot;},{&quot;title&quot;:&quot;HerBERT&quot;,&quot;id&quot;:&quot;model_doc/herbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/herbert&quot;},{&quot;title&quot;:&quot;I-BERT&quot;,&quot;id&quot;:&quot;model_doc/ibert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ibert&quot;},{&quot;title&quot;:&quot;Jukebox&quot;,&quot;id&quot;:&quot;model_doc/jukebox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/jukebox&quot;},{&quot;title&quot;:&quot;LED&quot;,&quot;id&quot;:&quot;model_doc/led&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/led&quot;},{&quot;title&quot;:&quot;LLaMA&quot;,&quot;id&quot;:&quot;model_doc/llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama&quot;},{&quot;title&quot;:&quot;Llama2&quot;,&quot;id&quot;:&quot;model_doc/llama2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama2&quot;},{&quot;title&quot;:&quot;Longformer&quot;,&quot;id&quot;:&quot;model_doc/longformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longformer&quot;},{&quot;title&quot;:&quot;LongT5&quot;,&quot;id&quot;:&quot;model_doc/longt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longt5&quot;},{&quot;title&quot;:&quot;LUKE&quot;,&quot;id&quot;:&quot;model_doc/luke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/luke&quot;},{&quot;title&quot;:&quot;M2M100&quot;,&quot;id&quot;:&quot;model_doc/m2m_100&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/m2m_100&quot;},{&quot;title&quot;:&quot;MarianMT&quot;,&quot;id&quot;:&quot;model_doc/marian&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/marian&quot;},{&quot;title&quot;:&quot;MarkupLM&quot;,&quot;id&quot;:&quot;model_doc/markuplm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/markuplm&quot;},{&quot;title&quot;:&quot;MBart and 
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot;PLBart&quot;,&quot;id&quot;
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/
model_doc/wavlm&quot;},{&quot;title&quot;:&quot;Whisper&quot;,&quot;id&quot;:&quot;model_doc/whisper&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/whisper&quot;},{&quot;title&quot;:&quot;XLS-R&quot;,&quot;id&quot;:&quot;model_doc/xls_r&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xls_r&quot;},{&quot;title&quot;:&quot;XLSR-Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/xlsr_wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2&quot;}]},{&quot;title&quot;:&quot;Multimodal models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALIGN&quot;,&quot;id&quot;:&quot;model_doc/align&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/align&quot;},{&quot;title&quot;:&quot;AltCLIP&quot;,&quot;id&quot;:&quot;model_doc/altclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/altclip&quot;},{&quot;title&quot;:&quot;BLIP&quot;,&quot;id&quot;:&quot;model_doc/blip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip&quot;},{&quot;title&quot;:&quot;BLIP-2&quot;,&quot;id&quot;:&quot;model_doc/blip-2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip-2&quot;},{&quot;title&quot;:&quot;BridgeTower&quot;,&quot;id&quot;:&quot;model_doc/bridgetower&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bridgetower&quot;},{&quot;title&quot;:&quot;BROS&quot;,&quot;id&quot;:&quot;model_doc/bros&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bros&quot;},{&quot;title&quot;:&quot;Chinese-CLIP&quot;,&quot;id&quot;:&quot;model_doc/chinese_clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/chinese_clip&quot;},{&quot;title&quot;:&quot;CLIP&quot;,&quot;id&quot;:&quot;model_doc/clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clip&quot;},{&quot;title&quot;:&quot;CLIPSeg&quot;,&quot;id&quot;:&quot;model_doc/clipseg&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clipseg&quot;},{&quot;title&quot;:&quot;Data2Vec&quot;,&quot;id&quot;:&quot;model_doc/data2vec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/data2vec&quot;},{&quot;title&quot;:&quot;DePlot&quot;,&quot;id&quot;:&quot;model_doc/deplot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deplot&quot;},{&quot;title&quot;:&quot;Donut&quot;,&quot;id&quot;:&quot;model_doc/donut&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/donut&quot;},{&quot;title&quot;:&quot;FLAVA&quot;,&quot;id&quot;:&quot;model_doc/flava&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flava&quot;},{&quot;title&quot;:&quot;GIT&quot;,&quot;id&quot;:&quot;model_doc/git&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/git&quot;},{&quot;title&quot;:&quot;GroupViT&quot;,&quot;id&quot;:&quot;model_doc/groupvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/groupvit&quot;},{&quot;title&quot;:&quot;IDEFICS&quot;,&quot;id&quot;:&quot;model_doc/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/idefics&quot;},{&quot;title&quot;:&quot;InstructBLIP&quot;,&quot;id&quot;:&quot;model_doc/instructblip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/instructblip&quot;},{&quot;title&quot;:&quot;LayoutLM&quot;,&quot;id&quot;:&quot;model_doc/layoutlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlm&quot;},{&quot;title&quot;:&quot;LayoutLMV2&quot;,&quot;i
d&quot;:&quot;model_doc/layoutlmv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv2&quot;},{&quot;title&quot;:&quot;LayoutLMV3&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv3&quot;},{&quot;title&quot;:&quot;LayoutXLM&quot;,&quot;id&quot;:&quot;model_doc/layoutxlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutxlm&quot;},{&quot;title&quot;:&quot;LiLT&quot;,&quot;id&quot;:&quot;model_doc/lilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lilt&quot;},{&quot;title&quot;:&quot;LXMERT&quot;,&quot;id&quot;:&quot;model_doc/lxmert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lxmert&quot;},{&quot;title&quot;:&quot;MatCha&quot;,&quot;id&quot;:&quot;model_doc/matcha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/matcha&quot;},{&quot;title&quot;:&quot;MGP-STR&quot;,&quot;id&quot;:&quot;model_doc/mgp-str&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mgp-str&quot;},{&quot;title&quot;:&quot;Nougat&quot;,&quot;id&quot;:&quot;model_doc/nougat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nougat&quot;},{&quot;title&quot;:&quot;OneFormer&quot;,&quot;id&quot;:&quot;model_doc/oneformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/oneformer&quot;},{&quot;title&quot;:&quot;OWL-ViT&quot;,&quot;id&quot;:&quot;model_doc/owlvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/owlvit&quot;},{&quot;title&quot;:&quot;Perceiver&quot;,&quot;id&quot;:&quot;model_doc/perceiver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/perceiver&quot;},{&quot;title&quot;:&quot;Pix2Struct&quot;,&quot;id&quot;:&quot;model_doc/pix2struct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pix2struct&quot;},{&quot;title&quot;:&quot;Segment Anything&quot;,&quot;id&quot;:&quot;model_doc/sam&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sam&quot;},{&quot;title&quot;:&quot;Speech Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/speech-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder&quot;},{&quot;title&quot;:&quot;TAPAS&quot;,&quot;id&quot;:&quot;model_doc/tapas&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapas&quot;},{&quot;title&quot;:&quot;TrOCR&quot;,&quot;id&quot;:&quot;model_doc/trocr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trocr&quot;},{&quot;title&quot;:&quot;TVLT&quot;,&quot;id&quot;:&quot;model_doc/tvlt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tvlt&quot;},{&quot;title&quot;:&quot;ViLT&quot;,&quot;id&quot;:&quot;model_doc/vilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vilt&quot;},{&quot;title&quot;:&quot;Vision Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/vision-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder&quot;},{&quot;title&quot;:&quot;Vision Text Dual 
Encoder&quot;,&quot;id&quot;:&quot;model_doc/vision-text-dual-encoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder&quot;},{&quot;title&quot;:&quot;VisualBERT&quot;,&quot;id&quot;:&quot;model_doc/visual_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/visual_bert&quot;},{&quot;title&quot;:&quot;X-CLIP&quot;,&quot;id&quot;:&quot;model_doc/xclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xclip&quot;}]},{&quot;title&quot;:&quot;Reinforcement learning models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Decision Transformer&quot;,&quot;id&quot;:&quot;model_doc/decision_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/decision_transformer&quot;},{&quot;title&quot;:&quot;Trajectory Transformer&quot;,&quot;id&quot;:&quot;model_doc/trajectory_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer&quot;}]},{&quot;title&quot;:&quot;Time series models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Autoformer&quot;,&quot;id&quot;:&quot;model_doc/autoformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/autoformer&quot;},{&quot;title&quot;:&quot;Informer&quot;,&quot;id&quot;:&quot;model_doc/informer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/informer&quot;},{&quot;title&quot;:&quot;Time Series Transformer&quot;,&quot;id&quot;:&quot;model_doc/time_series_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/time_series_transformer&quot;}]},{&quot;title&quot;:&quot;Graph models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Graphormer&quot;,&quot;id&quot;:&quot;model_doc/graphormer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/graphormer&quot;}]}]},{&quot;title&quot;:&quot;Internal Helpers&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Custom Layers and Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/modeling_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/modeling_utils&quot;},{&quot;title&quot;:&quot;Utilities for pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/pipelines_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/pipelines_utils&quot;},{&quot;title&quot;:&quot;Utilities for Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/tokenization_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/tokenization_utils&quot;},{&quot;title&quot;:&quot;Utilities for Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/trainer_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/trainer_utils&quot;},{&quot;title&quot;:&quot;Utilities for Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/generation_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/generation_utils&quot;},{&quot;title&quot;:&quot;Utilities for Image Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/image_processing_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/image_processing_utils&quot;},{&quot;title&quot;:&quot;Utilities for Audio 
processing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/audio_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/audio_utils&quot;},{&quot;title&quot;:&quot;General Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/file_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/file_utils&quot;},{&quot;title&quot;:&quot;Utilities for Time Series&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/time_series_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/time_series_utils&quot;}]}]}],&quot;chapterId&quot;:&quot;model_doc/xlm&quot;,&quot;docType&quot;:&quot;docs&quot;,&quot;isLoggedIn&quot;:false,&quot;lang&quot;:&quot;en&quot;,&quot;langs&quot;:[&quot;de&quot;,&quot;en&quot;,&quot;es&quot;,&quot;fr&quot;,&quot;it&quot;,&quot;ko&quot;,&quot;pt&quot;,&quot;zh&quot;],&quot;library&quot;:&quot;transformers&quot;,&quot;theme&quot;:&quot;light&quot;,&quot;version&quot;:&quot;v4.34.0&quot;,&quot;versions&quot;:[{&quot;version&quot;:&quot;main&quot;},{&quot;version&quot;:&quot;v4.34.0&quot;},{&quot;version&quot;:&quot;v4.33.3&quot;},{&quot;version&quot;:&quot;v4.33.2&quot;},{&quot;version&quot;:&quot;v4.33.0&quot;},{&quot;version&quot;:&quot;v4.32.1&quot;},{&quot;version&quot;:&quot;v4.32.0&quot;},{&quot;version&quot;:&quot;v4.31.0&quot;},{&quot;version&quot;:&quot;v4.30.0&quot;},{&quot;version&quot;:&quot;v4.29.1&quot;},{&quot;version&quot;:&quot;v4.29.0&quot;},{&quot;version&quot;:&quot;v4.28.1&quot;},{&quot;version&quot;:&quot;v4.28.0&quot;},{&quot;version&quot;:&quot;v4.27.2&quot;},{&quot;version&quot;:&quot;v4.27.1&quot;},{&quot;version&quot;:&quot;v4.27.0&quot;},{&quot;version&quot;:&quot;v4.26.1&quot;},{&quot;version&quot;:&quot;v4.26.0&quot;},{&quot;version&quot;:&quot;v4.25.1&quot;},{&quot;version&quot;:&quot;v4.24.0&quot;},{&quot;version&quot;:&quot;v4.23.1&quot;},{&quot;version&quot;:&quot;v4.23.0&quot;},{&quot;version&quot;:&quot;v4.22.2&quot;},{&quot;version&quot;:&quot;v4.22.1&quot;},{&quot;version&quot;:&quot;v4.22.0&quot;},{&quot;version&quot;:&quot;v4.21.3&quot;},{&quot;version&quot;:&quot;v4.21.2&quot;},{&quot;version&quot;:&quot;v4.21.1&quot;},{&quot;version&quot;:&quot;v4.21.0&quot;},{&quot;version&quot;:&quot;v4.20.1&quot;},{&quot;version&quot;:&quot;v4.20.0&quot;},{&quot;version&quot;:&quot;v4.19.4&quot;},{&quot;version&quot;:&quot;v4.19.3&quot;},{&quot;version&quot;:&quot;v4.19.2&quot;},{&quot;version&quot;:&quot;v4.19.0&quot;},{&quot;version&quot;:&quot;v4.18.0&quot;},{&quot;version&quot;:&quot;v4.17.0&quot;},{&quot;version&quot;:&quot;v4.16.2&quot;},{&quot;version&quot;:&quot;v4.16.1&quot;},{&quot;version&quot;:&quot;v4.16.0&quot;},{&quot;version&quot;:&quot;v4.15.0&quot;},{&quot;version&quot;:&quot;v4.14.1&quot;},{&quot;version&quot;:&quot;v4.13.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.5&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.4&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.10.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.10.
0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.10.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.0.0&quot;},{&quot;version&quot;:&quot;doc-bui
class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Audio models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Multimodal models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Reinforcement learning models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Time series models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Graph models<!-- HTML_TAG_END --></span> </span></span> </div></div> </div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Internal Helpers<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/modeling_utils"><!-- HTML_TAG_START -->Custom Layers and Utilities<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/pipelines_utils"><!-- HTML_TAG_START -->Utilities for pipelines<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/tokenization_utils"><!-- HTML_TAG_START -->Utilities for Tokenizers<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/trainer_utils"><!-- HTML_TAG_START -->Utilities for Trainer<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 
first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/generation_utils"><!-- HTML_TAG_START -->Utilities for Generation<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/image_processing_utils"><!-- HTML_TAG_START -->Utilities for Image Processors<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/audio_utils"><!-- HTML_TAG_START -->Utilities for Audio processing<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/file_utils"><!-- HTML_TAG_START -->General Utilities<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/time_series_utils"><!-- HTML_TAG_START -->Utilities for Time Series<!-- HTML_TAG_END --> </a> </div> </div></nav></div></div></div> <div class="z-1 min-w-0 flex-1"> <div class="px-6 pt-6 md:px-12 md:pt-16 md:pb-16"><div class="max-w-4xl mx-auto mb-10"><div class="relative overflow-hidden rounded-xl bg-gradient-to-br from-orange-300/10 py-5 px-4 ring-1 ring-orange-100/70 md:px-6 md:py-8"><img alt="Hugging Face's logo" class="absolute -right-6 -bottom-6 w-28 -rotate-45 md:hidden" src="/front/assets/huggingface_logo-noborder.svg"> <div class="mb-2 text-2xl font-bold dark:text-gray-200 md:mb-0">Join the Hugging Face community</div> <p class="mb-4 text-lg text-gray-400 dark:text-gray-300 md:mb-8">and get access to the augmented documentation experience </p> <div class="mb-8 hidden space-y-4 md:block xl:flex xl:space-y-0 xl:space-x-6"><div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-indigo-100 to-indigo-100/20 dark:to-indigo-100"><svg class="text-indigo-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Collaborate on models, datasets and Spaces </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-orange-100 to-orange-100/20 dark:to-orange-50"><svg xmlns="http://www.w3.org/2000/svg" 
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" class="text-xl text-yellow-400" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M11 15H6l7-14v8h5l-7 14v-8z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Faster examples with accelerated inference </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-gray-500/10 to-gray-500/5"><svg class="text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M14.9804 3C14.9217 3.0002 14.8631 3.00555 14.8054 3.016C11.622 3.58252 8.76073 5.30669 6.77248 7.85653C4.78422 10.4064 3.80955 13.6016 4.03612 16.8271C4.26268 20.0525 5.67447 23.0801 7.99967 25.327C10.3249 27.5738 13.3991 28.8811 16.6304 28.997C16.7944 29.003 16.9584 28.997 17.1204 28.997C19.2193 28.9984 21.2877 28.4943 23.1507 27.5274C25.0137 26.5605 26.6164 25.1592 27.8234 23.442C27.9212 23.294 27.9783 23.1229 27.9889 22.9458C27.9995 22.7687 27.9633 22.592 27.884 22.4333C27.8046 22.2747 27.6848 22.1397 27.5367 22.0421C27.3887 21.9444 27.2175 21.8875 27.0404 21.877C25.0426 21.7017 23.112 21.0693 21.3976 20.0288C19.6832 18.9884 18.231 17.5676 17.1533 15.8764C16.0756 14.1852 15.4011 12.2688 15.1822 10.2754C14.9632 8.28193 15.2055 6.26484 15.8904 4.38C15.9486 4.22913 15.97 4.06652 15.9527 3.90572C15.9354 3.74492 15.8799 3.59059 15.7909 3.45557C15.7019 3.32055 15.5819 3.20877 15.4409 3.12952C15.2999 3.05028 15.142 3.00587 14.9804 3Z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Switch between documentation themes </div></div></div> <div class="flex items-center space-x-2.5"><a href="/join"><button class="rounded-lg bg-white bg-gradient-to-br from-gray-100/20 to-gray-200/60 py-1.5 px-5 font-semibold text-gray-700 shadow-sm ring-1 ring-gray-300/60 hover:to-gray-100/70 hover:ring-gray-300/30 active:shadow-inner">Sign Up</button></a> <p class="text-gray-500 dark:text-gray-300">to get started</p></div></div></div> <div class="prose-doc prose relative mx-auto max-w-4xl break-words"> <p></p> <h1 class="relative group"><a id="xlm" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#xlm"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-zmv7xk">XLM</span></h1> <div class="flex flex-wrap space-x-1" 
data-svelte-h="svelte-7m5bn6"><a href="https://huggingface.co/models?filter=xlm"><img alt="Models" src="https://img.shields.io/badge/All_model_pages-xlm-blueviolet"></a> <a href="https://huggingface.co/spaces/docs-demos/xlm-mlm-en-2048"><img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"></a></div> <h2 class="relative group"><a id="overview" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#overview"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1jsw1pg">Overview</span></h2> <p data-svelte-h="svelte-1jiewv4">The XLM model was proposed in <a href="https://arxiv.org/abs/1901.07291" rel="nofollow">Cross-lingual Language Model Pretraining</a> by Guillaume Lample, Alexis Conneau. It’s a transformer pretrained using one of the following objectives:</p> <ul data-svelte-h="svelte-4ojsv2"><li>a causal language modeling (CLM) objective (next token prediction),</li> <li>a masked language modeling (MLM) objective (BERT-like), or</li> <li>a Translation Language Modeling (TLM) object (extension of BERT’s MLM to multiple language inputs)</li></ul> <p data-svelte-h="svelte-vfdo9a">The abstract from the paper is the following:</p> <p data-svelte-h="svelte-1s611dc"><em>Recent studies have demonstrated the efficiency of generative pretraining for English natural language understanding. In this work, we extend this approach to multiple languages and show the effectiveness of cross-lingual pretraining. We propose two methods to learn cross-lingual language models (XLMs): one unsupervised that only relies on monolingual data, and one supervised that leverages parallel data with a new cross-lingual language model objective. We obtain state-of-the-art results on cross-lingual classification, unsupervised and supervised machine translation. On XNLI, our approach pushes the state of the art by an absolute gain of 4.9% accuracy. On unsupervised machine translation, we obtain 34.3 BLEU on WMT’16 German-English, improving the previous state of the art by more than 9 BLEU. On supervised machine translation, we obtain a new state of the art of 38.5 BLEU on WMT’16 Romanian-English, outperforming the previous best approach by more than 4 BLEU. Our code and pretrained models will be made publicly available.</em></p> <p data-svelte-h="svelte-axv494">Tips:</p> <ul data-svelte-h="svelte-23jwmu"><li><p>XLM has many different checkpoints, which were trained using different objectives: CLM, MLM or TLM. Make sure to select the correct objective for your task (e.g. MLM checkpoints are not suitable for generation).</p></li> <li><p>XLM has multilingual checkpoints which leverage a specific <code>lang</code> parameter. 
Check out the <a href="../multilingual">multi-lingual</a> page for more information.</p></li> <li><p>A transformer model trained on several languages. There are three different type of training for this model and the library provides checkpoints for all of them:</p> <ul><li>Causal language modeling (CLM) which is the traditional autoregressive training (so this model could be in the previous section as well). One of the languages is selected for each training sample, and the model input is a sentence of 256 tokens, that may span over several documents in one of those languages.</li> <li>Masked language modeling (MLM) which is like RoBERTa. One of the languages is selected for each training sample, and the model input is a sentence of 256 tokens, that may span over several documents in one of those languages, with dynamic masking of the tokens.</li> <li>A combination of MLM and translation language modeling (TLM). This consists of concatenating a sentence in two different languages, with random masking. To predict one of the masked tokens, the model can use both, the surrounding context in language 1 and the context given by language 2.</li></ul></li></ul> <p data-svelte-h="svelte-14fck8o">This model was contributed by <a href="https://huggingface.co/thomwolf" rel="nofollow">thomwolf</a>. The original code can be found <a href="https://github.com/facebookresearch/XLM/" rel="nofollow">here</a>.</p> <h2 class="relative group"><a id="documentation-resources" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#documentation-resources"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-n3f0j0">Documentation resources</span></h2> <ul data-svelte-h="svelte-p1b16m"><li><a href="../tasks/sequence_classification">Text classification task guide</a></li> <li><a href="../tasks/token_classification">Token classification task guide</a></li> <li><a href="../tasks/question_answering">Question answering task guide</a></li> <li><a href="../tasks/language_modeling">Causal language modeling task guide</a></li> <li><a href="../tasks/masked_language_modeling">Masked language modeling task guide</a></li> <li><a href="../tasks/multiple_choice">Multiple choice task guide</a></li></ul> <h2 class="relative group"><a id="transformers.XLMConfig" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMConfig"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path 
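As a quick illustration of the `lang` machinery mentioned in the tips above, the snippet below is a minimal sketch of running a multilingual XLM checkpoint with explicit language embeddings. It assumes the `xlm-clm-enfr-1024` checkpoint (an English/French model trained with the CLM objective); see the multi-lingual page for the full walkthrough.

```python
import torch
from transformers import XLMTokenizer, XLMWithLMHeadModel

# Multilingual CLM checkpoint with English/French language embeddings (assumed for this sketch)
tokenizer = XLMTokenizer.from_pretrained("xlm-clm-enfr-1024")
model = XLMWithLMHeadModel.from_pretrained("xlm-clm-enfr-1024")

# The tokenizer maps language codes to the ids used by the language embeddings, e.g. {"en": 0, "fr": 1}
input_ids = torch.tensor([tokenizer.encode("Wikipedia was used to")])
language_id = tokenizer.lang2id["en"]

# Every token in the batch gets the id of the language it is written in
langs = torch.full_like(input_ids, language_id)

outputs = model(input_ids, langs=langs)
```

Monolingual checkpoints such as `xlm-mlm-en-2048` do not use language embeddings, so the `langs` argument can simply be omitted there.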
d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-xkau36">XLMConfig</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XLMConfig"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">XLMConfig</span></span></h3> <a id="transformers.XLMConfig" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XLMConfig"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/configuration_xlm.py#L40" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white 
dark:hover:text-black">vocab_size<span class="opacity-60"> = 30145</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">emb_dim<span class="opacity-60"> = 2048</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">n_layers<span class="opacity-60"> = 12</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">n_heads<span class="opacity-60"> = 16</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">dropout<span class="opacity-60"> = 0.1</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_dropout<span class="opacity-60"> = 0.1</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">gelu_activation<span class="opacity-60"> = True</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">sinusoidal_embeddings<span class="opacity-60"> = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">causal<span class="opacity-60"> = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">asm<span class="opacity-60"> = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">n_langs<span class="opacity-60"> = 1</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">use_lang_emb<span class="opacity-60"> = True</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">max_position_embeddings<span class="opacity-60"> = 512</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">embed_init_std<span class="opacity-60"> = 0.02209708691207961</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">layer_norm_eps<span class="opacity-60"> = 1e-12</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">init_std<span class="opacity-60"> = 0.02</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">bos_index<span class="opacity-60"> = 0</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">eos_index<span class="opacity-60"> = 1</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">pad_index<span 
class="opacity-60"> = 2</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">unk_index<span class="opacity-60"> = 3</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">mask_index<span class="opacity-60"> = 5</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">is_encoder<span class="opacity-60"> = True</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">summary_type<span class="opacity-60"> = 'first'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">summary_use_proj<span class="opacity-60"> = True</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">summary_activation<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">summary_proj_to_labels<span class="opacity-60"> = True</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">summary_first_dropout<span class="opacity-60"> = 0.1</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">start_n_top<span class="opacity-60"> = 5</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">end_n_top<span class="opacity-60"> = 5</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">mask_token_id<span class="opacity-60"> = 0</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">lang_id<span class="opacity-60"> = 0</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">pad_token_id<span class="opacity-60"> = 2</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">bos_token_id<span class="opacity-60"> = 0</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 31 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span 
class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMConfig.vocab_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMConfig.vocab_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>vocab_size</strong> (<code>int</code>, <em>optional</em>, defaults to 30145) — Vocabulary size of the BERT model. Defines the number of different tokens that can be represented by the <code>inputs_ids</code> passed when calling <a href="/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMModel">XLMModel</a> or <a href="/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.TFXLMModel">TFXLMModel</a>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMConfig.emb_dim" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMConfig.emb_dim"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>emb_dim</strong> (<code>int</code>, <em>optional</em>, defaults to 2048) — Dimensionality of the encoder layers and the pooler layer.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMConfig.n_layer" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMConfig.n_layer"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 
1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>n_layer</strong> (<code>int</code>, <em>optional</em>, defaults to 12) — Number of hidden layers in the Transformer encoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMConfig.n_head" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMConfig.n_head"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>n_head</strong> (<code>int</code>, <em>optional</em>, defaults to 16) — Number of attention heads for each attention layer in the Transformer encoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMConfig.dropout" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMConfig.dropout"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>dropout</strong> (<code>float</code>, <em>optional</em>, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMConfig.attention_dropout" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 
with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMConfig.attention_dropout"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_dropout</strong> (<code>float</code>, <em>optional</em>, defaults to 0.1) — The dropout probability for the attention mechanism</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMConfig.gelu_activation" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMConfig.gelu_activation"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>gelu_activation</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether or not to use <em>gelu</em> for the activations instead of <em>relu</em>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMConfig.sinusoidal_embeddings" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMConfig.sinusoidal_embeddings"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> 
<span><strong>sinusoidal_embeddings</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether or not to use sinusoidal positional embeddings instead of absolute positional embeddings.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMConfig.causal" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMConfig.causal"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>causal</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether or not the model should behave in a causal manner. Causal models use a triangular attention mask in order to only attend to the left-side context instead if a bidirectional context.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMConfig.asm" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMConfig.asm"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>asm</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether or not to use an adaptive log softmax projection layer instead of a linear layer for the prediction layer.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMConfig.n_langs" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMConfig.n_langs"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 
0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>n_langs</strong> (<code>int</code>, <em>optional</em>, defaults to 1) — The number of languages the model handles. Set to 1 for monolingual models.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMConfig.use_lang_emb" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMConfig.use_lang_emb"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>use_lang_emb</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether to use language embeddings. 
Some models use additional language embeddings, see <a href="http://huggingface.co/transformers/multilingual.html#xlm-language-embeddings" rel="nofollow">the multilingual models page</a> for information on how to use them.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMConfig.max_position_embeddings" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMConfig.max_position_embeddings"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>max_position_embeddings</strong> (<code>int</code>, <em>optional</em>, defaults to 512) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMConfig.embed_init_std" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMConfig.embed_init_std"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>embed_init_std</strong> (<code>float</code>, <em>optional</em>, defaults to 2048^-0.5) — The standard deviation of the truncated_normal_initializer for initializing the embedding matrices.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMConfig.init_std" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMConfig.init_std"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid 
meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>init_std</strong> (<code>int</code>, <em>optional</em>, defaults to 50257) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices except the embedding matrices.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMConfig.layer_norm_eps" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMConfig.layer_norm_eps"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>layer_norm_eps</strong> (<code>float</code>, <em>optional</em>, defaults to 1e-12) — The epsilon used by the layer normalization layers.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMConfig.bos_index" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMConfig.bos_index"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>bos_index</strong> (<code>int</code>, <em>optional</em>, defaults to 0) — The index of the beginning of sentence token in the vocabulary.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMConfig.eos_index" 
class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMConfig.eos_index"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>eos_index</strong> (<code>int</code>, <em>optional</em>, defaults to 1) — The index of the end of sentence token in the vocabulary.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMConfig.pad_index" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMConfig.pad_index"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>pad_index</strong> (<code>int</code>, <em>optional</em>, defaults to 2) — The index of the padding token in the vocabulary.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMConfig.unk_index" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMConfig.unk_index"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> 
<span><strong>unk_index</strong> (<code>int</code>, <em>optional</em>, defaults to 3) — The index of the unknown token in the vocabulary.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMConfig.mask_index" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMConfig.mask_index"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>mask_index</strong> (<code>int</code>, <em>optional</em>, defaults to 5) — The index of the masking token in the vocabulary.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMConfig.is_encoder(bool," class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMConfig.is_encoder(bool,"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>is_encoder(<code>bool</code>,</strong> <em>optional</em>, defaults to <code>True</code>) — Whether or not the initialized model should be a transformer encoder or decoder as seen in Vaswani et al.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMConfig.summary_type" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMConfig.summary_type"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 
1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>summary_type</strong> (<code>string</code>, <em>optional</em>, defaults to “first”) — Argument used when doing sequence summary. Used in the sequence classification and multiple choice models.<p></p> <p>Has to be one of the following options:</p> <ul> <li><code>"last"</code>: Take the last token hidden state (like XLNet).</li> <li><code>"first"</code>: Take the first token hidden state (like BERT).</li> <li><code>"mean"</code>: Take the mean of all tokens hidden states.</li> <li><code>"cls_index"</code>: Supply a Tensor of classification token position (like GPT/GPT-2).</li> <li><code>"attn"</code>: Not implemented now, use multi-head attention.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMConfig.summary_use_proj" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMConfig.summary_use_proj"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>summary_use_proj</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Argument used when doing sequence summary. 
Used in the sequence classification and multiple choice models.<p></p> <p>Whether or not to add a projection after the vector extraction.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMConfig.summary_activation" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMConfig.summary_activation"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>summary_activation</strong> (<code>str</code>, <em>optional</em>) — Argument used when doing sequence summary. Used in the sequence classification and multiple choice models.<p></p> <p>Pass <code>"tanh"</code> for a tanh activation to the output, any other value will result in no activation.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMConfig.summary_proj_to_labels" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMConfig.summary_proj_to_labels"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>summary_proj_to_labels</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Used in the sequence classification and multiple choice models.<p></p> <p>Whether the projection outputs should have <code>config.num_labels</code> or <code>config.hidden_size</code> classes.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMConfig.summary_first_dropout" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMConfig.summary_first_dropout"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" 
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>summary_first_dropout</strong> (<code>float</code>, <em>optional</em>, defaults to 0.1) — Used in the sequence classification and multiple choice models.<p></p> <p>The dropout ratio to be used after the projection and activation.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMConfig.start_n_top" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMConfig.start_n_top"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>start_n_top</strong> (<code>int</code>, <em>optional</em>, defaults to 5) — Used in the SQuAD evaluation script.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMConfig.end_n_top" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMConfig.end_n_top"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>end_n_top</strong> (<code>int</code>, <em>optional</em>, defaults to 5) — Used in the SQuAD evaluation script.</span></span> </li><li class="text-base !pl-4 my-3 
rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMConfig.mask_token_id" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMConfig.mask_token_id"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>mask_token_id</strong> (<code>int</code>, <em>optional</em>, defaults to 0) — Model agnostic parameter to identify masked tokens when generating text in an MLM context.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMConfig.lang_id" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMConfig.lang_id"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>lang_id</strong> (<code>int</code>, <em>optional</em>, defaults to 1) — The ID of the language used by the model. This parameter is used when generating text in a given language.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-o222n1">This is the configuration class to store the configuration of a <a href="/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMModel">XLMModel</a> or a <a href="/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.TFXLMModel">TFXLMModel</a>. It is used to instantiate a XLM model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the <a href="https://huggingface.co/xlm-mlm-en-2048" rel="nofollow">xlm-mlm-en-2048</a> architecture.</p> <p data-svelte-h="svelte-10kqkkl">Configuration objects inherit from <a href="/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig">PretrainedConfig</a> and can be used to control the model outputs. 
Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information.

Examples:

```python
>>> from transformers import XLMConfig, XLMModel

>>> # Initializing a XLM configuration
>>> configuration = XLMConfig()

>>> # Initializing a model (with random weights) from the configuration
>>> model = XLMModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
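The configuration arguments documented above can also be overridden when instantiating the class. The following is an illustrative sketch only; the sizes are hypothetical and deliberately small, not a recommended setup:

```python
>>> from transformers import XLMConfig, XLMModel

>>> # Illustrative only: override a few defaults to define a much smaller model
>>> small_configuration = XLMConfig(emb_dim=256, n_layers=4, n_heads=4)

>>> # Initializing a model (with random weights) from the custom configuration
>>> small_model = XLMModel(small_configuration)
```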
xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-16zqvo9">XLMTokenizer</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XLMTokenizer"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">XLMTokenizer</span></span></h3> <a id="transformers.XLMTokenizer" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XLMTokenizer"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/tokenization_xlm.py#L528" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs 
md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">vocab_file<span class="opacity-60"></span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">merges_file<span class="opacity-60"></span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">unk_token<span class="opacity-60"> = '&lt;unk&gt;'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">bos_token<span class="opacity-60"> = '&lt;s&gt;'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">sep_token<span class="opacity-60"> = '&lt;/s&gt;'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">pad_token<span class="opacity-60"> = '&lt;pad&gt;'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">cls_token<span class="opacity-60"> = '&lt;/s&gt;'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">mask_token<span class="opacity-60"> = '&lt;special1&gt;'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">additional_special_tokens<span class="opacity-60"> = ['&lt;special0&gt;', '&lt;special1&gt;', '&lt;special2&gt;', '&lt;special3&gt;', '&lt;special4&gt;', '&lt;special5&gt;', '&lt;special6&gt;', '&lt;special7&gt;', '&lt;special8&gt;', '&lt;special9&gt;']</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">lang2id<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">id2lang<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">do_lowercase_and_remove_accent<span class="opacity-60"> = True</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 12 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li 
class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMTokenizer.vocab_file" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMTokenizer.vocab_file"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>vocab_file</strong> (<code>str</code>) — Vocabulary file.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMTokenizer.merges_file" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMTokenizer.merges_file"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>merges_file</strong> (<code>str</code>) — Merges file.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMTokenizer.unk_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMTokenizer.unk_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" 
fill="currentColor"></path></svg></span></a> <span><strong>unk_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;unk&gt;"</code>) — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMTokenizer.bos_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMTokenizer.bos_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>bos_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;s&gt;"</code>) — The beginning of sequence token that was used during pretraining. Can be used a sequence classifier token.<p></p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"> <p>When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the <code>cls_token</code>.</p> </div></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMTokenizer.sep_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMTokenizer.sep_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>sep_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;/s&gt;"</code>) — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. 
It is also used as the last token of a sequence built with special tokens.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMTokenizer.pad_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMTokenizer.pad_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>pad_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;pad&gt;"</code>) — The token used for padding, for example when batching sequences of different lengths.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMTokenizer.cls_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMTokenizer.cls_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>cls_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;/s&gt;"</code>) — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). 
It is the first token of the sequence when built with special tokens.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMTokenizer.mask_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMTokenizer.mask_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>mask_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;special1&gt;"</code>) — The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMTokenizer.additional_special_tokens" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMTokenizer.additional_special_tokens"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>additional_special_tokens</strong> (<code>List[str]</code>, <em>optional</em>, defaults to <code>["&lt;special0&gt;","&lt;special1&gt;","&lt;special2&gt;","&lt;special3&gt;","&lt;special4&gt;","&lt;special5&gt;","&lt;special6&gt;","&lt;special7&gt;","&lt;special8&gt;","&lt;special9&gt;"]</code>) — List of additional special tokens.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMTokenizer.lang2id" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMTokenizer.lang2id"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" 
height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>lang2id</strong> (<code>Dict[str, int]</code>, <em>optional</em>) — Dictionary mapping languages string identifiers to their IDs.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMTokenizer.id2lang" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMTokenizer.id2lang"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>id2lang</strong> (<code>Dict[int, str]</code>, <em>optional</em>) — Dictionary mapping language IDs to their string identifiers.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMTokenizer.do_lowercase_and_remove_accent" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMTokenizer.do_lowercase_and_remove_accent"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>do_lowercase_and_remove_accent</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether to lowercase and remove accents when tokenizing.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1m6402p">Construct an XLM tokenizer. 
Based on Byte-Pair Encoding. The tokenization process is the following:

- Moses preprocessing and tokenization for most supported languages.
- Language specific tokenization for Chinese (Jieba), Japanese (KyTea) and Thai (PyThaiNLP).
- Optionally lowercases and normalizes all input text.
- The argument `special_tokens` and the function `set_special_tokens` can be used to add additional symbols (like `"__classify__"`) to a vocabulary.
- The `lang2id` attribute maps the languages supported by the model with their IDs if provided (automatically set for pretrained vocabularies).
- The `id2lang` attribute does the reverse mapping if provided (automatically set for pretrained vocabularies).

This tokenizer inherits from [PreTrainedTokenizer](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer) which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
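In practice the tokenizer is usually loaded from a pretrained checkpoint rather than constructed from raw vocabulary and merges files. A minimal sketch, assuming the [xlm-mlm-en-2048](https://huggingface.co/xlm-mlm-en-2048) checkpoint referenced above:

```python
>>> from transformers import XLMTokenizer

>>> tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-en-2048")

>>> # Encode a sentence; the special tokens (<s> ... </s>) are added automatically
>>> encoding = tokenizer("Hello, how are you?")
>>> input_ids = encoding["input_ids"]

>>> # lang2id maps language codes to language IDs when the checkpoint defines them
>>> lang2id = tokenizer.lang2id
```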
#### build_inputs_with_special_tokens

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/tokenization_xlm.py#L870)

`( token_ids_0: typing.List[int], token_ids_1: typing.Optional[typing.List[int]] = None )` → `List[int]`

Parameters:

- **token_ids_0** (`List[int]`) — List of IDs to which the special tokens will be added.
- **token_ids_1** (`List[int]`, *optional*) — Optional second list of IDs for sequence pairs.

Returns: `List[int]` — List of [input IDs](../glossary#input-ids) with the appropriate special tokens.

Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. An XLM sequence has the following format:

- single sequence: `<s> X </s>`
- pair of sequences: `<s> A </s> B </s>`
#### get_special_tokens_mask

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/tokenization_xlm.py#L897)

`( token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False ) → List[int]`

Parameters:

- **token_ids_0** (`List[int]`) — List of IDs.
- **token_ids_1** (`List[int]`, *optional*) — Optional second list of IDs for sequence pairs.
- **already_has_special_tokens** (`bool`, *optional*, defaults to `False`) — Whether or not the token list is already formatted with special tokens for the model.

Returns: `List[int]` — A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.

Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer `prepare_for_model` method.
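As an illustration, a short sketch (again assuming the `xlm-mlm-en-2048` checkpoint) that marks which positions of an encoded pair are special tokens:

```python
from transformers import XLMTokenizer

tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-en-2048")

ids_a = tokenizer.encode("Hello world", add_special_tokens=False)
ids_b = tokenizer.encode("How are you?", add_special_tokens=False)

# 1 marks a special token (<s> or </s>), 0 marks a regular sequence token
mask = tokenizer.get_special_tokens_mask(ids_a, ids_b)
print(mask)

# For IDs that already include special tokens, set already_has_special_tokens=True
encoded = tokenizer.encode("Hello world")  # includes <s> ... </s>
print(tokenizer.get_special_tokens_mask(encoded, already_has_special_tokens=True))
```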
#### create_token_type_ids_from_sequences

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/tokenization_xlm.py#L925)

`( token_ids_0: List[int], token_ids_1: Optional[List[int]] = None ) → List[int]`

Parameters:

- **token_ids_0** (`List[int]`) — List of IDs.
- **token_ids_1** (`List[int]`, *optional*) — Optional second list of IDs for sequence pairs.

Returns: `List[int]` — List of [token type IDs](../glossary#token-type-ids) according to the given sequence(s).

Create a mask from the two sequences passed to be used in a sequence-pair classification task. An XLM sequence pair mask has the following format:

```
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence    | second sequence |
```

If `token_ids_1` is `None`, this method only returns the first portion of the mask (0s).
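A minimal sketch of producing the mask shown above for a pair of (arbitrary) sentences, assuming the `xlm-mlm-en-2048` checkpoint:

```python
from transformers import XLMTokenizer

tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-en-2048")

ids_a = tokenizer.encode("Hello world", add_special_tokens=False)
ids_b = tokenizer.encode("How are you?", add_special_tokens=False)

# 0s cover the first sequence (and its special tokens), 1s cover the second sequence
token_type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)
print(token_type_ids)

# With a single sequence, only the 0s portion is returned
print(tokenizer.create_token_type_ids_from_sequences(ids_a))
```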
#### save_vocabulary

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/tokenization_xlm.py#L954)

`( save_directory: str, filename_prefix: Optional[str] = None )`
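A short sketch of saving the tokenizer vocabulary files to a local directory; the directory name `./xlm-vocab` is arbitrary, and the method is expected to return the paths of the files it wrote.

```python
import os
from transformers import XLMTokenizer

tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-en-2048")

os.makedirs("./xlm-vocab", exist_ok=True)
# Writes the vocabulary and merges files and returns their paths
saved_files = tokenizer.save_vocabulary("./xlm-vocab")
print(saved_files)
```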
## XLM specific outputs

### class transformers.models.xlm.modeling_xlm.XLMForQuestionAnsweringOutput

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_xlm.py#L262)

`( loss: Optional[torch.FloatTensor] = None, start_top_log_probs: Optional[torch.FloatTensor] = None, start_top_index: Optional[torch.LongTensor] = None, end_top_log_probs: Optional[torch.FloatTensor] = None, end_top_index: Optional[torch.LongTensor] = None, cls_logits: Optional[torch.FloatTensor] = None, hidden_states: Optional[Tuple[torch.FloatTensor]] = None, attentions: Optional[Tuple[torch.FloatTensor]] = None )`

Parameters:

- **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned if both `start_positions` and `end_positions` are provided) — Classification loss as the sum of start token, end token (and is_impossible if provided) classification losses.
- **start_top_log_probs** (`torch.FloatTensor` of shape `(batch_size, config.start_n_top)`, *optional*, returned if `start_positions` or `end_positions` is not provided) — Log probabilities for the top `config.start_n_top` start token possibilities (beam-search).
- **start_top_index** (`torch.LongTensor` of shape `(batch_size, config.start_n_top)`, *optional*, returned if `start_positions` or `end_positions` is not provided) — Indices for the top `config.start_n_top` start token possibilities (beam-search).
- **end_top_log_probs** (`torch.FloatTensor` of shape `(batch_size, config.start_n_top * config.end_n_top)`, *optional*, returned if `start_positions` or `end_positions` is not provided) — Log probabilities for the top `config.start_n_top * config.end_n_top` end token possibilities (beam-search).
- **end_top_index** (`torch.LongTensor` of shape `(batch_size, config.start_n_top * config.end_n_top)`, *optional*, returned if `start_positions` or `end_positions` is not provided) — Indices for the top `config.start_n_top * config.end_n_top` end token possibilities (beam-search).
- **cls_logits** (`torch.FloatTensor` of shape `(batch_size,)`, *optional*, returned if `start_positions` or `end_positions` is not provided) — Log probabilities for the `is_impossible` label of the answers.
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Base class for outputs of question answering models using a `SquadHead`.
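For orientation, a minimal sketch of how this output appears in practice via `XLMForQuestionAnswering`. It assumes the `xlm-mlm-en-2048` checkpoint, whose question-answering head is randomly initialized, so the predicted spans are arbitrary; the point is only to show which fields are populated when no `start_positions`/`end_positions` are passed.

```python
import torch
from transformers import XLMTokenizer, XLMForQuestionAnswering

tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-en-2048")
model = XLMForQuestionAnswering.from_pretrained("xlm-mlm-en-2048")

question = "Who wrote the report?"
context = "The report was written by Alice."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)  # an XLMForQuestionAnsweringOutput

# Without start/end positions, the beam-search fields are populated
print(outputs.start_top_index.shape)  # (batch_size, config.start_n_top)
print(outputs.end_top_index.shape)    # (batch_size, config.start_n_top * config.end_n_top)
```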
## XLMModel

### class transformers.XLMModel

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_xlm.py#L393)

`( config )`

Parameters:

- **config** ([XLMConfig](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

The bare XLM Model transformer outputting raw hidden-states without any specific head on top.

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.).

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
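For reference, a minimal sketch of a forward pass through the bare model, assuming the `xlm-mlm-en-2048` checkpoint; the output exposes the raw last-layer hidden states described below.

```python
import torch
from transformers import XLMTokenizer, XLMModel

tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-en-2048")
model = XLMModel.from_pretrained("xlm-mlm-en-2048")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Raw hidden states of the final layer: (batch_size, sequence_length, hidden_size)
print(outputs.last_hidden_state.shape)
```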
#### forward

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_xlm.py#L480)

`( input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, langs: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, lengths: Optional[torch.Tensor] = None, cache: Optional[Dict[str, torch.Tensor]] = None, head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None ) → [transformers.modeling_outputs.BaseModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutput) or `tuple(torch.FloatTensor)`

Parameters:

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.__call__()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details. [What are input IDs?](../glossary#input-ids)
- **attention_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask)
- **langs** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — A parallel sequence of tokens to be used to indicate the language of each token in the input. Indices are language ids which can be obtained from the language names by using two conversion mappings provided in the configuration of the model (only provided for multilingual models). More precisely, the *language name to language id* mapping is in `model.config.lang2id` (a dictionary string to int) and the *language id to language name* mapping is in `model.config.id2lang` (dictionary int to string). See usage examples detailed in the [multilingual documentation](../multilingual).
- **token_type_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: 0 corresponds to a *sentence A* token, 1 corresponds to a *sentence B* token. [What are token type IDs?](../glossary#token-type-ids)
- **position_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids)
- **lengths** (`torch.LongTensor` of shape `(batch_size,)`, *optional*) — Length of each sentence that can be used to avoid performing attention on padding token indices. You can also use *attention_mask* for the same result (see above), kept here for compatibility. Indices selected in `[0, ..., input_ids.size(-1)]`.
- **cache** (`Dict[str, torch.FloatTensor]`, *optional*) — Dictionary string to `torch.FloatTensor` that contains precomputed hidden states (key and values in the attention blocks) as computed by the model (see `cache` output below).
Can be used to speed up sequential decoding.<p></p> <p>The dictionary object will be modified in-place during the forward pass to add newly computed hidden-states.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMModel.forward.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMModel.forward.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>head_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(num_heads,)</code> or <code>(num_layers, num_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the self-attention modules. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMModel.forward.inputs_embeds" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMModel.forward.inputs_embeds"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>inputs_embeds</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>, <em>optional</em>) — Optionally, instead of passing <code>input_ids</code> you can choose to directly pass an embedded representation. 
This is useful if you want more control over how to convert <code>input_ids</code> indices into associated vectors than the model’s internal embedding lookup matrix.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMModel.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMModel.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. See <code>attentions</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMModel.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMModel.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. 
See <code>hidden_states</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMModel.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMModel.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li></ul> <div id="transformers.XLMModel.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutput">transformers.modeling_outputs.BaseModelOutput</a> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutput">transformers.modeling_outputs.BaseModelOutput</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMConfig">XLMConfig</a>) and inputs.</p> <ul> <li> <p><strong>last_hidden_state</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>) — Sequence of hidden-states at the output of the last layer of the model.</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, 
sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-1qotf8w">The <a href="/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMModel">XLMModel</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.XLMModel.forward.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMModel.forward.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, XLMModel <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = 
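For multilingual checkpoints, the `langs` tensor described above can be built directly from the `model.config.lang2id` mapping. The sketch below is illustrative only: it assumes a multilingual XLM checkpoint (here `xlm-clm-enfr-1024`) whose configuration exposes an `"en"` entry in that mapping, and it pairs the resulting language ids with the `attention_mask` returned when padding a small batch.

```python
>>> import torch
>>> from transformers import AutoTokenizer, XLMModel

>>> # assumed multilingual checkpoint; any XLM checkpoint that ships a lang2id mapping works
>>> tokenizer = AutoTokenizer.from_pretrained("xlm-clm-enfr-1024")
>>> model = XLMModel.from_pretrained("xlm-clm-enfr-1024")

>>> # padding the batch also produces the matching attention_mask
>>> inputs = tokenizer(["Hello, my dog is cute", "Hello"], padding=True, return_tensors="pt")

>>> # build the parallel `langs` tensor from the language-name to language-id mapping
>>> english_id = model.config.lang2id["en"]
>>> langs = torch.full_like(inputs.input_ids, english_id)

>>> outputs = model(**inputs, langs=langs)
>>> last_hidden_states = outputs.last_hidden_state
```

The same pattern applies to the `langs` argument of the XLM head models documented below.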
## XLMWithLMHeadModel

### class transformers.XLMWithLMHeadModel

( config )

Parameters

- **config** ([XLMConfig](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

The XLM Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings).

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

#### forward

( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None langs: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None lengths: typing.Optional[torch.Tensor] = None cache: typing.Union[typing.Dict[str, torch.Tensor], NoneType] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → [transformers.modeling_outputs.MaskedLMOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.MaskedLMOutput) or `tuple(torch.FloatTensor)`

Parameters

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary.

  Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.__call__()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details.

  [What are input IDs?](../glossary#input-ids)
- **attention_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:

  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.

  [What are attention masks?](../glossary#attention-mask)
- **langs** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — A parallel sequence of tokens to be used to indicate the language of each token in the input. Indices are language ids which can be obtained from the language names by using two conversion mappings provided in the configuration of the model (only provided for multilingual models). More precisely, the *language name to language id* mapping is in `model.config.lang2id` (a dictionary of string to int) and the *language id to language name* mapping is in `model.config.id2lang` (a dictionary of int to string).

  See usage examples detailed in the [multilingual documentation](../multilingual).
- **token_type_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:

  - 0 corresponds to a *sentence A* token,
  - 1 corresponds to a *sentence B* token.

  [What are token type IDs?](../glossary#token-type-ids)
- **position_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`.

  [What are position IDs?](../glossary#position-ids)
- **lengths** (`torch.LongTensor` of shape `(batch_size,)`, *optional*) — Length of each sentence, which can be used to avoid performing attention on padding token indices. You can also use `attention_mask` for the same result (see above); kept here for compatibility. Indices selected in `[0, ..., input_ids.size(-1)]`.
- **cache** (`Dict[str, torch.FloatTensor]`, *optional*) — Dictionary mapping strings to `torch.FloatTensor` that contains precomputed hidden states (key and values in the attention blocks) as computed by the model (see the `cache` output below). Can be used to speed up sequential decoding.

  The dictionary object will be modified in-place during the forward pass to add newly computed hidden states.
- **head_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`:

  - 1 indicates the head is **not masked**,
  - 0 indicates the head is **masked**.
- **inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.
- **labels** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Labels for language modeling. Note that the labels **are shifted** inside the model, i.e. you can set `labels = input_ids`. Indices are selected in `[-100, 0, ..., config.vocab_size]`. All labels set to `-100` are ignored (masked); the loss is only computed for labels in `[0, ..., config.vocab_size]`.

Returns

[transformers.modeling_outputs.MaskedLMOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.MaskedLMOutput) or `tuple(torch.FloatTensor)`

A [transformers.modeling_outputs.MaskedLMOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.MaskedLMOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([XLMConfig](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMConfig)) and inputs.

- **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) — Masked language modeling (MLM) loss.
- **logits** (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.

  Hidden states of the model at the output of each layer plus the optional initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`.

  Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
href="/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMWithLMHeadModel">XLMWithLMHeadModel</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.XLMWithLMHeadModel.forward.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMWithLMHeadModel.forward.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, XLMWithLMHeadModel <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"xlm-mlm-en-2048"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = XLMWithLMHeadModel.from_pretrained(<span 
class="hljs-string">"xlm-mlm-en-2048"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(<span class="hljs-string">"The capital of France is &lt;special1&gt;."</span>, return_tensors=<span class="hljs-string">"pt"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">with</span> torch.no_grad(): <span class="hljs-meta">... </span> logits = model(**inputs).logits <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># retrieve index of &lt;special1&gt;</span> <span class="hljs-meta">&gt;&gt;&gt; </span>mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[<span class="hljs-number">0</span>].nonzero(as_tuple=<span class="hljs-literal">True</span>)[<span class="hljs-number">0</span>] <span class="hljs-meta">&gt;&gt;&gt; </span>predicted_token_id = logits[<span class="hljs-number">0</span>, mask_token_index].argmax(axis=-<span class="hljs-number">1</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>labels = tokenizer(<span class="hljs-string">"The capital of France is Paris."</span>, return_tensors=<span class="hljs-string">"pt"</span>)[<span class="hljs-string">"input_ids"</span>] <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># mask labels of non-&lt;special1&gt; tokens</span> <span class="hljs-meta">&gt;&gt;&gt; </span>labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -<span class="hljs-number">100</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model(**inputs, labels=labels)</pre></div></div></div></div> <h2 class="relative group"><a id="transformers.XLMForSequenceClassification" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMForSequenceClassification"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-l8qia8">XLMForSequenceClassification</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XLMForSequenceClassification"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 
### class transformers.XLMForSequenceClassification

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_xlm.py#L769)

( config )

Parameters

- **config** ([XLMConfig](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

XLM Model with a sequence classification/regression head on top (a linear layer on top of the pooled output), e.g. for GLUE tasks.

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

#### forward
[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_xlm.py#L781)

( input_ids: typing.Optional[torch.Tensor] = None, attention_mask: typing.Optional[torch.Tensor] = None, langs: typing.Optional[torch.Tensor] = None, token_type_ids: typing.Optional[torch.Tensor] = None, position_ids: typing.Optional[torch.Tensor] = None, lengths: typing.Optional[torch.Tensor] = None, cache: typing.Union[typing.Dict[str, torch.Tensor], NoneType] = None, head_mask: typing.Optional[torch.Tensor] = None, inputs_embeds: typing.Optional[torch.Tensor] = None, labels: typing.Optional[torch.Tensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → [transformers.modeling_outputs.SequenceClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput) or `tuple(torch.FloatTensor)`

Parameters

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and `PreTrainedTokenizer.__call__()` for details. [What are input IDs?](../glossary#input-ids)
- **attention_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask)
- **langs** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — A parallel sequence of tokens to be used to indicate the language of each token in the input. Indices are language ids which can be obtained from the language names using two conversion mappings provided in the configuration of the model (only provided for multilingual models). More precisely, the *language name to language id* mapping is in `model.config.lang2id` (a dictionary of string to int) and the *language id to language name* mapping is in `model.config.id2lang` (a dictionary of int to string). See the usage examples detailed in the [multilingual documentation](../multilingual), as well as the short `langs` sketch after the examples below.
- **token_type_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: 0 corresponds to a *sentence A* token, 1 corresponds to a *sentence B* token. [What are token type IDs?](../glossary#token-type-ids)
- **position_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids)
- **lengths** (`torch.LongTensor` of shape `(batch_size,)`, *optional*) — Length of each sentence, which can be used to avoid performing attention on padding token indices. You can also use *attention_mask* for the same result (see above); kept here for compatibility. Indices selected in `[0, ..., input_ids.size(-1)]`.
- **cache** (`Dict[str, torch.FloatTensor]`, *optional*) — Dictionary of string to `torch.FloatTensor` that contains precomputed hidden states (keys and values in the attention blocks) as computed by the model (see `cache` output below). Can be used to speed up sequential decoding. The dictionary object will be modified in-place during the forward pass to add newly computed hidden-states.
- **head_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: 1 indicates the head is **not masked**, 0 indicates the head is **masked**.
- **inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model’s internal embedding lookup matrix.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.
- **labels** (`torch.LongTensor` of shape `(batch_size,)`, *optional*) — Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss); if `config.num_labels > 1` a classification loss is computed (Cross-Entropy).

Returns: [transformers.modeling_outputs.SequenceClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput) or `tuple(torch.FloatTensor)`

A [transformers.modeling_outputs.SequenceClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([XLMConfig](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMConfig)) and inputs.

- **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) — Classification (or regression if `config.num_labels==1`) loss.
- **logits** (`torch.FloatTensor` of shape `(batch_size, config.num_labels)`) — Classification (or regression if `config.num_labels==1`) scores (before SoftMax).
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
data-svelte-h="svelte-zrb2qy">The <a href="/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMForSequenceClassification">XLMForSequenceClassification</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.XLMForSequenceClassification.forward.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMForSequenceClassification.forward.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-ykxpe4">Example of single-label classification:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, XLMForSequenceClassification <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span 
class="hljs-string">"xlm-mlm-en-2048"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = XLMForSequenceClassification.from_pretrained(<span class="hljs-string">"xlm-mlm-en-2048"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(<span class="hljs-string">"Hello, my dog is cute"</span>, return_tensors=<span class="hljs-string">"pt"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">with</span> torch.no_grad(): <span class="hljs-meta">... </span> logits = model(**inputs).logits <span class="hljs-meta">&gt;&gt;&gt; </span>predicted_class_id = logits.argmax().item() <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`</span> <span class="hljs-meta">&gt;&gt;&gt; </span>num_labels = <span class="hljs-built_in">len</span>(model.config.id2label) <span class="hljs-meta">&gt;&gt;&gt; </span>model = XLMForSequenceClassification.from_pretrained(<span class="hljs-string">"xlm-mlm-en-2048"</span>, num_labels=num_labels) <span class="hljs-meta">&gt;&gt;&gt; </span>labels = torch.tensor([<span class="hljs-number">1</span>]) <span class="hljs-meta">&gt;&gt;&gt; </span>loss = model(**inputs, labels=labels).loss</pre></div></div> <div class="relative group rounded-md"><a id="transformers.XLMForSequenceClassification.forward.example-2" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMForSequenceClassification.forward.example-2"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-1l8e32d">Example of multi-label classification:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black 
border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, XLMForSequenceClassification <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"xlm-mlm-en-2048"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = XLMForSequenceClassification.from_pretrained(<span class="hljs-string">"xlm-mlm-en-2048"</span>, problem_type=<span class="hljs-string">"multi_label_classification"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(<span class="hljs-string">"Hello, my dog is cute"</span>, return_tensors=<span class="hljs-string">"pt"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">with</span> torch.no_grad(): <span class="hljs-meta">... </span> logits = model(**inputs).logits <span class="hljs-meta">&gt;&gt;&gt; </span>predicted_class_ids = torch.arange(<span class="hljs-number">0</span>, logits.shape[-<span class="hljs-number">1</span>])[torch.sigmoid(logits).squeeze(dim=<span class="hljs-number">0</span>) &gt; <span class="hljs-number">0.5</span>] <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`</span> <span class="hljs-meta">&gt;&gt;&gt; </span>num_labels = <span class="hljs-built_in">len</span>(model.config.id2label) <span class="hljs-meta">&gt;&gt;&gt; </span>model = XLMForSequenceClassification.from_pretrained( <span class="hljs-meta">... </span> <span class="hljs-string">"xlm-mlm-en-2048"</span>, num_labels=num_labels, problem_type=<span class="hljs-string">"multi_label_classification"</span> <span class="hljs-meta">... </span>) <span class="hljs-meta">&gt;&gt;&gt; </span>labels = torch.<span class="hljs-built_in">sum</span>( <span class="hljs-meta">... </span> torch.nn.functional.one_hot(predicted_class_ids[<span class="hljs-literal">None</span>, :].clone(), num_classes=num_labels), dim=<span class="hljs-number">1</span> <span class="hljs-meta">... 
</span>).to(torch.<span class="hljs-built_in">float</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>loss = model(**inputs, labels=labels).loss</pre></div></div></div></div> <h2 class="relative group"><a id="transformers.XLMForMultipleChoice" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMForMultipleChoice"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-23gscm">XLMForMultipleChoice</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XLMForMultipleChoice"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">XLMForMultipleChoice</span></span></h3> <a id="transformers.XLMForMultipleChoice" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XLMForMultipleChoice"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 
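The `langs` sketch referenced in the parameter list above: a minimal, illustrative example (not part of the original docstring) of building the `langs` tensor from `model.config.lang2id`. It assumes a multilingual XLM checkpoint trained with language embeddings, here `xlm-clm-enfr-1024`; the sequence classification head is newly initialized for that checkpoint, so the resulting scores are untrained placeholders and only the shape of `langs` matters:

```python
>>> import torch
>>> from transformers import AutoTokenizer, XLMForSequenceClassification

>>> # assumption: a multilingual checkpoint whose config exposes lang2id / id2lang
>>> tokenizer = AutoTokenizer.from_pretrained("xlm-clm-enfr-1024")
>>> model = XLMForSequenceClassification.from_pretrained("xlm-clm-enfr-1024")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

>>> # map the language name to its id and repeat it for every token position
>>> lang_id = model.config.lang2id["en"]
>>> langs = torch.full_like(inputs["input_ids"], lang_id)  # shape (batch_size, sequence_length)

>>> with torch.no_grad():
...     logits = model(**inputs, langs=langs).logits
```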
11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_xlm.py#L1180" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60"></span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">*inputs<span class="opacity-60"></span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMForMultipleChoice.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMForMultipleChoice.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMConfig">XLMConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-oydsgx">XLM Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks.</p> <p data-svelte-h="svelte-hmtw9k">This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel">PreTrainedModel</a>. 
Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

#### forward

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_xlm.py#L1191)

( input_ids: typing.Optional[torch.Tensor] = None, attention_mask: typing.Optional[torch.Tensor] = None, langs: typing.Optional[torch.Tensor] = None, token_type_ids: typing.Optional[torch.Tensor] = None, position_ids: typing.Optional[torch.Tensor] = None, lengths: typing.Optional[torch.Tensor] = None, cache: typing.Union[typing.Dict[str, torch.Tensor], NoneType] = None, head_mask: typing.Optional[torch.Tensor] = None, inputs_embeds: typing.Optional[torch.Tensor] = None, labels: typing.Optional[torch.Tensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → [transformers.modeling_outputs.MultipleChoiceModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.MultipleChoiceModelOutput) or `tuple(torch.FloatTensor)`
justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 13 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMForMultipleChoice.forward.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMForMultipleChoice.forward.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, num_choices, sequence_length)</code>) — Indices of input sequence tokens in the vocabulary.<p></p> <p>Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. 
See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p> <p><a href="../glossary#input-ids">What are input IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMForMultipleChoice.forward.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMForMultipleChoice.forward.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, num_choices, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMForMultipleChoice.forward.langs" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMForMultipleChoice.forward.langs"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>langs</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, num_choices, sequence_length)</code>, <em>optional</em>) — A parallel sequence of tokens to be used to indicate the language of each token in the input. 
Indices are languages ids which can be obtained from the language names by using two conversion mappings provided in the configuration of the model (only provided for multilingual models). More precisely, the <em>language name to language id</em> mapping is in <code>model.config.lang2id</code> (which is a dictionary string to int) and the <em>language id to language name</em> mapping is in <code>model.config.id2lang</code> (dictionary int to string).<p></p> <p>See usage examples detailed in the <a href="../multilingual">multilingual documentation</a>.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMForMultipleChoice.forward.token_type_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMForMultipleChoice.forward.token_type_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_type_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, num_choices, sequence_length)</code>, <em>optional</em>) — Segment token indices to indicate first and second portions of the inputs. 
Indices are selected in <code>[0, 1]</code>:<p></p> <ul> <li>0 corresponds to a <em>sentence A</em> token,</li> <li>1 corresponds to a <em>sentence B</em> token.</li> </ul> <p><a href="../glossary#token-type-ids">What are token type IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMForMultipleChoice.forward.position_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMForMultipleChoice.forward.position_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>position_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, num_choices, sequence_length)</code>, <em>optional</em>) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range <code>[0, config.max_position_embeddings - 1]</code>.<p></p> <p><a href="../glossary#position-ids">What are position IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMForMultipleChoice.forward.lengths" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMForMultipleChoice.forward.lengths"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>lengths</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size,)</code>, <em>optional</em>) — Length of each sentence that can be used to avoid performing attention on padding token indices. You can also use <em>attention_mask</em> for the same result (see above), kept here for compatibility. 
Indices selected in <code>[0, ..., input_ids.size(-1)]</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMForMultipleChoice.forward.cache" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMForMultipleChoice.forward.cache"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>cache</strong> (<code>Dict[str, torch.FloatTensor]</code>, <em>optional</em>) — Dictionary string to <code>torch.FloatTensor</code> that contains precomputed hidden states (key and values in the attention blocks) as computed by the model (see <code>cache</code> output below). Can be used to speed up sequential decoding.<p></p> <p>The dictionary object will be modified in-place during the forward pass to add newly computed hidden-states.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMForMultipleChoice.forward.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMForMultipleChoice.forward.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>head_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(num_heads,)</code> or <code>(num_layers, num_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the self-attention modules. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMForMultipleChoice.forward.inputs_embeds" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMForMultipleChoice.forward.inputs_embeds"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>inputs_embeds</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, num_choices, sequence_length, hidden_size)</code>, <em>optional</em>) — Optionally, instead of passing <code>input_ids</code> you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert <code>input_ids</code> indices into associated vectors than the model’s internal embedding lookup matrix.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMForMultipleChoice.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMForMultipleChoice.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. 
See <code>attentions</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMForMultipleChoice.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMForMultipleChoice.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. See <code>hidden_states</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMForMultipleChoice.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMForMultipleChoice.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMForMultipleChoice.forward.labels" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMForMultipleChoice.forward.labels"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 
0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>labels</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size,)</code>, <em>optional</em>) — Labels for computing the multiple choice classification loss. Indices should be in <code>[0, ..., num_choices-1]</code> where <code>num_choices</code> is the size of the second dimension of the input tensors. (See <code>input_ids</code> above)</span></span> </li></ul> <div id="transformers.XLMForMultipleChoice.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.MultipleChoiceModelOutput">transformers.modeling_outputs.MultipleChoiceModelOutput</a> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.MultipleChoiceModelOutput">transformers.modeling_outputs.MultipleChoiceModelOutput</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMConfig">XLMConfig</a>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<code>torch.FloatTensor</code> of shape <em>(1,)</em>, <em>optional</em>, returned when <code>labels</code> is provided) — Classification loss.</p> </li> <li> <p><strong>logits</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, num_choices)</code>) — <em>num_choices</em> is the second dimension of the input tensors. 
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The [XLMForMultipleChoice](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMForMultipleChoice) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

```python
>>> from transformers import AutoTokenizer, XLMForMultipleChoice
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048")
>>> model = XLMForMultipleChoice.from_pretrained("xlm-mlm-en-2048")

>>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
>>> choice0 = "It is eaten with a fork and a knife."
>>> choice1 = "It is eaten while held in the hand."
>>> labels = torch.tensor(0).unsqueeze(0)  # choice0 is correct (according to Wikipedia ;)), batch size 1

>>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
>>> outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels)  # batch size is 1

>>> # the linear classifier still needs to be trained
>>> loss = outputs.loss
>>> logits = outputs.logits
```
## XLMForTokenClassification

### class transformers.XLMForTokenClassification

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_xlm.py#L1096)

( config )

Parameters:

- **config** ([XLMConfig](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

XLM Model with a token classification head on top (a linear layer on top of the hidden-states output), e.g. for Named-Entity-Recognition (NER) tasks.

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
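
Below is a minimal usage sketch for token classification. It reuses the `xlm-mlm-en-2048` checkpoint from the multiple-choice example above; `num_labels=2` is an arbitrary choice for illustration, and the token classification head is randomly initialized until the model has been fine-tuned, so the predicted label ids are not meaningful on their own.

```python
>>> import torch
>>> from transformers import AutoTokenizer, XLMForTokenClassification

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048")
>>> model = XLMForTokenClassification.from_pretrained("xlm-mlm-en-2048", num_labels=2)

>>> inputs = tokenizer("HuggingFace is based in New York City", return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits  # (batch_size, sequence_length, num_labels)

>>> # one predicted label id per token
>>> predicted_token_class_ids = logits.argmax(-1)

>>> # per-token labels of the same shape can be passed back in to compute a token classification loss
>>> loss = model(**inputs, labels=predicted_token_class_ids).loss
```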
dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">langs<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_type_ids<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">position_ids<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">lengths<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">cache<span class="opacity-60">: typing.Union[typing.Dict[str, torch.Tensor], NoneType] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">inputs_embeds<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">labels<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.TokenClassifierOutput">transformers.modeling_outputs.TokenClassifierOutput</a> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 13 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto 
Parameters:

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.__call__()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details. [What are input IDs?](../glossary#input-ids)
- **attention_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask)
- **langs** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — A parallel sequence of tokens to be used to indicate the language of each token in the input. Indices are language ids which can be obtained from the language names by using two conversion mappings provided in the configuration of the model (only provided for multilingual models). More precisely, the *language name to language id* mapping is in `model.config.lang2id` (a dictionary of string to int) and the *language id to language name* mapping is in `model.config.id2lang` (a dictionary of int to string). See usage examples detailed in the [multilingual documentation](../multilingual).
- **token_type_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: 0 corresponds to a *sentence A* token, 1 corresponds to a *sentence B* token. [What are token type IDs?](../glossary#token-type-ids)
- **position_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids)
- **lengths** (`torch.LongTensor` of shape `(batch_size,)`, *optional*) — Length of each sentence, which can be used to avoid performing attention on padding token indices. You can also use *attention_mask* for the same result (see above); kept here for compatibility. Indices selected in `[0, ..., input_ids.size(-1)]`.
- **cache** (`Dict[str, torch.FloatTensor]`, *optional*) — Dictionary mapping strings to `torch.FloatTensor` containing precomputed hidden states (keys and values in the attention blocks) as computed by the model (see `cache` output below). Can be used to speed up sequential decoding. The dictionary object will be modified in-place during the forward pass to add newly computed hidden states.
- **head_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: 1 indicates the head is **not masked**, 0 indicates the head is **masked**.
- **inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation.
This is useful if you want more control over how to convert <code>input_ids</code> indices into associated vectors than the model’s internal embedding lookup matrix.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMForTokenClassification.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMForTokenClassification.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. See <code>attentions</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMForTokenClassification.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMForTokenClassification.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. 
See <code>hidden_states</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMForTokenClassification.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMForTokenClassification.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMForTokenClassification.forward.labels" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMForTokenClassification.forward.labels"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>labels</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Labels for computing the token classification loss. 
Indices should be in <code>[0, ..., config.num_labels - 1]</code>.</span></span> </li></ul> <div id="transformers.XLMForTokenClassification.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.TokenClassifierOutput">transformers.modeling_outputs.TokenClassifierOutput</a> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.TokenClassifierOutput">transformers.modeling_outputs.TokenClassifierOutput</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMConfig">XLMConfig</a>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<code>torch.FloatTensor</code> of shape <code>(1,)</code>, <em>optional</em>, returned when <code>labels</code> is provided) — Classification loss.</p> </li> <li> <p><strong>logits</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, config.num_labels)</code>) — Classification scores (before SoftMax).</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-1249l0g">The <a href="/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMForTokenClassification">XLMForTokenClassification</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.XLMForTokenClassification.forward.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 
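As a minimal sketch of how the `langs` tensor described above can be built: the checkpoint name and language code here are assumptions chosen for illustration, since monolingual checkpoints such as `xlm-mlm-en-2048` do not ship a `lang2id` mapping.

```python
>>> import torch
>>> from transformers import AutoTokenizer, XLMForTokenClassification

>>> # Hypothetical multilingual checkpoint, used only to illustrate the lang2id mapping.
>>> checkpoint = "xlm-mlm-xnli15-1024"
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> model = XLMForTokenClassification.from_pretrained(checkpoint)  # token-classification head is randomly initialized

>>> inputs = tokenizer("HuggingFace is based in Paris", return_tensors="pt")

>>> # Every token in this input is English, so repeat the English language id at each position.
>>> english_id = model.config.lang2id["en"]
>>> langs = torch.full_like(inputs["input_ids"], english_id)

>>> outputs = model(**inputs, langs=langs)
```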
Example:

```python
>>> from transformers import AutoTokenizer, XLMForTokenClassification
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048")
>>> model = XLMForTokenClassification.from_pretrained("xlm-mlm-en-2048")

>>> inputs = tokenizer(
...     "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
... )

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_token_class_ids = logits.argmax(-1)

>>> # Note that tokens are classified rather than input words, which means that
>>> # there might be more predicted token classes than words.
>>> # Multiple token classes might account for the same word.
>>> predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]

>>> labels = predicted_token_class_ids
>>> loss = model(**inputs, labels=labels).loss
```
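As the comments in the example above note, predictions are made per BPE token rather than per word. The following is a hedged sketch of regrouping them into word-level labels, continuing from the variables defined above; it assumes the checkpoint's BPE vocabulary marks word endings with a `</w>` suffix and simply keeps the class predicted for each word's final sub-word piece.

```python
>>> tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())

>>> # Merge consecutive sub-word pieces; a piece ending in "</w>" closes the current word.
>>> words, word_classes, pieces = [], [], []
>>> for token, token_class in zip(tokens, predicted_tokens_classes):
...     pieces.append(token.replace("</w>", ""))
...     if token.endswith("</w>"):
...         words.append("".join(pieces))
...         word_classes.append(token_class)  # keep the class of the word's final sub-word piece
...         pieces = []

>>> list(zip(words, word_classes))
```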
class="font-semibold">XLMForQuestionAnsweringSimple</span></span></h3> <a id="transformers.XLMForQuestionAnsweringSimple" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XLMForQuestionAnsweringSimple"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_xlm.py#L871" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMForQuestionAnsweringSimple.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMForQuestionAnsweringSimple.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMConfig">XLMConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. 
Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-77smje">XLM Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute <code>span start logits</code> and <code>span end logits</code>).</p> <p data-svelte-h="svelte-hmtw9k">This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel">PreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)</p> <p data-svelte-h="svelte-hswkmf">This model is also a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XLMForQuestionAnsweringSimple.forward"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>forward</span></h4> <a id="transformers.XLMForQuestionAnsweringSimple.forward" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XLMForQuestionAnsweringSimple.forward"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 
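To make the span-classification head concrete, here is a minimal hedged sketch of consuming the start and end logits at inference time. The head of the `xlm-mlm-en-2048` checkpoint is not fine-tuned for question answering, so the decoded span is arbitrary; the point is only to show how the two logit tensors are meant to be used.

```python
>>> import torch
>>> from transformers import AutoTokenizer, XLMForQuestionAnsweringSimple

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048")
>>> model = XLMForQuestionAnsweringSimple.from_pretrained("xlm-mlm-en-2048")

>>> question, context = "Who was Jim Henson?", "Jim Henson was a nice puppet"
>>> inputs = tokenizer(question, context, return_tensors="pt")

>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # Pick the most likely start and end token positions, then decode the tokens in between.
>>> start_index = int(outputs.start_logits.argmax(-1))
>>> end_index = int(outputs.end_logits.argmax(-1))
>>> answer = tokenizer.decode(inputs["input_ids"][0, start_index : end_index + 1])
```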
#### forward

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_xlm.py#L881)

`( input_ids: typing.Optional[torch.Tensor] = None, attention_mask: typing.Optional[torch.Tensor] = None, langs: typing.Optional[torch.Tensor] = None, token_type_ids: typing.Optional[torch.Tensor] = None, position_ids: typing.Optional[torch.Tensor] = None, lengths: typing.Optional[torch.Tensor] = None, cache: typing.Union[typing.Dict[str, torch.Tensor], NoneType] = None, head_mask: typing.Optional[torch.Tensor] = None, inputs_embeds: typing.Optional[torch.Tensor] = None, start_positions: typing.Optional[torch.Tensor] = None, end_positions: typing.Optional[torch.Tensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None )` → [transformers.modeling_outputs.QuestionAnsweringModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.QuestionAnsweringModelOutput) or `tuple(torch.FloatTensor)`

**Parameters**

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and `PreTrainedTokenizer.__call__()` for details. [What are input IDs?](../glossary#input-ids)
- **attention_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask)
- **langs** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — A parallel sequence of tokens used to indicate the language of each token in the input. Indices are language ids, which can be obtained from the language names using the two conversion mappings provided in the configuration of the model (only provided for multilingual models). More precisely, the *language name to language id* mapping is in `model.config.lang2id` (a dictionary mapping strings to ints) and the *language id to language name* mapping is in `model.config.id2lang` (a dictionary mapping ints to strings). See the usage examples detailed in the [multilingual documentation](../multilingual).
- **token_type_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: 0 corresponds to a *sentence A* token, 1 corresponds to a *sentence B* token. [What are token type IDs?](../glossary#token-type-ids)
- **position_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids)
- **lengths** (`torch.LongTensor` of shape `(batch_size,)`, *optional*) — Length of each sentence, which can be used to avoid performing attention on padding token indices. You can also use *attention_mask* for the same result (see above); this argument is kept for compatibility. Indices selected in `[0, ..., input_ids.size(-1)]`.
- **cache** (`Dict[str, torch.FloatTensor]`, *optional*) — Dictionary mapping strings to `torch.FloatTensor` that contains precomputed hidden states (keys and values in the attention blocks) as computed by the model (see the `cache` output below). Can be used to speed up sequential decoding. The dictionary object will be modified in-place during the forward pass to add newly computed hidden states.
- **head_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: 1 indicates the head is **not masked**, 0 indicates the head is **masked**.
- **inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.
- **start_positions** (`torch.LongTensor` of shape `(batch_size,)`, *optional*) — Labels for the position (index) of the start of the labelled span, used for computing the token classification loss. Positions are clamped to the length of the sequence (`sequence_length`); positions outside of the sequence are not taken into account for computing the loss. A training-style call is sketched just before the example below.
- **end_positions** (`torch.LongTensor` of shape `(batch_size,)`, *optional*) — Labels for the position (index) of the end of the labelled span, used for computing the token classification loss. Positions are clamped to the length of the sequence (`sequence_length`); positions outside of the sequence are not taken into account for computing the loss.

**Returns:** [transformers.modeling_outputs.QuestionAnsweringModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.QuestionAnsweringModelOutput) or `tuple(torch.FloatTensor)`

A [transformers.modeling_outputs.QuestionAnsweringModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.QuestionAnsweringModelOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([XLMConfig](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMConfig)) and inputs:

- **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) — Total span extraction loss, i.e. the sum of a cross-entropy for the start and end positions.
- **start_logits** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`) — Span-start scores (before SoftMax).
- **end_logits** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`) — Span-end scores (before SoftMax).
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden states of the model at the output of each layer plus the optional initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The [XLMForQuestionAnsweringSimple](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMForQuestionAnsweringSimple) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:

```python
>>> from transformers import AutoTokenizer, XLMForQuestionAnsweringSimple
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048")
>>> model = XLMForQuestionAnsweringSimple.from_pretrained("xlm-mlm-en-2048")

>>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"

>>> inputs = tokenizer(question, text, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> answer_start_index = outputs.start_logits.argmax()
>>> answer_end_index = outputs.end_logits.argmax()

>>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]

>>> # target is "nice puppet"
>>> target_start_index = torch.tensor([14])
>>> target_end_index = torch.tensor([15])

>>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
>>> loss = outputs.loss
```
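To turn the predicted span from the example above into readable text, the selected token ids can be decoded with the same tokenizer. This short follow-up is a minimal sketch that is not part of the original snippet; it reuses the `tokenizer` and `predict_answer_tokens` variables defined in the example:

```python
>>> # Continuing the example above: decode the predicted answer span into a string
>>> predicted_answer = tokenizer.decode(predict_answer_tokens, skip_special_tokens=True)
```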
id="transformers.XLMForQuestionAnswering" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XLMForQuestionAnswering"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_xlm.py#L975" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMForQuestionAnswering.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMForQuestionAnswering.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMConfig">XLMConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. 
Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-3ihem0">XLM Model with a beam-search span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute <code>span start logits</code> and <code>span end logits</code>).</p> <p data-svelte-h="svelte-hmtw9k">This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel">PreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)</p> <p data-svelte-h="svelte-hswkmf">This model is also a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XLMForQuestionAnswering.forward"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>forward</span></h4> <a id="transformers.XLMForQuestionAnswering.forward" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XLMForQuestionAnswering.forward"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 
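For multilingual XLM checkpoints, the `langs` argument documented under `forward()` below can be built from the `model.config.lang2id` mapping. The following is a minimal sketch rather than part of the original documentation; the checkpoint name and language code are illustrative assumptions:

```python
>>> from transformers import AutoTokenizer, XLMForQuestionAnswering
>>> import torch

>>> # Illustrative multilingual checkpoint; any multilingual XLM checkpoint exposing lang2id works the same way
>>> tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-enfr-1024")
>>> model = XLMForQuestionAnswering.from_pretrained("xlm-mlm-enfr-1024")

>>> inputs = tokenizer("Where does Jim live?", "Jim lives in Paris.", return_tensors="pt")
>>> lang_id = model.config.lang2id["en"]  # language name -> language id (see the `langs` parameter below)
>>> langs = torch.full_like(inputs["input_ids"], lang_id)  # one language id per input token

>>> outputs = model(**inputs, langs=langs)
```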
#### forward

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_xlm.py#L985)

( input_ids: typing.Optional[torch.Tensor] = None, attention_mask: typing.Optional[torch.Tensor] = None, langs: typing.Optional[torch.Tensor] = None, token_type_ids: typing.Optional[torch.Tensor] = None, position_ids: typing.Optional[torch.Tensor] = None, lengths: typing.Optional[torch.Tensor] = None, cache: typing.Union[typing.Dict[str, torch.Tensor], NoneType] = None, head_mask: typing.Optional[torch.Tensor] = None, inputs_embeds: typing.Optional[torch.Tensor] = None, start_positions: typing.Optional[torch.Tensor] = None, end_positions: typing.Optional[torch.Tensor] = None, is_impossible: typing.Optional[torch.Tensor] = None, cls_index: typing.Optional[torch.Tensor] = None, p_mask: typing.Optional[torch.Tensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → [transformers.models.xlm.modeling_xlm.XLMForQuestionAnsweringOutput](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.models.xlm.modeling_xlm.XLMForQuestionAnsweringOutput) or `tuple(torch.FloatTensor)`

Parameters

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and `PreTrainedTokenizer.__call__()` for details. [What are input IDs?](../glossary#input-ids)
- **attention_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.

  [What are attention masks?](../glossary#attention-mask)
- **langs** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — A parallel sequence of tokens to be used to indicate the language of each token in the input. Indices are language ids which can be obtained from the language names by using two conversion mappings provided in the configuration of the model (only provided for multilingual models). More precisely, the *language name to language id* mapping is in `model.config.lang2id` (a dictionary string to int) and the *language id to language name* mapping is in `model.config.id2lang` (a dictionary int to string). See usage examples detailed in the [multilingual documentation](../multilingual).
- **token_type_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:
  - 0 corresponds to a *sentence A* token,
  - 1 corresponds to a *sentence B* token.

  [What are token type IDs?](../glossary#token-type-ids)
- **position_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids)
- **lengths** (`torch.LongTensor` of shape `(batch_size,)`, *optional*) — Length of each sentence that can be used to avoid performing attention on padding token indices. You can also use *attention_mask* for the same result (see above); kept here for compatibility. Indices selected in `[0, ..., input_ids.size(-1)]`.
- **cache** (`Dict[str, torch.FloatTensor]`, *optional*) — Dictionary string to `torch.FloatTensor` that contains precomputed hidden states (key and values in the attention blocks) as computed by the model (see `cache` output below). Can be used to speed up sequential decoding. The dictionary object will be modified in-place during the forward pass to add newly computed hidden-states.
- **head_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`:
  - 1 indicates the head is **not masked**,
  - 0 indicates the head is **masked**.
- **inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.
- **start_positions** (`torch.LongTensor` of shape `(batch_size,)`, *optional*) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (`sequence_length`). Positions outside of the sequence are not taken into account for computing the loss.
- **end_positions** (`torch.LongTensor` of shape `(batch_size,)`, *optional*) — Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (`sequence_length`). Positions outside of the sequence are not taken into account for computing the loss.
- **is_impossible** (`torch.LongTensor` of shape `(batch_size,)`, *optional*) — Labels whether a question has an answer or no answer (SQuAD 2.0).
- **cls_index** (`torch.LongTensor` of shape `(batch_size,)`, *optional*) — Labels for position (index) of the classification token to use as input for computing plausibility of the answer.
- **p_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*) — Optional mask of tokens which can't be in answers (e.g. `[CLS]`, `[PAD]`, …). 1.0 means the token should be masked, 0.0 means the token is not masked.

Returns

[transformers.models.xlm.modeling_xlm.XLMForQuestionAnsweringOutput](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.models.xlm.modeling_xlm.XLMForQuestionAnsweringOutput) or `tuple(torch.FloatTensor)`

A [transformers.models.xlm.modeling_xlm.XLMForQuestionAnsweringOutput](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.models.xlm.modeling_xlm.XLMForQuestionAnsweringOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([XLMConfig](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMConfig)) and inputs.

- **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned if both `start_positions` and `end_positions` are provided) — Classification loss as the sum of start token, end token (and is_impossible if provided) classification losses.
- **start_top_log_probs** (`torch.FloatTensor` of shape `(batch_size, config.start_n_top)`, *optional*, returned if `start_positions` or `end_positions` is not provided) — Log probabilities for the top `config.start_n_top` start token possibilities (beam-search).
- **start_top_index** (`torch.LongTensor` of shape `(batch_size, config.start_n_top)`, *optional*, returned if `start_positions` or `end_positions` is not provided) — Indices for the top `config.start_n_top` start token possibilities (beam-search).
- **end_top_log_probs** (`torch.FloatTensor` of shape `(batch_size, config.start_n_top * config.end_n_top)`, *optional*, returned if `start_positions` or `end_positions` is not provided) — Log probabilities for the top `config.start_n_top * config.end_n_top` end token possibilities (beam-search).
- **end_top_index** (`torch.LongTensor` of shape `(batch_size, config.start_n_top * config.end_n_top)`, *optional*, returned if `start_positions` or `end_positions` is not provided) — Indices for the top `config.start_n_top * config.end_n_top` end token possibilities (beam-search).
config.end_n_top</code> end token possibilities (beam-search).</p> </li> <li> <p><strong>cls_logits</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size,)</code>, <em>optional</em>, returned if <code>start_positions</code> or <code>end_positions</code> is not provided) — Log probabilities for the <code>is_impossible</code> label of the answers.</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-1iqncjc">The <a href="/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMForQuestionAnswering">XLMForQuestionAnswering</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.XLMForQuestionAnswering.forward.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMForQuestionAnswering.forward.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 
text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, XLMForQuestionAnswering <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"xlm-mlm-en-2048"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = XLMForQuestionAnswering.from_pretrained(<span class="hljs-string">"xlm-mlm-en-2048"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>input_ids = torch.tensor(tokenizer.encode(<span class="hljs-string">"Hello, my dog is cute"</span>, add_special_tokens=<span class="hljs-literal">True</span>)).unsqueeze( <span class="hljs-meta">... </span> <span class="hljs-number">0</span> <span class="hljs-meta">... 
## TFXLMModel

### class transformers.TFXLMModel

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_tf_xlm.py#L688)

`( *args, **kwargs )`

Parameters

- **config** ([XLMConfig](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

The bare XLM Model transformer outputting raw hidden-states without any specific head on top.

This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.).

This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

TensorFlow models and layers in `transformers` accept two formats as input:

- having all inputs as keyword arguments (like PyTorch models), or
- having all inputs as a list, tuple or dict in the first positional argument.

The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

- a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
- a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
- a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`

Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) you don't need to worry about any of this, as you can just pass inputs like you would to any other Python function!
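As a concrete illustration of the three possibilities above, the short sketch below calls `TFXLMModel` with keyword arguments, with a list, and with a dictionary. It is a minimal sketch assuming the `xlm-mlm-en-2048` checkpoint used elsewhere on this page, not an additional official example.

```python
# Minimal sketch of the input formats listed above (keyword arguments,
# a list in the first positional argument, or a dictionary of named inputs).
from transformers import AutoTokenizer, TFXLMModel

tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048")
model = TFXLMModel.from_pretrained("xlm-mlm-en-2048")

encoded = tokenizer("Hello, my dog is cute", return_tensors="tf")
input_ids, attention_mask = encoded["input_ids"], encoded["attention_mask"]

out_kwargs = model(input_ids=input_ids, attention_mask=attention_mask)         # keyword arguments
out_list = model([input_ids, attention_mask])                                  # list, in docstring order
out_dict = model({"input_ids": input_ids, "attention_mask": attention_mask})   # dict of named inputs

# All three calls are expected to produce the same last_hidden_state
print(out_kwargs.last_hidden_state.shape)
```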
d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>call</span></h4> <a id="transformers.TFXLMModel.call" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.TFXLMModel.call"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_tf_xlm.py#L693" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60">: TFModelInputType | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">langs<span class="opacity-60">: tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_type_ids<span class="opacity-60">: tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">position_ids<span class="opacity-60">: tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">lengths<span class="opacity-60">: tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">cache<span class="opacity-60">: Dict[str, tf.Tensor] | None = 
None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_mask<span class="opacity-60">: tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">inputs_embeds<span class="opacity-60">: tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: bool | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: bool | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: bool | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">training<span class="opacity-60">: bool = False</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFBaseModelOutput">transformers.modeling_tf_outputs.TFBaseModelOutput</a> or <code>tuple(tf.Tensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 13 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMModel.call.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMModel.call.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_ids</strong> (<code>Numpy array</code> or <code>tf.Tensor</code> of shape <code>(batch_size, 
sequence_length)</code>) — Indices of input sequence tokens in the vocabulary.<p></p> <p>Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. See <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> and <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> for details.</p> <p><a href="../glossary#input-ids">What are input IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMModel.call.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMModel.call.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>Numpy array</code> or <code>tf.Tensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMModel.call.langs" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMModel.call.langs"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>langs</strong> (<code>tf.Tensor</code> or <code>Numpy array</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — A parallel sequence of tokens to be used to indicate the language of each token in the input. Indices are languages ids which can be obtained from the language names by using two conversion mappings provided in the configuration of the model (only provided for multilingual models). 
More precisely, the <em>language name to language id</em> mapping is in <code>model.config.lang2id</code> (which is a dictionary string to int) and the <em>language id to language name</em> mapping is in <code>model.config.id2lang</code> (dictionary int to string).<p></p> <p>See usage examples detailed in the <a href="../multilingual">multilingual documentation</a>.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMModel.call.token_type_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMModel.call.token_type_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_type_ids</strong> (<code>Numpy array</code> or <code>tf.Tensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in <code>[0, 1]</code>:<p></p> <ul> <li>0 corresponds to a <em>sentence A</em> token,</li> <li>1 corresponds to a <em>sentence B</em> token.</li> </ul> <p><a href="../glossary#token-type-ids">What are token type IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMModel.call.position_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMModel.call.position_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>position_ids</strong> (<code>Numpy array</code> or <code>tf.Tensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Indices of positions of each input sequence tokens in the position embeddings. 
Selected in the range <code>[0, config.max_position_embeddings - 1]</code>.<p></p> <p><a href="../glossary#position-ids">What are position IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMModel.call.lengths" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMModel.call.lengths"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>lengths</strong> (<code>tf.Tensor</code> or <code>Numpy array</code> of shape <code>(batch_size,)</code>, <em>optional</em>) — Length of each sentence that can be used to avoid performing attention on padding token indices. You can also use <em>attention_mask</em> for the same result (see above), kept here for compatibility. Indices selected in <code>[0, ..., input_ids.size(-1)]</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMModel.call.cache" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMModel.call.cache"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>cache</strong> (<code>Dict[str, tf.Tensor]</code>, <em>optional</em>) — Dictionary string to <code>tf.Tensor</code> that contains precomputed hidden states (key and values in the attention blocks) as computed by the model (see <code>cache</code> output below). 
Can be used to speed up sequential decoding.<p></p> <p>The dictionary object will be modified in-place during the forward pass to add newly computed hidden-states.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMModel.call.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMModel.call.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>head_mask</strong> (<code>Numpy array</code> or <code>tf.Tensor</code> of shape <code>(num_heads,)</code> or <code>(num_layers, num_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the self-attention modules. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMModel.call.inputs_embeds" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMModel.call.inputs_embeds"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>inputs_embeds</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>, <em>optional</em>) — Optionally, instead of passing <code>input_ids</code> you can choose to directly pass an embedded representation. 
This is useful if you want more control over how to convert <code>input_ids</code> indices into associated vectors than the model’s internal embedding lookup matrix.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMModel.call.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMModel.call.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. See <code>attentions</code> under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMModel.call.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMModel.call.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. See <code>hidden_states</code> under returned tensors for more detail. 
This argument can be used only in eager mode, in graph mode the value in the config will be used instead.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMModel.call.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMModel.call.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMModel.call.training" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMModel.call.training"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>training</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).</span></span> </li></ul> <div id="transformers.TFXLMModel.call.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFBaseModelOutput">transformers.modeling_tf_outputs.TFBaseModelOutput</a> or <code>tuple(tf.Tensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a 
href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFBaseModelOutput">transformers.modeling_tf_outputs.TFBaseModelOutput</a> or a tuple of <code>tf.Tensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMConfig">XLMConfig</a>) and inputs.</p> <ul> <li> <p><strong>last_hidden_state</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>) — Sequence of hidden-states at the output of the last layer of the model.</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(tf.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>tf.Tensor</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>tf.Tensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-e8ityc">The <a href="/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.TFXLMModel">TFXLMModel</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.TFXLMModel.call.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMModel.call.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div 
class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, TFXLMModel <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> tensorflow <span class="hljs-keyword">as</span> tf <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"xlm-mlm-en-2048"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = TFXLMModel.from_pretrained(<span class="hljs-string">"xlm-mlm-en-2048"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(<span class="hljs-string">"Hello, my dog is cute"</span>, return_tensors=<span class="hljs-string">"tf"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model(inputs) <span class="hljs-meta">&gt;&gt;&gt; </span>last_hidden_states = outputs.last_hidden_state</pre></div></div></div></div> <h2 class="relative group"><a id="transformers.TFXLMWithLMHeadModel" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMWithLMHeadModel"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1n2uhva">TFXLMWithLMHeadModel</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.TFXLMWithLMHeadModel"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg 
## TFXLMWithLMHeadModel

### class transformers.TFXLMWithLMHeadModel

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_tf_xlm.py#L793)

`( *args, **kwargs )`

Parameters

- **config** ([XLMConfig](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

The XLM Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings).

This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.).

This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

TensorFlow models and layers in `transformers` accept two formats as input:

- having all inputs as keyword arguments (like PyTorch models), or
- having all inputs as a list, tuple or dict in the first positional argument.

The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

- a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
- a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
- a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`

Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) you don't need to worry about any of this, as you can just pass inputs like you would to any other Python function!
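Usage mirrors the base model above. The short sketch below is not part of the original docs; it assumes the output of the language-modeling head exposes a vocabulary-sized `logits` field (as suggested by the `TFXLMWithLMHeadModelOutput` return type of `call()` below) and that the checkpoint's mask token matches the one used during its masked-language-model pretraining.

```python
# Minimal sketch: score a masked position with the LM head (assumes a `logits`
# output field and an MLM-pretrained checkpoint, here "xlm-mlm-en-2048").
import tensorflow as tf
from transformers import AutoTokenizer, TFXLMWithLMHeadModel

tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048")
model = TFXLMWithLMHeadModel.from_pretrained("xlm-mlm-en-2048")

text = f"Hello, my dog is {tokenizer.mask_token}"
inputs = tokenizer(text, return_tensors="tf")
outputs = model(inputs)

logits = outputs.logits  # (batch_size, sequence_length, vocab_size)

# Locate the masked position and take the highest-scoring vocabulary entry
mask_position = tf.where(inputs["input_ids"][0] == tokenizer.mask_token_id)[0, 0]
predicted_id = int(tf.math.argmax(logits[0, mask_position]))
print(tokenizer.decode([predicted_id]))
```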
#### call

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_tf_xlm.py#L822)

`( input_ids: TFModelInputType | None = None, attention_mask: np.ndarray | tf.Tensor | None = None, langs: np.ndarray | tf.Tensor | None = None, token_type_ids: np.ndarray | tf.Tensor | None = None, position_ids: np.ndarray | tf.Tensor | None = None, lengths: np.ndarray | tf.Tensor | None = None, cache: Optional[Dict[str, tf.Tensor]] = None, head_mask: np.ndarray | tf.Tensor | None = None, inputs_embeds: np.ndarray | tf.Tensor | None = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, training: bool = False )` → `transformers.models.xlm.modeling_tf_xlm.TFXLMWithLMHeadModelOutput` or `tuple(tf.Tensor)`

**Parameters**

- **input_ids** (`Numpy array` or `tf.Tensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See `PreTrainedTokenizer.__call__()` and `PreTrainedTokenizer.encode()` for details. [What are input IDs?](../glossary#input-ids)
- **attention_mask** (`Numpy array` or `tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask)
- **langs** (`tf.Tensor` or `Numpy array` of shape `(batch_size, sequence_length)`, *optional*) — A parallel sequence of tokens to be used to indicate the language of each token in the input. Indices are language ids which can be obtained from the language names by using two conversion mappings provided in the configuration of the model (only provided for multilingual models). More precisely, the *language name to language id* mapping is in `model.config.lang2id` (a dictionary of string to int) and the *language id to language name* mapping is in `model.config.id2lang` (a dictionary of int to string). See usage examples detailed in the [multilingual documentation](../multilingual).
- **token_type_ids** (`Numpy array` or `tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: 0 corresponds to a *sentence A* token, 1 corresponds to a *sentence B* token. [What are token type IDs?](../glossary#token-type-ids)
- **position_ids** (`Numpy array` or `tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids)
- **lengths** (`tf.Tensor` or `Numpy array` of shape `(batch_size,)`, *optional*) — Length of each sentence, which can be used to avoid performing attention on padding token indices. You can also use *attention_mask* for the same result (see above); this argument is kept for compatibility. Indices selected in `[0, ..., input_ids.size(-1)]`.
- **cache** (`Dict[str, tf.Tensor]`, *optional*) — Dictionary mapping strings to `tf.Tensor` that contains precomputed hidden states (keys and values in the attention blocks) as computed by the model (see the `cache` output below). Can be used to speed up sequential decoding. The dictionary object will be modified in-place during the forward pass to add newly computed hidden states.
- **head_mask** (`Numpy array` or `tf.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: 1 indicates the head is **not masked**, 0 indicates the head is **masked**.
- **inputs_embeds** (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. This argument can only be used in eager mode; in graph mode the value in the config will be used instead.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. This argument can only be used in eager mode; in graph mode the value in the config will be used instead.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple. This argument can be used in eager mode; in graph mode the value will always be set to `True`.
- **training** (`bool`, *optional*, defaults to `False`) — Whether or not to use the model in training mode (some modules, like dropout, have different behaviors between training and evaluation).

**Returns**

`transformers.models.xlm.modeling_tf_xlm.TFXLMWithLMHeadModelOutput` or `tuple(tf.Tensor)`

A `transformers.models.xlm.modeling_tf_xlm.TFXLMWithLMHeadModelOutput` or a tuple of `tf.Tensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([XLMConfig](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMConfig)) and inputs.

- **logits** (`tf.Tensor` of shape `(batch_size, sequence_length, config.vocab_size)`) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
- **hidden_states** (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- **attentions** (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The [TFXLMWithLMHeadModel](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.TFXLMWithLMHeadModel) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

```python
>>> from transformers import AutoTokenizer, TFXLMWithLMHeadModel
>>> import tensorflow as tf

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048")
>>> model = TFXLMWithLMHeadModel.from_pretrained("xlm-mlm-en-2048")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> outputs = model(inputs)
>>> logits = outputs.logits
```
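The `langs` argument documented above is only meaningful for multilingual checkpoints that carry the `lang2id`/`id2lang` mappings in their config. As a hedged sketch of how such a tensor is typically built, assuming the multilingual `xlm-clm-enfr-1024` checkpoint and that its config exposes an `"en"` entry in `lang2id`:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFXLMWithLMHeadModel

tokenizer = AutoTokenizer.from_pretrained("xlm-clm-enfr-1024")
model = TFXLMWithLMHeadModel.from_pretrained("xlm-clm-enfr-1024")

inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")

# one language id per input token, taken from the language-name -> id mapping
english_id = model.config.lang2id["en"]
langs = tf.fill(tf.shape(inputs["input_ids"]), english_id)

outputs = model(inputs["input_ids"], attention_mask=inputs["attention_mask"], langs=langs)
logits = outputs.logits
```

See the [multilingual documentation](../multilingual) for fuller examples.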
## TFXLMForSequenceClassification

### class transformers.TFXLMForSequenceClassification

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_tf_xlm.py#L879)

`( *args, **kwargs )`

**Parameters**

- **config** ([XLMConfig](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

XLM Model with a sequence classification/regression head on top (a linear layer on top of the pooled output), e.g. for GLUE tasks.

This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

TensorFlow models and layers in `transformers` accept two formats as input:

- having all inputs as keyword arguments (like PyTorch models), or
- having all inputs as a list, tuple or dict in the first positional argument.

The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

- a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
- a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
- a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`

Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry about any of this, as you can just pass inputs like you would to any other Python function!
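Before the full `call()` reference below, here is a minimal usage sketch for the classification head. Note that loading a sequence-classification head on top of the base `xlm-mlm-en-2048` checkpoint initialises that head randomly until it is fine-tuned, so the predicted class is only illustrative; the checkpoint and the `num_labels` value are assumptions for the example.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFXLMForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048")
# the classification head is newly initialised on top of the pretrained encoder
model = TFXLMForSequenceClassification.from_pretrained("xlm-mlm-en-2048", num_labels=2)

inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")

# keyword-argument input format; logits has shape (batch_size, num_labels)
logits = model(**inputs).logits
predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
```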
#### call

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_tf_xlm.py#L887)

`( input_ids: TFModelInputType | None = None, attention_mask: np.ndarray | tf.Tensor | None = None, langs: np.ndarray | tf.Tensor | None = None, token_type_ids: np.ndarray | tf.Tensor | None = None, position_ids: np.ndarray | tf.Tensor | None = None, lengths: np.ndarray | tf.Tensor | None = None, cache: Optional[Dict[str, tf.Tensor]] = None, head_mask: np.ndarray | tf.Tensor | None = None, inputs_embeds: np.ndarray | tf.Tensor | None = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, labels: np.ndarray | tf.Tensor | None = None, training: bool = False )` → [transformers.modeling_tf_outputs.TFSequenceClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFSequenceClassifierOutput) or `tuple(tf.Tensor)`

**Parameters**

- **input_ids** (`Numpy array` or `tf.Tensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See `PreTrainedTokenizer.__call__()` and `PreTrainedTokenizer.encode()` for details. [What are input IDs?](../glossary#input-ids)
- **attention_mask** (`Numpy array` or `tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask)
- **langs** (`tf.Tensor` or `Numpy array` of shape `(batch_size, sequence_length)`, *optional*) — A parallel sequence of tokens to be used to indicate the language of each token in the input. Indices are language ids which can be obtained from the language names by using two conversion mappings provided in the configuration of the model (only provided for multilingual models). More precisely, the *language name to language id* mapping is in `model.config.lang2id` (a dictionary of string to int) and the *language id to language name* mapping is in `model.config.id2lang` (a dictionary of int to string). See usage examples detailed in the [multilingual documentation](../multilingual).
- **token_type_ids** (`Numpy array` or `tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: 0 corresponds to a *sentence A* token, 1 corresponds to a *sentence B* token. [What are token type IDs?](../glossary#token-type-ids)
- **position_ids** (`Numpy array` or `tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids)
- **lengths** (`tf.Tensor` or `Numpy array` of shape `(batch_size,)`, *optional*) — Length of each sentence, which can be used to avoid performing attention on padding token indices. You can also use *attention_mask* for the same result (see above); this argument is kept for compatibility. Indices selected in `[0, ..., input_ids.size(-1)]`.
- **cache** (`Dict[str, tf.Tensor]`, *optional*) — Dictionary mapping strings to `tf.Tensor` that contains precomputed hidden states (keys and values in the attention blocks) as computed by the model (see the `cache` output below). Can be used to speed up sequential decoding. The dictionary object will be modified in-place during the forward pass to add newly computed hidden states.
- **head_mask** (`Numpy array` or `tf.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: 1 indicates the head is **not masked**, 0 indicates the head is **masked**.
- **inputs_embeds** (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. This argument can only be used in eager mode; in graph mode the value in the config will be used instead.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. This argument can only be used in eager mode; in graph mode the value in the config will be used instead.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple. This argument can be used in eager mode; in graph mode the value will always be set to `True`.
- **training** (`bool`, *optional*, defaults to `False`) — Whether or not to use the model in training mode (some modules, like dropout, have different behaviors between training and evaluation).
- **labels** (`tf.Tensor` of shape `(batch_size,)`, *optional*) — Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss); if `config.num_labels > 1` a classification loss is computed (Cross-Entropy).
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>labels</strong> (<code>tf.Tensor</code> of shape <code>(batch_size,)</code>, <em>optional</em>) — Labels for computing the sequence classification/regression loss. Indices should be in <code>[0, ..., config.num_labels - 1]</code>. If <code>config.num_labels == 1</code> a regression loss is computed (Mean-Square loss), If <code>config.num_labels &gt; 1</code> a classification loss is computed (Cross-Entropy).</span></span> </li></ul> <div id="transformers.TFXLMForSequenceClassification.call.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFSequenceClassifierOutput">transformers.modeling_tf_outputs.TFSequenceClassifierOutput</a> or <code>tuple(tf.Tensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFSequenceClassifierOutput">transformers.modeling_tf_outputs.TFSequenceClassifierOutput</a> or a tuple of <code>tf.Tensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMConfig">XLMConfig</a>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, )</code>, <em>optional</em>, returned when <code>labels</code> is provided) — Classification (or regression if config.num_labels==1) loss.</p> </li> <li> <p><strong>logits</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, config.num_labels)</code>) — Classification (or regression if config.num_labels==1) scores (before SoftMax).</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>tf.Tensor</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>tf.Tensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention 
The [TFXLMForSequenceClassification](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.TFXLMForSequenceClassification) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

```python
>>> from transformers import AutoTokenizer, TFXLMForSequenceClassification
>>> import tensorflow as tf

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048")
>>> model = TFXLMForSequenceClassification.from_pretrained("xlm-mlm-en-2048")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")

>>> logits = model(**inputs).logits

>>> predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
```

```python
>>> # To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
>>> num_labels = len(model.config.id2label)
>>> model = TFXLMForSequenceClassification.from_pretrained("xlm-mlm-en-2048", num_labels=num_labels)

>>> labels = tf.constant(1)
>>> loss = model(**inputs, labels=labels).loss
```
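Continuing the classification example above, a short sketch of mapping the predicted class id back to a label string, assuming the loaded checkpoint's config provides an `id2label` mapping:

```python
>>> # Assumes `model.config.id2label` maps class ids to label names for this checkpoint.
>>> predicted_label = model.config.id2label[predicted_class_id]
```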
## TFXLMForMultipleChoice

**class transformers.TFXLMForMultipleChoice** [(source)](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_tf_xlm.py#L957)

`( *args, **kwargs )`

**Parameters**

- **config** ([XLMConfig](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

XLM Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax), e.g. for RocStories/SWAG tasks.

This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)

This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

TensorFlow models and layers in `transformers` accept two formats as input:

- having all inputs as keyword arguments (like PyTorch models), or
- having all inputs as a list, tuple or dict in the first positional argument.

The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first positional argument (see the sketch below):

- a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
- a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
- a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`

Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) you don't need to worry about any of this, as you can just pass inputs like you would to any other Python function!
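To make the three possibilities above concrete, here is a minimal sketch; it assumes `input_ids` and `attention_mask` tensors of shape `(batch_size, num_choices, sequence_length)` have already been prepared (for example as in the multiple-choice example further below):

```python
>>> # Three ways to gather the inputs in the first positional argument;
>>> # `input_ids` and `attention_mask` are assumed to be prepared beforehand.
>>> outputs = model(input_ids)                                    # a single tensor
>>> outputs = model([input_ids, attention_mask])                  # a list, in docstring order
>>> outputs = model({"input_ids": input_ids, "attention_mask": attention_mask})  # a dict keyed by input name
```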
clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>call</span></h4> <a id="transformers.TFXLMForMultipleChoice.call" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.TFXLMForMultipleChoice.call"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_tf_xlm.py#L986" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60">: TFModelInputType | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">langs<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_type_ids<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">position_ids<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">lengths<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white 
dark:hover:bg-white dark:hover:text-black">cache<span class="opacity-60">: Optional[Dict[str, tf.Tensor]] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_mask<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">inputs_embeds<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: Optional[bool] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">labels<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">training<span class="opacity-60">: bool = False</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput">transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput</a> or <code>tuple(tf.Tensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 13 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMForMultipleChoice.call.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMForMultipleChoice.call.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 
79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_ids</strong> (<code>Numpy array</code> or <code>tf.Tensor</code> of shape <code>(batch_size, num_choices, sequence_length)</code>) — Indices of input sequence tokens in the vocabulary.<p></p> <p>Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. See <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> and <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> for details.</p> <p><a href="../glossary#input-ids">What are input IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMForMultipleChoice.call.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMForMultipleChoice.call.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>Numpy array</code> or <code>tf.Tensor</code> of shape <code>(batch_size, num_choices, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMForMultipleChoice.call.langs" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMForMultipleChoice.call.langs"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>langs</strong> (<code>tf.Tensor</code> or <code>Numpy array</code> of shape <code>(batch_size, num_choices, sequence_length)</code>, <em>optional</em>) — A parallel sequence of tokens to be used to indicate the language of each token in the input. Indices are languages ids which can be obtained from the language names by using two conversion mappings provided in the configuration of the model (only provided for multilingual models). 
More precisely, the <em>language name to language id</em> mapping is in <code>model.config.lang2id</code> (which is a dictionary string to int) and the <em>language id to language name</em> mapping is in <code>model.config.id2lang</code> (dictionary int to string).<p></p> <p>See usage examples detailed in the <a href="../multilingual">multilingual documentation</a>.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMForMultipleChoice.call.token_type_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMForMultipleChoice.call.token_type_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_type_ids</strong> (<code>Numpy array</code> or <code>tf.Tensor</code> of shape <code>(batch_size, num_choices, sequence_length)</code>, <em>optional</em>) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in <code>[0, 1]</code>:<p></p> <ul> <li>0 corresponds to a <em>sentence A</em> token,</li> <li>1 corresponds to a <em>sentence B</em> token.</li> </ul> <p><a href="../glossary#token-type-ids">What are token type IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMForMultipleChoice.call.position_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMForMultipleChoice.call.position_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>position_ids</strong> (<code>Numpy array</code> or <code>tf.Tensor</code> of shape <code>(batch_size, num_choices, sequence_length)</code>, <em>optional</em>) — Indices of positions of each input sequence tokens in the position embeddings. 
Selected in the range <code>[0, config.max_position_embeddings - 1]</code>.<p></p> <p><a href="../glossary#position-ids">What are position IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMForMultipleChoice.call.lengths" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMForMultipleChoice.call.lengths"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>lengths</strong> (<code>tf.Tensor</code> or <code>Numpy array</code> of shape <code>(batch_size,)</code>, <em>optional</em>) — Length of each sentence that can be used to avoid performing attention on padding token indices. You can also use <em>attention_mask</em> for the same result (see above), kept here for compatibility. Indices selected in <code>[0, ..., input_ids.size(-1)]</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMForMultipleChoice.call.cache" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMForMultipleChoice.call.cache"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>cache</strong> (<code>Dict[str, tf.Tensor]</code>, <em>optional</em>) — Dictionary string to <code>tf.Tensor</code> that contains precomputed hidden states (key and values in the attention blocks) as computed by the model (see <code>cache</code> output below). 
Can be used to speed up sequential decoding.<p></p> <p>The dictionary object will be modified in-place during the forward pass to add newly computed hidden-states.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMForMultipleChoice.call.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMForMultipleChoice.call.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>head_mask</strong> (<code>Numpy array</code> or <code>tf.Tensor</code> of shape <code>(num_heads,)</code> or <code>(num_layers, num_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the self-attention modules. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMForMultipleChoice.call.inputs_embeds" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMForMultipleChoice.call.inputs_embeds"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>inputs_embeds</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, num_choices, sequence_length, hidden_size)</code>, <em>optional</em>) — Optionally, instead of passing <code>input_ids</code> you can choose to directly pass an embedded representation. 
This is useful if you want more control over how to convert <code>input_ids</code> indices into associated vectors than the model’s internal embedding lookup matrix.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMForMultipleChoice.call.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMForMultipleChoice.call.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. See <code>attentions</code> under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMForMultipleChoice.call.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMForMultipleChoice.call.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. See <code>hidden_states</code> under returned tensors for more detail. 
This argument can be used only in eager mode, in graph mode the value in the config will be used instead.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMForMultipleChoice.call.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMForMultipleChoice.call.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMForMultipleChoice.call.training" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMForMultipleChoice.call.training"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>training</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).</span></span> </li></ul> <div id="transformers.TFXLMForMultipleChoice.call.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput">transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput</a> or <code>tuple(tf.Tensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 
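As referenced in the `langs` description above, a minimal sketch of building a `langs` tensor; it assumes a multilingual XLM checkpoint whose config defines the `lang2id` mapping, and an `input_ids` tensor that was already prepared with the tokenizer:

```python
>>> import tensorflow as tf

>>> # Assumes `model.config.lang2id` exists (only provided for multilingual XLM checkpoints)
>>> # and that `input_ids` has already been built.
>>> language_id = model.config.lang2id["en"]           # language name -> language id
>>> langs = tf.fill(tf.shape(input_ids), language_id)  # one language id per input position
```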
dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput">transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput</a> or a tuple of <code>tf.Tensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMConfig">XLMConfig</a>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<code>tf.Tensor</code> of shape <em>(batch_size, )</em>, <em>optional</em>, returned when <code>labels</code> is provided) — Classification loss.</p> </li> <li> <p><strong>logits</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, num_choices)</code>) — <em>num_choices</em> is the second dimension of the input tensors. (see <em>input_ids</em> above).</p> <p>Classification scores (before SoftMax).</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>tf.Tensor</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>tf.Tensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-kijyfe">The <a href="/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.TFXLMForMultipleChoice">TFXLMForMultipleChoice</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.TFXLMForMultipleChoice.call.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMForMultipleChoice.call.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 
79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, TFXLMForMultipleChoice <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> tensorflow <span class="hljs-keyword">as</span> tf <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"xlm-mlm-en-2048"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = TFXLMForMultipleChoice.from_pretrained(<span class="hljs-string">"xlm-mlm-en-2048"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>prompt = <span class="hljs-string">"In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."</span> <span class="hljs-meta">&gt;&gt;&gt; </span>choice0 = <span class="hljs-string">"It is eaten with a fork and a knife."</span> <span class="hljs-meta">&gt;&gt;&gt; </span>choice1 = <span class="hljs-string">"It is eaten while held in the hand."</span> <span class="hljs-meta">&gt;&gt;&gt; </span>encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors=<span class="hljs-string">"tf"</span>, padding=<span class="hljs-literal">True</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = {k: tf.expand_dims(v, <span class="hljs-number">0</span>) <span class="hljs-keyword">for</span> k, v <span class="hljs-keyword">in</span> encoding.items()} <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model(inputs) <span class="hljs-comment"># batch size is 1</span> <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># the linear classifier still needs to be trained</span> <span class="hljs-meta">&gt;&gt;&gt; </span>logits = outputs.logits</pre></div></div></div></div> <h2 class="relative group"><a id="transformers.TFXLMForTokenClassification" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 
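Since the multiple-choice `loss` is only returned when `labels` is provided (see the output description above), a label for the correct choice can be passed alongside the tokenized inputs. The following is a minimal sketch, not taken from the official docstring, continuing the example above; the chosen label index is hypothetical and the classification head is still untrained, so the loss value itself is not meaningful:

```python
>>> # label index 1 marks choice1 ("It is eaten while held in the hand.") as the correct answer
>>> labels = tf.constant([1])
>>> outputs = model(**inputs, labels=labels)
>>> loss = outputs.loss
>>> logits = outputs.logits
```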
with-hover:right-full" href="#transformers.TFXLMForTokenClassification"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1bhtjza">TFXLMForTokenClassification</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.TFXLMForTokenClassification"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">TFXLMForTokenClassification</span></span></h3> <a id="transformers.TFXLMForTokenClassification" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.TFXLMForTokenClassification"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_tf_xlm.py#L1076" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden 
md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">*args<span class="opacity-60"></span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMForTokenClassification.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMForTokenClassification.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMConfig">XLMConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-denc24">XLM Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks.</p> <p data-svelte-h="svelte-1i0vt4o">This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel">TFPreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)</p> <p data-svelte-h="svelte-1ivrf8m">This model is also a <a href="https://www.tensorflow.org/api_docs/python/tf/keras/Model" rel="nofollow">tf.keras.Model</a> subclass. 
Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-1ajbfxg">TensorFlow models and layers in <code>transformers</code> accept two formats as input:</p> <ul data-svelte-h="svelte-qm1t26"><li>having all inputs as keyword arguments (like PyTorch models), or</li> <li>having all inputs as a list, tuple or dict in the first positional argument.</li></ul> <p data-svelte-h="svelte-1v9qsc5">The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like <code>model.fit()</code> things should “just work” for you - just pass your inputs and labels in any format that <code>model.fit()</code> supports! If, however, you want to use the second format outside of Keras methods like <code>fit()</code> and <code>predict()</code>, such as when creating your own layers or models with the Keras <code>Functional</code> API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:</p> <ul data-svelte-h="svelte-15scerc"><li>a single Tensor with <code>input_ids</code> only and nothing else: <code>model(input_ids)</code></li> <li>a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: <code>model([input_ids, attention_mask])</code> or <code>model([input_ids, attention_mask, token_type_ids])</code></li> <li>a dictionary with one or several input Tensors associated to the input names given in the docstring: <code>model({"input_ids": input_ids, "token_type_ids": token_type_ids})</code></li></ul> <p data-svelte-h="svelte-1an3odd">Note that when creating models and layers with <a href="https://keras.io/guides/making_new_layers_and_models_via_subclassing/" rel="nofollow">subclassing</a> then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function!</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.TFXLMForTokenClassification.call"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 
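The sketch below illustrates the three ways of gathering inputs described above, using the same `xlm-mlm-en-2048` checkpoint as the surrounding examples; the input sentence is arbitrary and the outputs are identical for all three call styles:

```python
>>> from transformers import AutoTokenizer, TFXLMForTokenClassification

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048")
>>> model = TFXLMForTokenClassification.from_pretrained("xlm-mlm-en-2048")
>>> encoding = tokenizer("Hello world", return_tensors="tf")

>>> # 1. all inputs as keyword arguments
>>> outputs = model(input_ids=encoding["input_ids"], attention_mask=encoding["attention_mask"])

>>> # 2. a list of input tensors, in the order given in the docstring
>>> outputs = model([encoding["input_ids"], encoding["attention_mask"]])

>>> # 3. a dictionary mapping input names to tensors
>>> outputs = model({"input_ids": encoding["input_ids"], "attention_mask": encoding["attention_mask"]})
```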
#### call

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_tf_xlm.py#L1087)

( input_ids: TFModelInputType | None = None, attention_mask: np.ndarray | tf.Tensor | None = None, langs: np.ndarray | tf.Tensor | None = None, token_type_ids: np.ndarray | tf.Tensor | None = None, position_ids: np.ndarray | tf.Tensor | None = None, lengths: np.ndarray | tf.Tensor | None = None, cache: Optional[Dict[str, tf.Tensor]] = None, head_mask: np.ndarray | tf.Tensor | None = None, inputs_embeds: np.ndarray | tf.Tensor | None = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, labels: np.ndarray | tf.Tensor | None = None, training: bool = False ) → [transformers.modeling_tf_outputs.TFTokenClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFTokenClassifierOutput) or `tuple(tf.Tensor)`

Parameters

- **input_ids** (`Numpy array` or `tf.Tensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.__call__()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) and [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) for details. [What are input IDs?](../glossary#input-ids)
- **attention_mask** (`Numpy array` or `tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask)
- **langs** (`tf.Tensor` or `Numpy array` of shape `(batch_size, sequence_length)`, *optional*) — A parallel sequence of tokens used to indicate the language of each token in the input. Indices are language ids which can be obtained from the language names by using the two conversion mappings provided in the configuration of the model (only provided for multilingual models). More precisely, the *language name to language id* mapping is in `model.config.lang2id` (a dictionary mapping string to int) and the *language id to language name* mapping is in `model.config.id2lang` (a dictionary mapping int to string). See usage examples detailed in the [multilingual documentation](../multilingual), and the short sketch after this parameter list.
- **token_type_ids** (`Numpy array` or `tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: 0 corresponds to a *sentence A* token, 1 corresponds to a *sentence B* token. [What are token type IDs?](../glossary#token-type-ids)
- **position_ids** (`Numpy array` or `tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids)
- **lengths** (`tf.Tensor` or `Numpy array` of shape `(batch_size,)`, *optional*) — Length of each sentence, which can be used to avoid performing attention on padding token indices. You can also use *attention_mask* for the same result (see above); kept here for compatibility. Indices selected in `[0, ..., input_ids.size(-1)]`.
- **cache** (`Dict[str, tf.Tensor]`, *optional*) — Dictionary string to `tf.Tensor` that contains precomputed hidden states (key and values in the attention blocks) as computed by the model (see `cache` output below). Can be used to speed up sequential decoding. The dictionary object will be modified in-place during the forward pass to add newly computed hidden-states.
- **head_mask** (`Numpy array` or `tf.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: 1 indicates the head is **not masked**, 0 indicates the head is **masked**.
- **inputs_embeds** (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model’s internal embedding lookup matrix.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the config will be used instead.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the config will be used instead.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple. This argument can be used in eager mode; in graph mode the value will always be set to `True`.
- **training** (`bool`, *optional*, defaults to `False`) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).
- **labels** (`tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Labels for computing the token classification loss. Indices should be in `[0, ..., config.num_labels - 1]`.
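The `langs` tensor described above can be built directly from the model configuration. The following is a minimal sketch, not taken from the official docstring, and it assumes the multilingual `xlm-mlm-xnli15-1024` checkpoint (any XLM checkpoint whose config exposes `lang2id` works the same way); the token-classification head is randomly initialized here, just as in the examples on this page:

```python
>>> import tensorflow as tf
>>> from transformers import AutoTokenizer, TFXLMForTokenClassification

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-xnli15-1024")
>>> model = TFXLMForTokenClassification.from_pretrained("xlm-mlm-xnli15-1024")

>>> inputs = tokenizer("HuggingFace is based in Paris and New York", return_tensors="tf")

>>> # map the language name to its id and build a `langs` tensor with the same shape as `input_ids`
>>> english_id = model.config.lang2id["en"]
>>> langs = tf.fill(tf.shape(inputs["input_ids"]), english_id)

>>> outputs = model(**inputs, langs=langs)
>>> logits = outputs.logits  # shape (batch_size, sequence_length, config.num_labels)
```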
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>labels</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Labels for computing the token classification loss. Indices should be in <code>[0, ..., config.num_labels - 1]</code>.</span></span> </li></ul> <div id="transformers.TFXLMForTokenClassification.call.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFTokenClassifierOutput">transformers.modeling_tf_outputs.TFTokenClassifierOutput</a> or <code>tuple(tf.Tensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFTokenClassifierOutput">transformers.modeling_tf_outputs.TFTokenClassifierOutput</a> or a tuple of <code>tf.Tensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMConfig">XLMConfig</a>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<code>tf.Tensor</code> of shape <code>(n,)</code>, <em>optional</em>, where n is the number of unmasked labels, returned when <code>labels</code> is provided) — Classification loss.</p> </li> <li> <p><strong>logits</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, sequence_length, config.num_labels)</code>) — Classification scores (before SoftMax).</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>tf.Tensor</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>tf.Tensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-iowcww">The <a href="/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.TFXLMForTokenClassification">TFXLMForTokenClassification</a> forward method, 
Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example:

```python
>>> from transformers import AutoTokenizer, TFXLMForTokenClassification
>>> import tensorflow as tf

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048")
>>> model = TFXLMForTokenClassification.from_pretrained("xlm-mlm-en-2048")

>>> inputs = tokenizer(
...     "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="tf"
... )

>>> logits = model(**inputs).logits
>>> predicted_token_class_ids = tf.math.argmax(logits, axis=-1)

>>> # Note that tokens are classified rather than input words, which means that
>>> # there might be more predicted token classes than words.
>>> # Multiple token classes might account for the same word
>>> predicted_tokens_classes = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()]
```
class="hljs-meta">&gt;&gt;&gt; </span>labels = predicted_token_class_ids <span class="hljs-meta">&gt;&gt;&gt; </span>loss = tf.math.reduce_mean(model(**inputs, labels=labels).loss)</pre></div></div></div></div> <h2 class="relative group"><a id="transformers.TFXLMForQuestionAnsweringSimple" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMForQuestionAnsweringSimple"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-10s6v8z">TFXLMForQuestionAnsweringSimple</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.TFXLMForQuestionAnsweringSimple"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">TFXLMForQuestionAnsweringSimple</span></span></h3> <a id="transformers.TFXLMForQuestionAnsweringSimple" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.TFXLMForQuestionAnsweringSimple"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 
1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_tf_xlm.py#L1156" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">*args<span class="opacity-60"></span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMForQuestionAnsweringSimple.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMForQuestionAnsweringSimple.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMConfig">XLMConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1hbg3bv">XLM Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute <code>span start logits</code> and <code>span end logits</code>).</p> <p data-svelte-h="svelte-1i0vt4o">This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel">TFPreTrainedModel</a>. 
Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)</p> <p data-svelte-h="svelte-1ivrf8m">This model is also a <a href="https://www.tensorflow.org/api_docs/python/tf/keras/Model" rel="nofollow">tf.keras.Model</a> subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-1ajbfxg">TensorFlow models and layers in <code>transformers</code> accept two formats as input:</p> <ul data-svelte-h="svelte-qm1t26"><li>having all inputs as keyword arguments (like PyTorch models), or</li> <li>having all inputs as a list, tuple or dict in the first positional argument.</li></ul> <p data-svelte-h="svelte-1v9qsc5">The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like <code>model.fit()</code> things should “just work” for you - just pass your inputs and labels in any format that <code>model.fit()</code> supports! If, however, you want to use the second format outside of Keras methods like <code>fit()</code> and <code>predict()</code>, such as when creating your own layers or models with the Keras <code>Functional</code> API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:</p> <ul data-svelte-h="svelte-15scerc"><li>a single Tensor with <code>input_ids</code> only and nothing else: <code>model(input_ids)</code></li> <li>a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: <code>model([input_ids, attention_mask])</code> or <code>model([input_ids, attention_mask, token_type_ids])</code></li> <li>a dictionary with one or several input Tensors associated to the input names given in the docstring: <code>model({"input_ids": input_ids, "token_type_ids": token_type_ids})</code></li></ul> <p data-svelte-h="svelte-1an3odd">Note that when creating models and layers with <a href="https://keras.io/guides/making_new_layers_and_models_via_subclassing/" rel="nofollow">subclassing</a> then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function!</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.TFXLMForQuestionAnsweringSimple.call"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" 
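As a concrete illustration of the three input formats listed above, here is a minimal sketch that passes the same encoded inputs to the model in each way (it reuses the `xlm-mlm-en-2048` checkpoint from the example further below; the variable names are only illustrative, and all three calls are equivalent):

```python
>>> from transformers import AutoTokenizer, TFXLMForQuestionAnsweringSimple

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048")
>>> model = TFXLMForQuestionAnsweringSimple.from_pretrained("xlm-mlm-en-2048")
>>> inputs = tokenizer("Who was Jim Henson?", "Jim Henson was a nice puppet", return_tensors="tf")

>>> # 1. keyword arguments, as with PyTorch models
>>> outputs = model(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"])

>>> # 2. a list in the first positional argument, in the order given in the docstring
>>> outputs = model([inputs["input_ids"], inputs["attention_mask"]])

>>> # 3. a dictionary mapping input names to tensors
>>> outputs = model({"input_ids": inputs["input_ids"], "attention_mask": inputs["attention_mask"]})
```

Which format to use is purely a question of what the surrounding Keras code prefers; the model returns the same `start_logits` and `end_logits` in every case.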
clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>call</span></h4> <a id="transformers.TFXLMForQuestionAnsweringSimple.call" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.TFXLMForQuestionAnsweringSimple.call"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm/modeling_tf_xlm.py#L1164" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60">: TFModelInputType | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">langs<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_type_ids<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">position_ids<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">lengths<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black 
hover:text-white dark:hover:bg-white dark:hover:text-black">cache<span class="opacity-60">: Optional[Dict[str, tf.Tensor]] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_mask<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">inputs_embeds<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">start_positions<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">end_positions<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">training<span class="opacity-60">: bool = False</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput">transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput</a> or <code>tuple(tf.Tensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 15 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMForQuestionAnsweringSimple.call.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMForQuestionAnsweringSimple.call.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" 
viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_ids</strong> (<code>Numpy array</code> or <code>tf.Tensor</code> of shape <code>(batch_size, sequence_length)</code>) — Indices of input sequence tokens in the vocabulary.<p></p> <p>Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. See <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> and <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> for details.</p> <p><a href="../glossary#input-ids">What are input IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMForQuestionAnsweringSimple.call.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMForQuestionAnsweringSimple.call.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>Numpy array</code> or <code>tf.Tensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMForQuestionAnsweringSimple.call.langs" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMForQuestionAnsweringSimple.call.langs"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>langs</strong> (<code>tf.Tensor</code> or <code>Numpy array</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — A parallel sequence of tokens to be used to indicate the language of each token in the input. Indices are languages ids which can be obtained from the language names by using two conversion mappings provided in the configuration of the model (only provided for multilingual models). 
More precisely, the <em>language name to language id</em> mapping is in <code>model.config.lang2id</code> (which is a dictionary string to int) and the <em>language id to language name</em> mapping is in <code>model.config.id2lang</code> (dictionary int to string).<p></p> <p>See usage examples detailed in the <a href="../multilingual">multilingual documentation</a>.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMForQuestionAnsweringSimple.call.token_type_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMForQuestionAnsweringSimple.call.token_type_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_type_ids</strong> (<code>Numpy array</code> or <code>tf.Tensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in <code>[0, 1]</code>:<p></p> <ul> <li>0 corresponds to a <em>sentence A</em> token,</li> <li>1 corresponds to a <em>sentence B</em> token.</li> </ul> <p><a href="../glossary#token-type-ids">What are token type IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMForQuestionAnsweringSimple.call.position_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMForQuestionAnsweringSimple.call.position_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>position_ids</strong> (<code>Numpy array</code> or <code>tf.Tensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Indices of positions of each input sequence tokens in the position embeddings. 
Selected in the range <code>[0, config.max_position_embeddings - 1]</code>.<p></p> <p><a href="../glossary#position-ids">What are position IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMForQuestionAnsweringSimple.call.lengths" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMForQuestionAnsweringSimple.call.lengths"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>lengths</strong> (<code>tf.Tensor</code> or <code>Numpy array</code> of shape <code>(batch_size,)</code>, <em>optional</em>) — Length of each sentence that can be used to avoid performing attention on padding token indices. You can also use <em>attention_mask</em> for the same result (see above), kept here for compatibility. Indices selected in <code>[0, ..., input_ids.size(-1)]</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMForQuestionAnsweringSimple.call.cache" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMForQuestionAnsweringSimple.call.cache"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>cache</strong> (<code>Dict[str, tf.Tensor]</code>, <em>optional</em>) — Dictionary string to <code>tf.Tensor</code> that contains precomputed hidden states (key and values in the attention blocks) as computed by the model (see <code>cache</code> output below). 
Can be used to speed up sequential decoding.<p></p> <p>The dictionary object will be modified in-place during the forward pass to add newly computed hidden-states.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMForQuestionAnsweringSimple.call.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMForQuestionAnsweringSimple.call.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>head_mask</strong> (<code>Numpy array</code> or <code>tf.Tensor</code> of shape <code>(num_heads,)</code> or <code>(num_layers, num_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the self-attention modules. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMForQuestionAnsweringSimple.call.inputs_embeds" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMForQuestionAnsweringSimple.call.inputs_embeds"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>inputs_embeds</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>, <em>optional</em>) — Optionally, instead of passing <code>input_ids</code> you can choose to directly pass an embedded representation. 
This is useful if you want more control over how to convert <code>input_ids</code> indices into associated vectors than the model’s internal embedding lookup matrix.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMForQuestionAnsweringSimple.call.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMForQuestionAnsweringSimple.call.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. See <code>attentions</code> under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMForQuestionAnsweringSimple.call.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMForQuestionAnsweringSimple.call.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. See <code>hidden_states</code> under returned tensors for more detail. 
This argument can be used only in eager mode, in graph mode the value in the config will be used instead.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMForQuestionAnsweringSimple.call.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMForQuestionAnsweringSimple.call.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMForQuestionAnsweringSimple.call.training" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMForQuestionAnsweringSimple.call.training"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>training</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMForQuestionAnsweringSimple.call.start_positions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMForQuestionAnsweringSimple.call.start_positions"><span><svg class="text-smd" 
xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>start_positions</strong> (<code>tf.Tensor</code> of shape <code>(batch_size,)</code>, <em>optional</em>) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (<code>sequence_length</code>). Position outside of the sequence are not taken into account for computing the loss.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMForQuestionAnsweringSimple.call.end_positions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMForQuestionAnsweringSimple.call.end_positions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>end_positions</strong> (<code>tf.Tensor</code> of shape <code>(batch_size,)</code>, <em>optional</em>) — Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (<code>sequence_length</code>). 
Position outside of the sequence are not taken into account for computing the loss.</span></span> </li></ul> <div id="transformers.TFXLMForQuestionAnsweringSimple.call.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput">transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput</a> or <code>tuple(tf.Tensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput">transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput</a> or a tuple of <code>tf.Tensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMConfig">XLMConfig</a>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, )</code>, <em>optional</em>, returned when <code>start_positions</code> and <code>end_positions</code> are provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.</p> </li> <li> <p><strong>start_logits</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, sequence_length)</code>) — Span-start scores (before SoftMax).</p> </li> <li> <p><strong>end_logits</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, sequence_length)</code>) — Span-end scores (before SoftMax).</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>tf.Tensor</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>tf.Tensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-5kqtcw">The <a href="/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.TFXLMForQuestionAnsweringSimple">TFXLMForQuestionAnsweringSimple</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div 
class="relative group rounded-md"><a id="transformers.TFXLMForQuestionAnsweringSimple.call.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMForQuestionAnsweringSimple.call.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, TFXLMForQuestionAnsweringSimple <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> tensorflow <span class="hljs-keyword">as</span> tf <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"xlm-mlm-en-2048"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = TFXLMForQuestionAnsweringSimple.from_pretrained(<span class="hljs-string">"xlm-mlm-en-2048"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>question, text = <span class="hljs-string">"Who was Jim Henson?"</span>, <span class="hljs-string">"Jim Henson was a nice puppet"</span> <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(question, text, return_tensors=<span class="hljs-string">"tf"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model(**inputs) <span class="hljs-meta">&gt;&gt;&gt; </span>answer_start_index = <span class="hljs-built_in">int</span>(tf.math.argmax(outputs.start_logits, axis=-<span class="hljs-number">1</span>)[<span 
class="hljs-number">0</span>]) <span class="hljs-meta">&gt;&gt;&gt; </span>answer_end_index = <span class="hljs-built_in">int</span>(tf.math.argmax(outputs.end_logits, axis=-<span class="hljs-number">1</span>)[<span class="hljs-number">0</span>]) <span class="hljs-meta">&gt;&gt;&gt; </span>predict_answer_tokens = inputs.input_ids[<span class="hljs-number">0</span>, answer_start_index : answer_end_index + <span class="hljs-number">1</span>]</pre></div></div> <div class="relative group rounded-md"><a id="transformers.TFXLMForQuestionAnsweringSimple.call.example-2" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMForQuestionAnsweringSimple.call.example-2"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># target is "nice puppet"</span> <span class="hljs-meta">&gt;&gt;&gt; </span>target_start_index = tf.constant([<span class="hljs-number">14</span>]) <span class="hljs-meta">&gt;&gt;&gt; </span>target_end_index = tf.constant([<span class="hljs-number">15</span>]) <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index) <span class="hljs-meta">&gt;&gt;&gt; </span>loss = tf.math.reduce_mean(outputs.loss)</pre></div></div></div></div> <p></p> <div id="svelte-announcer" aria-live="assertive" aria-atomic="true" style="position: absolute; left: 0px; top: 0px; clip: rect(0px, 0px, 0px, 0px); clip-path: inset(50%); overflow: hidden; 
2023-10-05T13:33:34.649Z
XLM-ProphetNet
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet
# XLM-ProphetNet

[![Models](https://img.shields.io/badge/All_model_pages-xprophetnet-blueviolet)](https://huggingface.co/models?filter=xprophetnet) [![Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/docs-demos/xprophetnet-large-wiki100-cased-xglue-ntg)

**DISCLAIMER:** If you see something strange, file a [Github Issue](https://github.com/huggingface/transformers/issues/new?assignees=&labels=&template=bug-report.md&title) and assign @patrickvonplaten.

## Overview

The XLM-ProphetNet model was proposed in [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang, Ming Zhou on 13 Jan, 2020.

XLM-ProphetNet is an encoder-decoder model that can predict n future tokens for “n-gram” language modeling instead of just the next token. Its architecture is identical to ProphetNet, but the model was trained on the multi-lingual “wiki100” Wikipedia dump.

The abstract from the paper is the following:

_In this paper, we present a new sequence-to-sequence pretraining model called ProphetNet, which introduces a novel self-supervised objective named future n-gram prediction and the proposed n-stream self-attention mechanism. Instead of the optimization of one-step ahead prediction in traditional sequence-to-sequence model, the ProphetNet is optimized by n-step ahead prediction which predicts the next n tokens simultaneously based on previous context tokens at each time step. The future n-gram prediction explicitly encourages the model to plan for the future tokens and prevent overfitting on strong local correlations. We pre-train ProphetNet using a base scale dataset (16GB) and a large scale dataset (160GB) respectively. Then we conduct experiments on CNN/DailyMail, Gigaword, and SQuAD 1.1 benchmarks for abstractive summarization and question generation tasks. Experimental results show that ProphetNet achieves new state-of-the-art results on all these datasets compared to the models using the same scale pretraining corpus._

The authors’ code can be found [here](https://github.com/microsoft/ProphetNet).

Tips:

- XLM-ProphetNet’s model architecture and pretraining objective are the same as ProphetNet’s, but XLM-ProphetNet was pre-trained on the cross-lingual dataset XGLUE.
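Because XLM-ProphetNet exposes the same sequence-to-sequence API as ProphetNet, a fine-tuned checkpoint can be used for generation through the standard `generate()` method. The sketch below is illustrative rather than part of the original documentation: it assumes the XGLUE news-title-generation checkpoint `microsoft/xprophetnet-large-wiki100-cased-xglue-ntg` (the checkpoint behind the Spaces demo linked above); any XLM-ProphetNet sequence-to-sequence checkpoint can be substituted.

```
>>> from transformers import AutoTokenizer, XLMProphetNetForConditionalGeneration

>>> # Load a fine-tuned XLM-ProphetNet checkpoint (here: XGLUE news title generation, assumed for illustration).
>>> checkpoint = "microsoft/xprophetnet-large-wiki100-cased-xglue-ntg"
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> model = XLMProphetNetForConditionalGeneration.from_pretrained(checkpoint)

>>> article = (
...     "The research team announced on Monday that the new telescope had captured "
...     "its first detailed images of a distant galaxy cluster."
... )
>>> input_ids = tokenizer(article, return_tensors="pt").input_ids

>>> # Standard beam-search generation.
>>> generated_ids = model.generate(input_ids, num_beams=4, max_length=32, early_stopping=True)
>>> print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0])
```

Beam search and the other standard decoding strategies apply unchanged.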
## Documentation resources

- [Causal language modeling task guide](../tasks/language_modeling)
- [Translation task guide](../tasks/translation)
- [Summarization task guide](../tasks/summarization)

## XLMProphetNetConfig

### class transformers.XLMProphetNetConfig [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_prophetnet/configuration_xlm_prophetnet.py#L33)

( activation\_dropout: typing.Optional\[float\] = 0.1, activation\_function: typing.Union\[str, typing.Callable, NoneType\] = 'gelu', vocab\_size: typing.Optional\[int\] = 30522, hidden\_size: typing.Optional\[int\] = 1024, encoder\_ffn\_dim: typing.Optional\[int\] = 4096, num\_encoder\_layers: typing.Optional\[int\] = 12, num\_encoder\_attention\_heads: typing.Optional\[int\] = 16, decoder\_ffn\_dim: typing.Optional\[int\] = 4096, num\_decoder\_layers: typing.Optional\[int\] = 12, num\_decoder\_attention\_heads: typing.Optional\[int\] = 16, attention\_dropout: typing.Optional\[float\] = 0.1, dropout: typing.Optional\[float\] = 0.1, max\_position\_embeddings: typing.Optional\[int\] = 512, init\_std: typing.Optional\[float\] = 0.02, is\_encoder\_decoder: typing.Optional\[bool\] = True, add\_cross\_attention: typing.Optional\[bool\] = True, decoder\_start\_token\_id: typing.Optional\[int\] = 0, ngram: typing.Optional\[int\] = 2, num\_buckets: typing.Optional\[int\] = 32, relative\_max\_distance: typing.Optional\[int\] = 128, disable\_ngram\_loss: typing.Optional\[bool\] = False, eps: typing.Optional\[float\] = 0.0, use\_cache: typing.Optional\[bool\] = True, pad\_token\_id: typing.Optional\[int\] = 0, bos\_token\_id: typing.Optional\[int\] = 1, eos\_token\_id: typing.Optional\[int\] = 2, \*\*kwargs )

This is the configuration class to store the configuration of an [XLMProphetNetModel](/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet#transformers.XLMProphetNetModel). It is used to instantiate an XLMProphetNet model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the XLMProphetNet [microsoft/xprophetnet-large-wiki100-cased](https://huggingface.co/microsoft/xprophetnet-large-wiki100-cased) architecture.

Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information.

## XLMProphetNetTokenizer

### class transformers.XLMProphetNetTokenizer [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_prophetnet/tokenization_xlm_prophetnet.py#L59)

( vocab\_file, bos\_token = '\[SEP\]', eos\_token = '\[SEP\]', sep\_token = '\[SEP\]', unk\_token = '\[UNK\]', pad\_token = '\[PAD\]', cls\_token = '\[CLS\]', mask\_token = '\[MASK\]', sp\_model\_kwargs: typing.Union\[typing.Dict\[str, typing.Any\], NoneType\] = None, \*\*kwargs )

Adapted from [RobertaTokenizer](/docs/transformers/v4.34.0/en/model_doc/roberta#transformers.RobertaTokenizer) and [XLNetTokenizer](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetTokenizer). Based on [SentencePiece](https://github.com/google/sentencepiece).

This tokenizer inherits from [PreTrainedTokenizer](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer) which contains most of the main methods.
Users should refer to this superclass for more information regarding those methods.

#### build\_inputs\_with\_special\_tokens [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_prophetnet/tokenization_xlm_prophetnet.py#L320)

( token\_ids\_0: typing.List\[int\], token\_ids\_1: typing.Optional\[typing.List\[int\]\] = None ) → `List[int]`

Parameters

- **token\_ids\_0** (`List[int]`) — List of IDs to which the special tokens will be added.
- **token\_ids\_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs.

Returns: a list of [input IDs](../glossary#input-ids) with the appropriate special tokens.

Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. An XLMProphetNet sequence has the following format:

- single sequence: `X [SEP]`
- pair of sequences: `A [SEP] B [SEP]`

#### convert\_tokens\_to\_string

Converts a sequence of tokens (strings for sub-words) into a single string.

#### create\_token\_type\_ids\_from\_sequences [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_prophetnet/tokenization_xlm_prophetnet.py#L247)

( token\_ids\_0: typing.List\[int\], token\_ids\_1: typing.Optional\[typing.List\[int\]\] = None ) → `List[int]`

Parameters

- **token\_ids\_0** (`List[int]`) — List of IDs.
- **token\_ids\_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs.

Returns: a list of zeros.

Create a mask from the two sequences passed to be used in a sequence-pair classification task. XLMProphetNet does not make use of token type ids, therefore a list of zeros is returned.

#### get\_special\_tokens\_mask [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_prophetnet/tokenization_xlm_prophetnet.py#L219)

( token\_ids\_0: typing.List\[int\], token\_ids\_1: typing.Optional\[typing.List\[int\]\] = None, already\_has\_special\_tokens: bool = False ) → `List[int]`

Parameters

- **token\_ids\_0** (`List[int]`) — List of IDs.
- **token\_ids\_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs.
- **already\_has\_special\_tokens** (`bool`, _optional_, defaults to `False`) — Whether or not the token list is already formatted with special tokens for the model.

Returns: a list of integers in the range \[0, 1\]: 1 for a special token, 0 for a sequence token.

Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer `prepare_for_model` method.

## XLMProphetNetModel

### class transformers.XLMProphetNetModel [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_prophetnet/modeling_xlm_prophetnet.py#L1770)

( config: XLMProphetNetConfig )

Parameters

- **config** ([XLMProphetNetConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet#transformers.XLMProphetNetConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

The bare XLMProphetNet Model outputting raw hidden-states without any specific head on top. This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel).
Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) Original ProphetNet code can be found [here](https://github.com/microsoft/ProphetNet). Checkpoints were converted from original Fairseq checkpoints. For more information on the checkpoint conversion, please take a look at the file `convert_prophetnet_original_pytorch_checkpoint_to_pytorch.py`. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_prophetnet/modeling_xlm_prophetnet.py#L1804) ( input\_ids: typing.Optional\[torch.Tensor\] = Noneattention\_mask: typing.Optional\[torch.Tensor\] = Nonedecoder\_input\_ids: typing.Optional\[torch.Tensor\] = Nonedecoder\_attention\_mask: typing.Optional\[torch.BoolTensor\] = Nonehead\_mask: typing.Optional\[torch.Tensor\] = Nonedecoder\_head\_mask: typing.Optional\[torch.Tensor\] = Nonecross\_attn\_head\_mask: typing.Optional\[torch.Tensor\] = Noneencoder\_outputs: typing.Optional\[typing.Tuple\] = Nonepast\_key\_values: typing.Optional\[typing.Tuple\[typing.Tuple\[torch.Tensor\]\]\] = Noneinputs\_embeds: typing.Optional\[torch.Tensor\] = Nonedecoder\_inputs\_embeds: typing.Optional\[torch.Tensor\] = Noneuse\_cache: typing.Optional\[bool\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = None ) → `transformers.models.xlm_prophetnet.modeling_xlm_prophetnet.XLMProphetNetSeq2SeqModelOutput` or `tuple(torch.FloatTensor)` The [XLMProphetNetModel](/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet#transformers.XLMProphetNetModel) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: ``` >>> from transformers import AutoTokenizer, XLMProphetNetModel >>> tokenizer = AutoTokenizer.from_pretrained("patrickvonplaten/xprophetnet-large-uncased-standalone") >>> model = XLMProphetNetModel.from_pretrained("patrickvonplaten/xprophetnet-large-uncased-standalone") >>> input_ids = tokenizer( ... "Studies have been shown that owning a dog is good for you", return_tensors="pt" ... ).input_ids >>> decoder_input_ids = tokenizer("Studies show that", return_tensors="pt").input_ids >>> outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids) >>> last_hidden_states = outputs.last_hidden_state >>> last_hidden_states_ngram = outputs.last_hidden_state_ngram ``` ## XLMProphetNetEncoder ### class transformers.XLMProphetNetEncoder [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_prophetnet/modeling_xlm_prophetnet.py#L1252) ( config: XLMProphetNetConfigword\_embeddings: Embedding = None ) Parameters - **config** ([XLMProphetNetConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet#transformers.XLMProphetNetConfig)) — Model configuration class with all the parameters of the model. 
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. The standalone encoder part of the XLMProphetNetModel. This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) Original ProphetNet code can be found [here](https://github.com/microsoft/ProphetNet). Checkpoints were converted from original Fairseq checkpoints. For more information on the checkpoint conversion, please take a look at the file `convert_prophetnet_original_pytorch_checkpoint_to_pytorch.py`. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. word\_embeddings (`torch.nn.Embeddings` of shape `(config.vocab_size, config.hidden_size)`, _optional_): The word embedding parameters. This can be used to initialize [XLMProphetNetEncoder](/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet#transformers.XLMProphetNetEncoder) with pre-defined word embeddings instead of randomly initialized word embeddings. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_prophetnet/modeling_xlm_prophetnet.py#L1282) ( input\_ids: typing.Optional\[torch.Tensor\] = Noneattention\_mask: typing.Optional\[torch.Tensor\] = Nonehead\_mask: typing.Optional\[torch.Tensor\] = Noneinputs\_embeds: typing.Optional\[torch.Tensor\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.BaseModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutput) or `tuple(torch.FloatTensor)` The [XLMProphetNetEncoder](/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet#transformers.XLMProphetNetEncoder) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example:

```
>>> from transformers import AutoTokenizer, XLMProphetNetEncoder
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("patrickvonplaten/xprophetnet-large-uncased-standalone")
>>> model = XLMProphetNetEncoder.from_pretrained("patrickvonplaten/xprophetnet-large-uncased-standalone")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)

>>> last_hidden_states = outputs.last_hidden_state
```

## XLMProphetNetDecoder

### class transformers.XLMProphetNetDecoder [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_prophetnet/modeling_xlm_prophetnet.py#L1393)

( config: XLMProphetNetConfig, word\_embeddings: typing.Optional\[torch.nn.modules.sparse.Embedding\] = None )

Parameters

- **config** ([XLMProphetNetConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet#transformers.XLMProphetNetConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

The standalone decoder part of the XLMProphetNetModel. This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)

Original ProphetNet code can be found [here](https://github.com/microsoft/ProphetNet). Checkpoints were converted from original Fairseq checkpoints. For more information on the checkpoint conversion, please take a look at the file `convert_prophetnet_original_pytorch_checkpoint_to_pytorch.py`.

This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

word\_embeddings (`torch.nn.Embedding` of shape `(config.vocab_size, config.hidden_size)`, _optional_): The word embedding parameters. This can be used to initialize [XLMProphetNetDecoder](/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet#transformers.XLMProphetNetDecoder) with pre-defined word embeddings instead of randomly initialized word embeddings.
#### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_prophetnet/modeling_xlm_prophetnet.py#L1430) ( input\_ids: typing.Optional\[torch.Tensor\] = Noneattention\_mask: typing.Optional\[torch.Tensor\] = Noneencoder\_hidden\_states: typing.Optional\[torch.Tensor\] = Noneencoder\_attention\_mask: typing.Optional\[torch.Tensor\] = Nonehead\_mask: typing.Optional\[torch.Tensor\] = Nonecross\_attn\_head\_mask: typing.Optional\[torch.Tensor\] = Nonepast\_key\_values: typing.Optional\[typing.Tuple\[typing.Tuple\[torch.Tensor\]\]\] = Noneinputs\_embeds: typing.Optional\[torch.Tensor\] = Noneuse\_cache: typing.Optional\[bool\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = None ) → `transformers.models.xlm_prophetnet.modeling_xlm_prophetnet.XLMProphetNetDecoderModelOutput` or `tuple(torch.FloatTensor)` The [XLMProphetNetDecoder](/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet#transformers.XLMProphetNetDecoder) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: ``` >>> from transformers import AutoTokenizer, XLMProphetNetDecoder >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("patrickvonplaten/xprophetnet-large-uncased-standalone") >>> model = XLMProphetNetDecoder.from_pretrained( ... "patrickvonplaten/xprophetnet-large-uncased-standalone", add_cross_attention=False ... ) >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state ``` ## XLMProphetNetForConditionalGeneration ### class transformers.XLMProphetNetForConditionalGeneration [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_prophetnet/modeling_xlm_prophetnet.py#L1900) ( config: XLMProphetNetConfig ) Parameters - **config** ([XLMProphetNetConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet#transformers.XLMProphetNetConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. The XLMProphetNet Model with a language modeling head. Can be used for sequence generation tasks. This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) Original ProphetNet code can be found [here](https://github.com/microsoft/ProphetNet). Checkpoints were converted from original Fairseq checkpoints. For more information on the checkpoint conversion, please take a look at the file `convert_prophetnet_original_pytorch_checkpoint_to_pytorch.py`. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. 
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_prophetnet/modeling_xlm_prophetnet.py#L1923) ( input\_ids: typing.Optional\[torch.Tensor\] = Noneattention\_mask: typing.Optional\[torch.Tensor\] = Nonedecoder\_input\_ids: typing.Optional\[torch.Tensor\] = Nonedecoder\_attention\_mask: typing.Optional\[torch.BoolTensor\] = Nonehead\_mask: typing.Optional\[torch.Tensor\] = Nonedecoder\_head\_mask: typing.Optional\[torch.Tensor\] = Nonecross\_attn\_head\_mask: typing.Optional\[torch.Tensor\] = Noneencoder\_outputs: typing.Optional\[torch.Tensor\] = Nonepast\_key\_values: typing.Optional\[typing.Tuple\[typing.Tuple\[torch.Tensor\]\]\] = Noneinputs\_embeds: typing.Optional\[torch.Tensor\] = Nonedecoder\_inputs\_embeds: typing.Optional\[torch.Tensor\] = Nonelabels: typing.Optional\[torch.Tensor\] = Noneuse\_cache: typing.Optional\[bool\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = None ) → `transformers.models.xlm_prophetnet.modeling_xlm_prophetnet.XLMProphetNetSeq2SeqLMOutput` or `tuple(torch.FloatTensor)` The [XLMProphetNetForConditionalGeneration](/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet#transformers.XLMProphetNetForConditionalGeneration) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: ``` >>> from transformers import AutoTokenizer, XLMProphetNetForConditionalGeneration >>> tokenizer = AutoTokenizer.from_pretrained("patrickvonplaten/xprophetnet-large-uncased-standalone") >>> model = XLMProphetNetForConditionalGeneration.from_pretrained( ... "patrickvonplaten/xprophetnet-large-uncased-standalone" ... ) >>> input_ids = tokenizer( ... "Studies have been shown that owning a dog is good for you", return_tensors="pt" ... ).input_ids >>> decoder_input_ids = tokenizer("Studies show that", return_tensors="pt").input_ids >>> outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids) >>> logits_next_token = outputs.logits >>> logits_ngram_next_tokens = outputs.logits_ngram ``` ## XLMProphetNetForCausalLM ### class transformers.XLMProphetNetForCausalLM [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_prophetnet/modeling_xlm_prophetnet.py#L2116) ( config: XLMProphetNetConfig ) Parameters - **config** ([XLMProphetNetConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet#transformers.XLMProphetNetConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. The standalone decoder part of the XLMProphetNetModel with a lm head on top. The model can be used for causal language modeling. This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). 
Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) Original ProphetNet code can be found [here](https://github.com/microsoft/ProphetNet). Checkpoints were converted from original Fairseq checkpoints. For more information on the checkpoint conversion, please take a look at the file `convert_prophetnet_original_pytorch_checkpoint_to_pytorch.py`. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_prophetnet/modeling_xlm_prophetnet.py#L2153) ( input\_ids: typing.Optional\[torch.Tensor\] = Noneattention\_mask: typing.Optional\[torch.Tensor\] = Noneencoder\_hidden\_states: typing.Optional\[torch.Tensor\] = Noneencoder\_attention\_mask: typing.Optional\[torch.Tensor\] = Nonehead\_mask: typing.Optional\[torch.Tensor\] = Nonecross\_attn\_head\_mask: typing.Optional\[torch.Tensor\] = Nonepast\_key\_values: typing.Optional\[typing.Tuple\[typing.Tuple\[torch.Tensor\]\]\] = Noneinputs\_embeds: typing.Optional\[torch.Tensor\] = Nonelabels: typing.Optional\[torch.Tensor\] = Noneuse\_cache: typing.Optional\[bool\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = None ) → `transformers.models.xlm_prophetnet.modeling_xlm_prophetnet.XLMProphetNetDecoderLMOutput` or `tuple(torch.FloatTensor)` The [XLMProphetNetForCausalLM](/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet#transformers.XLMProphetNetForCausalLM) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: ``` >>> from transformers import AutoTokenizer, XLMProphetNetForCausalLM >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("patrickvonplaten/xprophetnet-large-uncased-standalone") >>> model = XLMProphetNetForCausalLM.from_pretrained("patrickvonplaten/xprophetnet-large-uncased-standalone") >>> assert model.config.is_decoder, f"{model.__class__} has to be configured as a decoder." >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> logits = outputs.logits >>> >>> from transformers import BertTokenizer, EncoderDecoderModel, AutoTokenizer >>> import torch >>> tokenizer_enc = BertTokenizer.from_pretrained("bert-large-uncased") >>> tokenizer_dec = AutoTokenizer.from_pretrained("patrickvonplaten/xprophetnet-large-uncased-standalone") >>> model = EncoderDecoderModel.from_encoder_decoder_pretrained( ... "bert-large-uncased", "patrickvonplaten/xprophetnet-large-uncased-standalone" ... ) >>> ARTICLE = ( ... "the us state department said wednesday it had received no " ... "formal word from bolivia that it was expelling the us ambassador there " ... "but said the charges made against him are `` baseless ." ... ) >>> input_ids = tokenizer_enc(ARTICLE, return_tensors="pt").input_ids >>> labels = tokenizer_dec( ... "us rejects charges against its ambassador in bolivia", return_tensors="pt" ... 
).input_ids >>> outputs = model(input_ids=input_ids, decoder_input_ids=labels[:, :-1], labels=labels[:, 1:]) >>> loss = outputs.loss ```
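The loss returned above is a regular PyTorch scalar, so the BERT encoder / XLM-ProphetNet decoder pair can be fine-tuned with an ordinary training loop. Below is a minimal sketch of a single optimization step, reusing `model`, `input_ids` and `labels` from the example above; the optimizer and learning rate are illustrative choices, not part of the original example.

```
>>> from torch.optim import AdamW

>>> # One gradient step on the encoder-decoder pair built above.
>>> optimizer = AdamW(model.parameters(), lr=1e-5)
>>> outputs = model(input_ids=input_ids, decoder_input_ids=labels[:, :-1], labels=labels[:, 1:])
>>> outputs.loss.backward()
>>> optimizer.step()
>>> optimizer.zero_grad()
```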
models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALBERT&quot;,&quot;id&quot;:&quot;model_doc/albert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/albert&quot;},{&quot;title&quot;:&quot;BART&quot;,&quot;id&quot;:&quot;model_doc/bart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bart&quot;},{&quot;title&quot;:&quot;BARThez&quot;,&quot;id&quot;:&quot;model_doc/barthez&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/barthez&quot;},{&quot;title&quot;:&quot;BARTpho&quot;,&quot;id&quot;:&quot;model_doc/bartpho&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bartpho&quot;},{&quot;title&quot;:&quot;BERT&quot;,&quot;id&quot;:&quot;model_doc/bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert&quot;},{&quot;title&quot;:&quot;BertGeneration&quot;,&quot;id&quot;:&quot;model_doc/bert-generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-generation&quot;},{&quot;title&quot;:&quot;BertJapanese&quot;,&quot;id&quot;:&quot;model_doc/bert-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-japanese&quot;},{&quot;title&quot;:&quot;Bertweet&quot;,&quot;id&quot;:&quot;model_doc/bertweet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bertweet&quot;},{&quot;title&quot;:&quot;BigBird&quot;,&quot;id&quot;:&quot;model_doc/big_bird&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/big_bird&quot;},{&quot;title&quot;:&quot;BigBirdPegasus&quot;,&quot;id&quot;:&quot;model_doc/bigbird_pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bigbird_pegasus&quot;},{&quot;title&quot;:&quot;BioGpt&quot;,&quot;id&quot;:&quot;model_doc/biogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/biogpt&quot;},{&quot;title&quot;:&quot;Blenderbot&quot;,&quot;id&quot;:&quot;model_doc/blenderbot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot&quot;},{&quot;title&quot;:&quot;Blenderbot 
Small&quot;,&quot;id&quot;:&quot;model_doc/blenderbot-small&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot-small&quot;},{&quot;title&quot;:&quot;BLOOM&quot;,&quot;id&quot;:&quot;model_doc/bloom&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bloom&quot;},{&quot;title&quot;:&quot;BORT&quot;,&quot;id&quot;:&quot;model_doc/bort&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bort&quot;},{&quot;title&quot;:&quot;ByT5&quot;,&quot;id&quot;:&quot;model_doc/byt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/byt5&quot;},{&quot;title&quot;:&quot;CamemBERT&quot;,&quot;id&quot;:&quot;model_doc/camembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/camembert&quot;},{&quot;title&quot;:&quot;CANINE&quot;,&quot;id&quot;:&quot;model_doc/canine&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/canine&quot;},{&quot;title&quot;:&quot;CodeGen&quot;,&quot;id&quot;:&quot;model_doc/codegen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/codegen&quot;},{&quot;title&quot;:&quot;CodeLlama&quot;,&quot;id&quot;:&quot;model_doc/code_llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/code_llama&quot;},{&quot;title&quot;:&quot;ConvBERT&quot;,&quot;id&quot;:&quot;model_doc/convbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convbert&quot;},{&quot;title&quot;:&quot;CPM&quot;,&quot;id&quot;:&quot;model_doc/cpm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpm&quot;},{&quot;title&quot;:&quot;CPMANT&quot;,&quot;id&quot;:&quot;model_doc/cpmant&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpmant&quot;},{&quot;title&quot;:&quot;CTRL&quot;,&quot;id&quot;:&quot;model_doc/ctrl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ctrl&quot;},{&quot;title&quot;:&quot;DeBERTa&quot;,&quot;id&quot;:&quot;model_doc/deberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta&quot;},{&quot;title&quot;:&quot;DeBERTa-v2&quot;,&quot;id&quot;:&quot;model_doc/deberta-v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta-v2&quot;},{&quot;title&quot;:&quot;DialoGPT&quot;,&quot;id&quot;:&quot;model_doc/dialogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dialogpt&quot;},{&quot;title&quot;:&quot;DistilBERT&quot;,&quot;id&quot;:&quot;model_doc/distilbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/distilbert&quot;},{&quot;title&quot;:&quot;DPR&quot;,&quot;id&quot;:&quot;model_doc/dpr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpr&quot;},{&quot;title&quot;:&quot;ELECTRA&quot;,&quot;id&quot;:&quot;model_doc/electra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/electra&quot;},{&quot;title&quot;:&quot;Encoder Decoder 
Models&quot;,&quot;id&quot;:&quot;model_doc/encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encoder-decoder&quot;},{&quot;title&quot;:&quot;ERNIE&quot;,&quot;id&quot;:&quot;model_doc/ernie&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie&quot;},{&quot;title&quot;:&quot;ErnieM&quot;,&quot;id&quot;:&quot;model_doc/ernie_m&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie_m&quot;},{&quot;title&quot;:&quot;ESM&quot;,&quot;id&quot;:&quot;model_doc/esm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/esm&quot;},{&quot;title&quot;:&quot;Falcon&quot;,&quot;id&quot;:&quot;model_doc/falcon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/falcon&quot;},{&quot;title&quot;:&quot;FLAN-T5&quot;,&quot;id&quot;:&quot;model_doc/flan-t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-t5&quot;},{&quot;title&quot;:&quot;FLAN-UL2&quot;,&quot;id&quot;:&quot;model_doc/flan-ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-ul2&quot;},{&quot;title&quot;:&quot;FlauBERT&quot;,&quot;id&quot;:&quot;model_doc/flaubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flaubert&quot;},{&quot;title&quot;:&quot;FNet&quot;,&quot;id&quot;:&quot;model_doc/fnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fnet&quot;},{&quot;title&quot;:&quot;FSMT&quot;,&quot;id&quot;:&quot;model_doc/fsmt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fsmt&quot;},{&quot;title&quot;:&quot;Funnel Transformer&quot;,&quot;id&quot;:&quot;model_doc/funnel&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/funnel&quot;},{&quot;title&quot;:&quot;GPT&quot;,&quot;id&quot;:&quot;model_doc/openai-gpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/openai-gpt&quot;},{&quot;title&quot;:&quot;GPT Neo&quot;,&quot;id&quot;:&quot;model_doc/gpt_neo&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neo&quot;},{&quot;title&quot;:&quot;GPT NeoX&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox&quot;},{&quot;title&quot;:&quot;GPT NeoX Japanese&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox_japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese&quot;},{&quot;title&quot;:&quot;GPT-J&quot;,&quot;id&quot;:&quot;model_doc/gptj&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptj&quot;},{&quot;title&quot;:&quot;GPT2&quot;,&quot;id&quot;:&quot;model_doc/gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt2&quot;},{&quot;title&quot;:&quot;GPTBigCode&quot;,&quot;id&quot;:&quot;model_doc/gpt_bigcode&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode&quot;},{&quot;title&quot;:&quot;GPTSAN 
Japanese&quot;,&quot;id&quot;:&quot;model_doc/gptsan-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese&quot;},{&quot;title&quot;:&quot;GPTSw3&quot;,&quot;id&quot;:&quot;model_doc/gpt-sw3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt-sw3&quot;},{&quot;title&quot;:&quot;HerBERT&quot;,&quot;id&quot;:&quot;model_doc/herbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/herbert&quot;},{&quot;title&quot;:&quot;I-BERT&quot;,&quot;id&quot;:&quot;model_doc/ibert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ibert&quot;},{&quot;title&quot;:&quot;Jukebox&quot;,&quot;id&quot;:&quot;model_doc/jukebox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/jukebox&quot;},{&quot;title&quot;:&quot;LED&quot;,&quot;id&quot;:&quot;model_doc/led&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/led&quot;},{&quot;title&quot;:&quot;LLaMA&quot;,&quot;id&quot;:&quot;model_doc/llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama&quot;},{&quot;title&quot;:&quot;Llama2&quot;,&quot;id&quot;:&quot;model_doc/llama2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama2&quot;},{&quot;title&quot;:&quot;Longformer&quot;,&quot;id&quot;:&quot;model_doc/longformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longformer&quot;},{&quot;title&quot;:&quot;LongT5&quot;,&quot;id&quot;:&quot;model_doc/longt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longt5&quot;},{&quot;title&quot;:&quot;LUKE&quot;,&quot;id&quot;:&quot;model_doc/luke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/luke&quot;},{&quot;title&quot;:&quot;M2M100&quot;,&quot;id&quot;:&quot;model_doc/m2m_100&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/m2m_100&quot;},{&quot;title&quot;:&quot;MarianMT&quot;,&quot;id&quot;:&quot;model_doc/marian&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/marian&quot;},{&quot;title&quot;:&quot;MarkupLM&quot;,&quot;id&quot;:&quot;model_doc/markuplm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/markuplm&quot;},{&quot;title&quot;:&quot;MBart and 
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot;PLBart&quot;,&quot;id&quot;
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/
model_doc/wavlm&quot;},{&quot;title&quot;:&quot;Whisper&quot;,&quot;id&quot;:&quot;model_doc/whisper&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/whisper&quot;},{&quot;title&quot;:&quot;XLS-R&quot;,&quot;id&quot;:&quot;model_doc/xls_r&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xls_r&quot;},{&quot;title&quot;:&quot;XLSR-Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/xlsr_wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2&quot;}]},{&quot;title&quot;:&quot;Multimodal models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALIGN&quot;,&quot;id&quot;:&quot;model_doc/align&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/align&quot;},{&quot;title&quot;:&quot;AltCLIP&quot;,&quot;id&quot;:&quot;model_doc/altclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/altclip&quot;},{&quot;title&quot;:&quot;BLIP&quot;,&quot;id&quot;:&quot;model_doc/blip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip&quot;},{&quot;title&quot;:&quot;BLIP-2&quot;,&quot;id&quot;:&quot;model_doc/blip-2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip-2&quot;},{&quot;title&quot;:&quot;BridgeTower&quot;,&quot;id&quot;:&quot;model_doc/bridgetower&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bridgetower&quot;},{&quot;title&quot;:&quot;BROS&quot;,&quot;id&quot;:&quot;model_doc/bros&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bros&quot;},{&quot;title&quot;:&quot;Chinese-CLIP&quot;,&quot;id&quot;:&quot;model_doc/chinese_clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/chinese_clip&quot;},{&quot;title&quot;:&quot;CLIP&quot;,&quot;id&quot;:&quot;model_doc/clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clip&quot;},{&quot;title&quot;:&quot;CLIPSeg&quot;,&quot;id&quot;:&quot;model_doc/clipseg&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clipseg&quot;},{&quot;title&quot;:&quot;Data2Vec&quot;,&quot;id&quot;:&quot;model_doc/data2vec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/data2vec&quot;},{&quot;title&quot;:&quot;DePlot&quot;,&quot;id&quot;:&quot;model_doc/deplot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deplot&quot;},{&quot;title&quot;:&quot;Donut&quot;,&quot;id&quot;:&quot;model_doc/donut&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/donut&quot;},{&quot;title&quot;:&quot;FLAVA&quot;,&quot;id&quot;:&quot;model_doc/flava&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flava&quot;},{&quot;title&quot;:&quot;GIT&quot;,&quot;id&quot;:&quot;model_doc/git&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/git&quot;},{&quot;title&quot;:&quot;GroupViT&quot;,&quot;id&quot;:&quot;model_doc/groupvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/groupvit&quot;},{&quot;title&quot;:&quot;IDEFICS&quot;,&quot;id&quot;:&quot;model_doc/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/idefics&quot;},{&quot;title&quot;:&quot;InstructBLIP&quot;,&quot;id&quot;:&quot;model_doc/instructblip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/instructblip&quot;},{&quot;title&quot;:&quot;LayoutLM&quot;,&quot;id&quot;:&quot;model_doc/layoutlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlm&quot;},{&quot;title&quot;:&quot;LayoutLMV2&quot;,&quot;i
d&quot;:&quot;model_doc/layoutlmv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv2&quot;},{&quot;title&quot;:&quot;LayoutLMV3&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv3&quot;},{&quot;title&quot;:&quot;LayoutXLM&quot;,&quot;id&quot;:&quot;model_doc/layoutxlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutxlm&quot;},{&quot;title&quot;:&quot;LiLT&quot;,&quot;id&quot;:&quot;model_doc/lilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lilt&quot;},{&quot;title&quot;:&quot;LXMERT&quot;,&quot;id&quot;:&quot;model_doc/lxmert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lxmert&quot;},{&quot;title&quot;:&quot;MatCha&quot;,&quot;id&quot;:&quot;model_doc/matcha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/matcha&quot;},{&quot;title&quot;:&quot;MGP-STR&quot;,&quot;id&quot;:&quot;model_doc/mgp-str&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mgp-str&quot;},{&quot;title&quot;:&quot;Nougat&quot;,&quot;id&quot;:&quot;model_doc/nougat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nougat&quot;},{&quot;title&quot;:&quot;OneFormer&quot;,&quot;id&quot;:&quot;model_doc/oneformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/oneformer&quot;},{&quot;title&quot;:&quot;OWL-ViT&quot;,&quot;id&quot;:&quot;model_doc/owlvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/owlvit&quot;},{&quot;title&quot;:&quot;Perceiver&quot;,&quot;id&quot;:&quot;model_doc/perceiver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/perceiver&quot;},{&quot;title&quot;:&quot;Pix2Struct&quot;,&quot;id&quot;:&quot;model_doc/pix2struct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pix2struct&quot;},{&quot;title&quot;:&quot;Segment Anything&quot;,&quot;id&quot;:&quot;model_doc/sam&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sam&quot;},{&quot;title&quot;:&quot;Speech Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/speech-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder&quot;},{&quot;title&quot;:&quot;TAPAS&quot;,&quot;id&quot;:&quot;model_doc/tapas&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapas&quot;},{&quot;title&quot;:&quot;TrOCR&quot;,&quot;id&quot;:&quot;model_doc/trocr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trocr&quot;},{&quot;title&quot;:&quot;TVLT&quot;,&quot;id&quot;:&quot;model_doc/tvlt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tvlt&quot;},{&quot;title&quot;:&quot;ViLT&quot;,&quot;id&quot;:&quot;model_doc/vilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vilt&quot;},{&quot;title&quot;:&quot;Vision Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/vision-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder&quot;},{&quot;title&quot;:&quot;Vision Text Dual 
Encoder&quot;,&quot;id&quot;:&quot;model_doc/vision-text-dual-encoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder&quot;},{&quot;title&quot;:&quot;VisualBERT&quot;,&quot;id&quot;:&quot;model_doc/visual_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/visual_bert&quot;},{&quot;title&quot;:&quot;X-CLIP&quot;,&quot;id&quot;:&quot;model_doc/xclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xclip&quot;}]},{&quot;title&quot;:&quot;Reinforcement learning models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Decision Transformer&quot;,&quot;id&quot;:&quot;model_doc/decision_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/decision_transformer&quot;},{&quot;title&quot;:&quot;Trajectory Transformer&quot;,&quot;id&quot;:&quot;model_doc/trajectory_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer&quot;}]},{&quot;title&quot;:&quot;Time series models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Autoformer&quot;,&quot;id&quot;:&quot;model_doc/autoformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/autoformer&quot;},{&quot;title&quot;:&quot;Informer&quot;,&quot;id&quot;:&quot;model_doc/informer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/informer&quot;},{&quot;title&quot;:&quot;Time Series Transformer&quot;,&quot;id&quot;:&quot;model_doc/time_series_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/time_series_transformer&quot;}]},{&quot;title&quot;:&quot;Graph models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Graphormer&quot;,&quot;id&quot;:&quot;model_doc/graphormer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/graphormer&quot;}]}]},{&quot;title&quot;:&quot;Internal Helpers&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Custom Layers and Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/modeling_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/modeling_utils&quot;},{&quot;title&quot;:&quot;Utilities for pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/pipelines_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/pipelines_utils&quot;},{&quot;title&quot;:&quot;Utilities for Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/tokenization_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/tokenization_utils&quot;},{&quot;title&quot;:&quot;Utilities for Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/trainer_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/trainer_utils&quot;},{&quot;title&quot;:&quot;Utilities for Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/generation_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/generation_utils&quot;},{&quot;title&quot;:&quot;Utilities for Image Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/image_processing_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/image_processing_utils&quot;},{&quot;title&quot;:&quot;Utilities for Audio 
processing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/audio_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/audio_utils&quot;},{&quot;title&quot;:&quot;General Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/file_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/file_utils&quot;},{&quot;title&quot;:&quot;Utilities for Time Series&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/time_series_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/time_series_utils&quot;}]}]}],&quot;chapterId&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;docType&quot;:&quot;docs&quot;,&quot;isLoggedIn&quot;:false,&quot;lang&quot;:&quot;en&quot;,&quot;langs&quot;:[&quot;de&quot;,&quot;en&quot;,&quot;es&quot;,&quot;fr&quot;,&quot;it&quot;,&quot;ko&quot;,&quot;pt&quot;,&quot;zh&quot;],&quot;library&quot;:&quot;transformers&quot;,&quot;theme&quot;:&quot;light&quot;,&quot;version&quot;:&quot;v4.34.0&quot;,&quot;versions&quot;:[{&quot;version&quot;:&quot;main&quot;},{&quot;version&quot;:&quot;v4.34.0&quot;},{&quot;version&quot;:&quot;v4.33.3&quot;},{&quot;version&quot;:&quot;v4.33.2&quot;},{&quot;version&quot;:&quot;v4.33.0&quot;},{&quot;version&quot;:&quot;v4.32.1&quot;},{&quot;version&quot;:&quot;v4.32.0&quot;},{&quot;version&quot;:&quot;v4.31.0&quot;},{&quot;version&quot;:&quot;v4.30.0&quot;},{&quot;version&quot;:&quot;v4.29.1&quot;},{&quot;version&quot;:&quot;v4.29.0&quot;},{&quot;version&quot;:&quot;v4.28.1&quot;},{&quot;version&quot;:&quot;v4.28.0&quot;},{&quot;version&quot;:&quot;v4.27.2&quot;},{&quot;version&quot;:&quot;v4.27.1&quot;},{&quot;version&quot;:&quot;v4.27.0&quot;},{&quot;version&quot;:&quot;v4.26.1&quot;},{&quot;version&quot;:&quot;v4.26.0&quot;},{&quot;version&quot;:&quot;v4.25.1&quot;},{&quot;version&quot;:&quot;v4.24.0&quot;},{&quot;version&quot;:&quot;v4.23.1&quot;},{&quot;version&quot;:&quot;v4.23.0&quot;},{&quot;version&quot;:&quot;v4.22.2&quot;},{&quot;version&quot;:&quot;v4.22.1&quot;},{&quot;version&quot;:&quot;v4.22.0&quot;},{&quot;version&quot;:&quot;v4.21.3&quot;},{&quot;version&quot;:&quot;v4.21.2&quot;},{&quot;version&quot;:&quot;v4.21.1&quot;},{&quot;version&quot;:&quot;v4.21.0&quot;},{&quot;version&quot;:&quot;v4.20.1&quot;},{&quot;version&quot;:&quot;v4.20.0&quot;},{&quot;version&quot;:&quot;v4.19.4&quot;},{&quot;version&quot;:&quot;v4.19.3&quot;},{&quot;version&quot;:&quot;v4.19.2&quot;},{&quot;version&quot;:&quot;v4.19.0&quot;},{&quot;version&quot;:&quot;v4.18.0&quot;},{&quot;version&quot;:&quot;v4.17.0&quot;},{&quot;version&quot;:&quot;v4.16.2&quot;},{&quot;version&quot;:&quot;v4.16.1&quot;},{&quot;version&quot;:&quot;v4.16.0&quot;},{&quot;version&quot;:&quot;v4.15.0&quot;},{&quot;version&quot;:&quot;v4.14.1&quot;},{&quot;version&quot;:&quot;v4.13.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.5&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.4&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.10.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&
quot;v4.10.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.10.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.0.0&quot;},{&quot;version&quot;:&q
# XLM-ProphetNet

[![Models](https://img.shields.io/badge/All_model_pages-xprophetnet-blueviolet)](https://huggingface.co/models?filter=xprophetnet) [![Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/docs-demos/xprophetnet-large-wiki100-cased-xglue-ntg)

**DISCLAIMER:** If you see something strange, file a [Github Issue](https://github.com/huggingface/transformers/issues/new?assignees=&labels=&template=bug-report.md&title) and assign @patrickvonplaten.

## Overview

The XLM-ProphetNet model was proposed in [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang, Ming Zhou on 13 Jan, 2020.
data-svelte-h="svelte-jvmxz0">XLM-ProphetNet is an encoder-decoder model and can predict n-future tokens for “ngram” language modeling instead of just the next token. Its architecture is identical to ProhpetNet, but the model was trained on the multi-lingual “wiki100” Wikipedia dump.</p> <p data-svelte-h="svelte-vfdo9a">The abstract from the paper is the following:</p> <p data-svelte-h="svelte-1jvtdli"><em>In this paper, we present a new sequence-to-sequence pretraining model called ProphetNet, which introduces a novel self-supervised objective named future n-gram prediction and the proposed n-stream self-attention mechanism. Instead of the optimization of one-step ahead prediction in traditional sequence-to-sequence model, the ProphetNet is optimized by n-step ahead prediction which predicts the next n tokens simultaneously based on previous context tokens at each time step. The future n-gram prediction explicitly encourages the model to plan for the future tokens and prevent overfitting on strong local correlations. We pre-train ProphetNet using a base scale dataset (16GB) and a large scale dataset (160GB) respectively. Then we conduct experiments on CNN/DailyMail, Gigaword, and SQuAD 1.1 benchmarks for abstractive summarization and question generation tasks. Experimental results show that ProphetNet achieves new state-of-the-art results on all these datasets compared to the models using the same scale pretraining corpus.</em></p> <p data-svelte-h="svelte-mvxxnf">The Authors’ code can be found <a href="https://github.com/microsoft/ProphetNet" rel="nofollow">here</a>.</p> <p data-svelte-h="svelte-axv494">Tips:</p> <ul data-svelte-h="svelte-imprc0"><li>XLM-ProphetNet’s model architecture and pretraining objective is same as ProphetNet, but XLM-ProphetNet was pre-trained on the cross-lingual dataset XGLUE.</li></ul> <h2 class="relative group"><a id="documentation-resources" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#documentation-resources"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-n3f0j0">Documentation resources</span></h2> <ul data-svelte-h="svelte-jwyjs9"><li><a href="../tasks/language_modeling">Causal language modeling task guide</a></li> <li><a href="../tasks/translation">Translation task guide</a></li> <li><a href="../tasks/summarization">Summarization task guide</a></li></ul> <h2 class="relative group"><a id="transformers.XLMProphetNetConfig" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetConfig"><span><svg class="" xmlns="http://www.w3.org/2000/svg" 
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-mfhkjj">XLMProphetNetConfig</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XLMProphetNetConfig"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">XLMProphetNetConfig</span></span></h3> <a id="transformers.XLMProphetNetConfig" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XLMProphetNetConfig"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_prophetnet/configuration_xlm_prophetnet.py#L33" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p 
class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">activation_dropout<span class="opacity-60">: typing.Optional[float] = 0.1</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">activation_function<span class="opacity-60">: typing.Union[str, typing.Callable, NoneType] = 'gelu'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">vocab_size<span class="opacity-60">: typing.Optional[int] = 30522</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">hidden_size<span class="opacity-60">: typing.Optional[int] = 1024</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_ffn_dim<span class="opacity-60">: typing.Optional[int] = 4096</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">num_encoder_layers<span class="opacity-60">: typing.Optional[int] = 12</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">num_encoder_attention_heads<span class="opacity-60">: typing.Optional[int] = 16</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_ffn_dim<span class="opacity-60">: typing.Optional[int] = 4096</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">num_decoder_layers<span class="opacity-60">: typing.Optional[int] = 12</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">num_decoder_attention_heads<span class="opacity-60">: typing.Optional[int] = 16</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_dropout<span class="opacity-60">: typing.Optional[float] = 0.1</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">dropout<span class="opacity-60">: typing.Optional[float] = 0.1</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">max_position_embeddings<span class="opacity-60">: typing.Optional[int] = 512</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">init_std<span class="opacity-60">: typing.Optional[float] = 0.02</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">is_encoder_decoder<span class="opacity-60">: typing.Optional[bool] = True</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black 
hover:text-white dark:hover:bg-white dark:hover:text-black">add_cross_attention<span class="opacity-60">: typing.Optional[bool] = True</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_start_token_id<span class="opacity-60">: typing.Optional[int] = 0</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">ngram<span class="opacity-60">: typing.Optional[int] = 2</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">num_buckets<span class="opacity-60">: typing.Optional[int] = 32</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">relative_max_distance<span class="opacity-60">: typing.Optional[int] = 128</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">disable_ngram_loss<span class="opacity-60">: typing.Optional[bool] = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">eps<span class="opacity-60">: typing.Optional[float] = 0.0</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">use_cache<span class="opacity-60">: typing.Optional[bool] = True</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">pad_token_id<span class="opacity-60">: typing.Optional[int] = 0</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">bos_token_id<span class="opacity-60">: typing.Optional[int] = 1</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">eos_token_id<span class="opacity-60">: typing.Optional[int] = 2</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 25 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetConfig.activation_dropout" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" 
href="#transformers.XLMProphetNetConfig.activation_dropout"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>activation_dropout</strong> (<code>float</code>, <em>optional</em>, defaults to 0.1) — The dropout ratio for activations inside the fully connected layer.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetConfig.activation_function" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetConfig.activation_function"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>activation_function</strong> (<code>str</code> or <code>function</code>, <em>optional</em>, defaults to <code>"gelu"</code>) — The non-linear activation function (function or string) in the encoder and pooler. 
If string, <code>"gelu"</code>, <code>"relu"</code>, <code>"silu"</code> and <code>"gelu_new"</code> are supported.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetConfig.vocab_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetConfig.vocab_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>vocab_size</strong> (<code>int</code>, <em>optional</em>, defaults to 30522) — Vocabulary size of the ProphetNET model. Defines the number of different tokens that can be represented by the <code>inputs_ids</code> passed when calling <a href="/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet#transformers.XLMProphetNetModel">XLMProphetNetModel</a>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetConfig.hidden_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetConfig.hidden_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>hidden_size</strong> (<code>int</code>, <em>optional</em>, defaults to 1024) — Dimensionality of the layers and the pooler layer.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetConfig.encoder_ffn_dim" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetConfig.encoder_ffn_dim"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" 
viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>encoder_ffn_dim</strong> (<code>int</code>, <em>optional</em>, defaults to 4096) — Dimensionality of the “intermediate” (often named feed-forward) layer in decoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetConfig.num_encoder_layers" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetConfig.num_encoder_layers"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>num_encoder_layers</strong> (<code>int</code>, <em>optional</em>, defaults to 12) — Number of encoder layers.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetConfig.num_encoder_attention_heads" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetConfig.num_encoder_attention_heads"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>num_encoder_attention_heads</strong> (<code>int</code>, <em>optional</em>, defaults to 16) — Number of attention heads for each attention layer in the Transformer encoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a 
id="transformers.XLMProphetNetConfig.decoder_ffn_dim" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetConfig.decoder_ffn_dim"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>decoder_ffn_dim</strong> (<code>int</code>, <em>optional</em>, defaults to 4096) — Dimensionality of the <code>intermediate</code> (often named feed-forward) layer in decoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetConfig.num_decoder_layers" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetConfig.num_decoder_layers"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>num_decoder_layers</strong> (<code>int</code>, <em>optional</em>, defaults to 12) — Number of decoder layers.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetConfig.num_decoder_attention_heads" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetConfig.num_decoder_attention_heads"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 
28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>num_decoder_attention_heads</strong> (<code>int</code>, <em>optional</em>, defaults to 16) — Number of attention heads for each attention layer in the Transformer decoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetConfig.attention_dropout" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetConfig.attention_dropout"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_dropout</strong> (<code>float</code>, <em>optional</em>, defaults to 0.1) — The dropout ratio for the attention probabilities.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetConfig.dropout" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetConfig.dropout"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>dropout</strong> (<code>float</code>, <em>optional</em>, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetConfig.max_position_embeddings" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetConfig.max_position_embeddings"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" 
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>max_position_embeddings</strong> (<code>int</code>, <em>optional</em>, defaults to 512) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetConfig.init_std" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetConfig.init_std"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>init_std</strong> (<code>float</code>, <em>optional</em>, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetConfig.add_cross_attention" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetConfig.add_cross_attention"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>add_cross_attention</strong> 
(<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether cross-attention layers should be added to the model.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetConfig.is_encoder_decoder" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetConfig.is_encoder_decoder"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>is_encoder_decoder</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether this is an encoder/decoder model.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetConfig.pad_token_id" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetConfig.pad_token_id"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>pad_token_id</strong> (<code>int</code>, <em>optional</em>, defaults to 1) — Padding token id.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetConfig.bos_token_id" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetConfig.bos_token_id"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 
8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>bos_token_id</strong> (<code>int</code>, <em>optional</em>, defaults to 0) — Beginning of stream token id.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetConfig.eos_token_id" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetConfig.eos_token_id"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>eos_token_id</strong> (<code>int</code>, <em>optional</em>, defaults to 2) — End of stream token id.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetConfig.ngram" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetConfig.ngram"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>ngram</strong> (<code>int</code>, <em>optional</em>, defaults to 2) — Number of future tokens to predict. 
Set to 1 to behave like a traditional language model and predict only the next token.
- **num_buckets** (`int`, *optional*, defaults to 32) — The number of buckets to use for each attention layer. This is for relative position calculation. See the [T5 paper](https://arxiv.org/abs/1910.10683) for more details.
- **relative_max_distance** (`int`, *optional*, defaults to 128) — Relative distances greater than this number are put into the last bucket. This is for relative position calculation. See the [T5 paper](https://arxiv.org/abs/1910.10683) for more details.
- **disable_ngram_loss** (`bool`, *optional*, defaults to `False`) — Whether to train by predicting only the next first token.
- **eps** (`float`, *optional*, defaults to 0.0) — Controls the `epsilon` parameter value for label smoothing in the loss calculation. If set to 0, no label smoothing is performed.
- **use_cache** (`bool`, *optional*, defaults to `True`) — Whether or not the model should return the last key/values attentions (not used by all models).

This is the configuration class to store the configuration of an [XLMProphetNetModel](/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet#transformers.XLMProphetNetModel). It is used to instantiate an XLMProphetNet model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the XLMProphetNet [microsoft/xprophetnet-large-wiki100-cased](https://huggingface.co/microsoft/xprophetnet-large-wiki100-cased) architecture.

Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information.
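The relationship between the configuration and the model follows the usual `transformers` pattern; a minimal sketch (a model initialized from a default configuration has random weights):

```python
from transformers import XLMProphetNetConfig, XLMProphetNetModel

# Initialize a configuration with default values
# (similar to microsoft/xprophetnet-large-wiki100-cased)
configuration = XLMProphetNetConfig()

# Instantiate a model (with random weights) from that configuration
model = XLMProphetNetModel(configuration)

# The configuration can be read back from the model
configuration = model.config
```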
## XLMProphetNetTokenizer

`class transformers.XLMProphetNetTokenizer(vocab_file, bos_token='[SEP]', eos_token='[SEP]', sep_token='[SEP]', unk_token='[UNK]', pad_token='[PAD]', cls_token='[CLS]', mask_token='[MASK]', sp_model_kwargs: Optional[Dict[str, Any]] = None, **kwargs)` ([source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_prophetnet/tokenization_xlm_prophetnet.py#L59))

**Parameters**

- **vocab_file** (`str`) — Path to the vocabulary file.
- **bos_token** (`str`, *optional*, defaults to `"<s>"`) — The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token. Note that when building a sequence using special tokens, this is not the token used for the beginning of sequence; the token used is the `cls_token`.
- **eos_token** (`str`, *optional*, defaults to `"</s>"`) — The end of sequence token. Note that when building a sequence using special tokens, this is not the token used for the end of sequence; the token used is the `sep_token`.
- **sep_token** (`str`, *optional*, defaults to `"</s>"`) — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification, or a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.
- **cls_token** (`str`, *optional*, defaults to `"<s>"`) — The classifier token, which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.
- **unk_token** (`str`, *optional*, defaults to `"<unk>"`) — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
- **pad_token** (`str`, *optional*, defaults to `"<pad>"`) — The token used for padding, for example when batching sequences of different lengths.
- **mask_token** (`str`, *optional*, defaults to `"<mask>"`) — The token used for masking values. This is the token used when training this model with masked language modeling, and the token the model will try to predict.
- **additional_special_tokens** (`List[str]`, *optional*, defaults to `["<s>NOTUSED", "</s>NOTUSED"]`) — Additional special tokens used by the tokenizer.
- **sp_model_kwargs** (`dict`, *optional*) — Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things, to set:
  - `enable_sampling`: Enable subword regularization.
  - `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout.
    - `nbest_size = {0,1}`: No sampling is performed.
    - `nbest_size > 1`: samples from the `nbest_size` results.
    - `nbest_size < 0`: assumes `nbest_size` is infinite and samples from all hypotheses (lattice) using the forward-filtering-and-backward-sampling algorithm.
  - `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for BPE-dropout.
- **sp_model** (`SentencePieceProcessor`) — The *SentencePiece* processor that is used for every conversion (string, tokens and IDs).

Adapted from [RobertaTokenizer](/docs/transformers/v4.34.0/en/model_doc/roberta#transformers.RobertaTokenizer) and [XLNetTokenizer](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetTokenizer). Based on [SentencePiece](https://github.com/google/sentencepiece).

This tokenizer inherits from [PreTrainedTokenizer](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer), which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
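A minimal usage sketch, assuming the `microsoft/xprophetnet-large-wiki100-cased` checkpoint referenced above and an installed `sentencepiece` dependency:

```python
from transformers import XLMProphetNetTokenizer

tokenizer = XLMProphetNetTokenizer.from_pretrained("microsoft/xprophetnet-large-wiki100-cased")

# Encoding a sentence appends the special tokens described below ("X [SEP]")
encoding = tokenizer("Hello, how are you?")
print(encoding.input_ids)
print(tokenizer.convert_ids_to_tokens(encoding.input_ids))
```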
#### build_inputs_with_special_tokens

`build_inputs_with_special_tokens(token_ids_0: List[int], token_ids_1: Optional[List[int]] = None) → List[int]` ([source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_prophetnet/tokenization_xlm_prophetnet.py#L320))

**Parameters**

- **token_ids_0** (`List[int]`) — List of IDs to which the special tokens will be added.
- **token_ids_1** (`List[int]`, *optional*) — Optional second list of IDs for sequence pairs.

**Returns:** `List[int]` — List of [input IDs](../glossary#input-ids) with the appropriate special tokens.

Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. An XLMProphetNet sequence has the following format:

- single sequence: `X [SEP]`
- pair of sequences: `A [SEP] B [SEP]`
class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">tokens<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> </div></div> <p data-svelte-h="svelte-1ne8awa">Converts a sequence of tokens (strings for sub-words) in a single string.</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XLMProphetNetTokenizer.create_token_type_ids_from_sequences"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>create_token_type_ids_from_sequences</span></h4> <a id="transformers.XLMProphetNetTokenizer.create_token_type_ids_from_sequences" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XLMProphetNetTokenizer.create_token_type_ids_from_sequences"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_prophetnet/tokenization_xlm_prophetnet.py#L247" 
target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_0<span class="opacity-60">: typing.List[int]</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_1<span class="opacity-60">: typing.Optional[typing.List[int]] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><code>List[int]</code></span></span></p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetTokenizer.create_token_type_ids_from_sequences.token_ids_0" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetTokenizer.create_token_type_ids_from_sequences.token_ids_0"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_ids_0</strong> (<code>List[int]</code>) — List of IDs.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetTokenizer.create_token_type_ids_from_sequences.token_ids_1" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetTokenizer.create_token_type_ids_from_sequences.token_ids_1"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 
43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_ids_1</strong> (<code>List[int]</code>, <em>optional</em>) — Optional second list of IDs for sequence pairs.</span></span> </li></ul> <div id="transformers.XLMProphetNetTokenizer.create_token_type_ids_from_sequences.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><code>List[int]</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>List of zeros.</p> </p> </div></div> <p data-svelte-h="svelte-194ygpb">Create a mask from the two sequences passed to be used in a sequence-pair classification task. XLMProphetNet does not make use of token type ids, therefore a list of zeros is returned.</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XLMProphetNetTokenizer.get_special_tokens_mask"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>get_special_tokens_mask</span></h4> <a id="transformers.XLMProphetNetTokenizer.get_special_tokens_mask" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XLMProphetNetTokenizer.get_special_tokens_mask"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 
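A brief sketch of this behaviour (checkpoint name and sentences are illustrative):

```python
from transformers import XLMProphetNetTokenizer

tokenizer = XLMProphetNetTokenizer.from_pretrained("microsoft/xprophetnet-large-wiki100-cased")

ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Hello world"))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("How are you?"))

# One 0 per position of the built "A [SEP] B [SEP]" sequence
token_type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)
assert all(t == 0 for t in token_type_ids)
```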
#### get_special_tokens_mask

`get_special_tokens_mask(token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False) → List[int]` ([source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_prophetnet/tokenization_xlm_prophetnet.py#L219))

**Parameters**

- **token_ids_0** (`List[int]`) — List of IDs.
- **token_ids_1** (`List[int]`, *optional*) — Optional second list of IDs for sequence pairs.
- **already_has_special_tokens** (`bool`, *optional*, defaults to `False`) — Whether or not the token list is already formatted with special tokens for the model.

**Returns:** `List[int]` — A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.

Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer `prepare_for_model` method.
This method is called when adding special tokens using the tokenizer <code>prepare_for_model</code> method.</p></div></div> <h2 class="relative group"><a id="transformers.XLMProphetNetModel" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetModel"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1qfzu5g">XLMProphetNetModel</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XLMProphetNetModel"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">XLMProphetNetModel</span></span></h3> <a id="transformers.XLMProphetNetModel" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XLMProphetNetModel"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" 
fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_prophetnet/modeling_xlm_prophetnet.py#L1770" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60">: XLMProphetNetConfig</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetModel.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetModel.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet#transformers.XLMProphetNetConfig">XLMProphetNetConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1sed8ym">The bare XLMProphetNet Model outputting raw hidden-states without any specific head on top. This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel">PreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)</p> <p data-svelte-h="svelte-jbq9y7">Original ProphetNet code can be found <a href="https://github.com/microsoft/ProphetNet" rel="nofollow">here</a>. Checkpoints were converted from original Fairseq checkpoints. 
For more information on the checkpoint conversion, please take a look at the file <code>convert_prophetnet_original_pytorch_checkpoint_to_pytorch.py</code>.</p> <p data-svelte-h="svelte-1707pv8">This model is a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XLMProphetNetModel.forward"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>forward</span></h4> <a id="transformers.XLMProphetNetModel.forward" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XLMProphetNetModel.forward"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_prophetnet/modeling_xlm_prophetnet.py#L1804" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span 
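To make the configuration/weights distinction above concrete, a short illustrative sketch (not part of the original reference) showing both ways of instantiating the model:

```python
>>> from transformers import XLMProphetNetConfig, XLMProphetNetModel

>>> # instantiating from a configuration gives a model with randomly initialized weights
>>> model = XLMProphetNetModel(XLMProphetNetConfig())

>>> # from_pretrained() loads the weights from a checkpoint
>>> model = XLMProphetNetModel.from_pretrained("patrickvonplaten/xprophetnet-large-uncased-standalone")
```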
class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_input_ids<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_attention_mask<span class="opacity-60">: typing.Optional[torch.BoolTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_head_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">cross_attn_head_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_outputs<span class="opacity-60">: typing.Optional[typing.Tuple] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">past_key_values<span class="opacity-60">: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">inputs_embeds<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_inputs_embeds<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">use_cache<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" 
data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><code>transformers.models.xlm_prophetnet.modeling_xlm_prophetnet.XLMProphetNetSeq2SeqModelOutput</code> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 13 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetModel.forward.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetModel.forward.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.<p></p> <p>Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. 
See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p> <p><a href="../glossary#input-ids">What are input IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetModel.forward.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetModel.forward.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>torch.Tensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetModel.forward.decoder_input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetModel.forward.decoder_input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>decoder_input_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, target_sequence_length)</code>, <em>optional</em>) — Indices of decoder input sequence tokens in the vocabulary.<p></p> <p>Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. 
See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p> <p><a href="../glossary#decoder-input-ids">What are decoder input IDs?</a></p> <p>XLMProphetNet uses the <code>eos_token_id</code> as the starting token for <code>decoder_input_ids</code> generation. If <code>past_key_values</code> is used, optionally only the last <code>decoder_input_ids</code> have to be input (see <code>past_key_values</code>).</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetModel.forward.decoder_attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetModel.forward.decoder_attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>decoder_attention_mask</strong> (<code>torch.BoolTensor</code> of shape <code>(batch_size, target_sequence_length)</code>, <em>optional</em>) — Default behavior: generate a tensor that ignores pad tokens in <code>decoder_input_ids</code>. Causal mask will also be used by default.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetModel.forward.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetModel.forward.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>head_mask</strong> (<code>torch.Tensor</code> of shape <code>(encoder_layers, encoder_attention_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the attention modules in the encoder. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetModel.forward.decoder_head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetModel.forward.decoder_head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>decoder_head_mask</strong> (<code>torch.Tensor</code> of shape <code>(decoder_layers, decoder_attention_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetModel.forward.cross_attn_head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetModel.forward.cross_attn_head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>cross_attn_head_mask</strong> (<code>torch.Tensor</code> of shape <code>(decoder_layers, decoder_attention_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the cross-attention modules. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetModel.forward.encoder_outputs" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetModel.forward.encoder_outputs"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>encoder_outputs</strong> (<code>tuple(tuple(torch.FloatTensor)</code>, <em>optional</em>) — Tuple consists of (<code>last_hidden_state</code>, <em>optional</em>: <code>hidden_states</code>, <em>optional</em>: <code>attentions</code>) <code>last_hidden_state</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>, <em>optional</em>) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetModel.forward.past_key_values" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetModel.forward.past_key_values"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>past_key_values</strong> (<code>tuple(tuple(torch.FloatTensor))</code> of length <code>config.n_layers</code> with each tuple having 4 tensors of shape <code>(batch_size, num_heads, sequence_length - 1, embed_size_per_head)</code>) — Contains precomputed key and value hidden-states of the attention blocks. 
Can be used to speed up decoding.<p></p> <p>If <code>past_key_values</code> are used, the user can optionally input only the last <code>decoder_input_ids</code> (those that don’t have their past key value states given to this model) of shape <code>(batch_size, 1)</code> instead of all <code>decoder_input_ids</code> of shape <code>(batch_size, sequence_length)</code>.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetModel.forward.use_cache" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetModel.forward.use_cache"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>use_cache</strong> (<code>bool</code>, <em>optional</em>) — If set to <code>True</code>, <code>past_key_values</code> key value states are returned and can be used to speed up decoding (see <code>past_key_values</code>).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetModel.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetModel.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. 
See <code>attentions</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetModel.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetModel.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. See <code>hidden_states</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetModel.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetModel.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li></ul> <div id="transformers.XLMProphetNetModel.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><code>transformers.models.xlm_prophetnet.modeling_xlm_prophetnet.XLMProphetNetSeq2SeqModelOutput</code> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <code>transformers.models.xlm_prophetnet.modeling_xlm_prophetnet.XLMProphetNetSeq2SeqModelOutput</code> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is 
Returns: `transformers.models.xlm_prophetnet.modeling_xlm_prophetnet.XLMProphetNetSeq2SeqModelOutput` or `tuple(torch.FloatTensor)`

A `XLMProphetNetSeq2SeqModelOutput` or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([XLMProphetNetConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet#transformers.XLMProphetNetConfig)) and inputs.

- **last_hidden_state** (`torch.FloatTensor` of shape `(batch_size, decoder_sequence_length, hidden_size)`) — Sequence of main stream hidden-states at the output of the last layer of the decoder of the model. If `past_key_values` is used, only the last hidden-state of the sequences of shape `(batch_size, 1, hidden_size)` is output.
- **last_hidden_state_ngram** (`torch.FloatTensor` of shape `(batch_size, ngram * decoder_sequence_length, config.vocab_size)`, *optional*) — Sequence of predict stream hidden-states at the output of the last layer of the decoder of the model.
- **past_key_values** (`List[torch.FloatTensor]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`) — List of `torch.FloatTensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size, num_attn_heads, decoder_sequence_length, embed_size_per_head)`. Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be used (see `past_key_values` input) to speed up sequential decoding.
- **decoder_hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, decoder_sequence_length, hidden_size)`. Hidden-states of the main stream of the decoder at the output of each layer plus the initial embedding outputs.
- **decoder_ngram_hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, ngram * decoder_sequence_length, hidden_size)`. Hidden-states of the predict stream of the decoder at the output of each layer plus the initial embedding outputs.
- **decoder_attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_attn_heads, decoder_sequence_length, decoder_sequence_length)`. Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
- **decoder_ngram_attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_attn_heads, decoder_sequence_length, decoder_sequence_length)`. Attention weights of the predict stream of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
- **cross_attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_attn_heads, encoder_sequence_length, decoder_sequence_length)`. Attention weights of the cross-attention layer of the decoder, after the attention softmax, used to compute the weighted average in the cross-attention heads.
- **encoder_last_hidden_state** (`torch.FloatTensor` of shape `(batch_size, encoder_sequence_length, hidden_size)`, *optional*) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
- **encoder_hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, encoder_sequence_length, hidden_size)`. Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
- **encoder_attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_attn_heads, encoder_sequence_length, encoder_sequence_length)`. Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.

The [XLMProphetNetModel](/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet#transformers.XLMProphetNetModel) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example:
```python
>>> from transformers import AutoTokenizer, XLMProphetNetModel

>>> tokenizer = AutoTokenizer.from_pretrained("patrickvonplaten/xprophetnet-large-uncased-standalone")
>>> model = XLMProphetNetModel.from_pretrained("patrickvonplaten/xprophetnet-large-uncased-standalone")

>>> input_ids = tokenizer(
...     "Studies have been shown that owning a dog is good for you", return_tensors="pt"
... ).input_ids  # Batch size 1
>>> decoder_input_ids = tokenizer("Studies show that", return_tensors="pt").input_ids  # Batch size 1
>>> outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)

>>> last_hidden_states = outputs.last_hidden_state  # main stream hidden states
>>> last_hidden_states_ngram = outputs.last_hidden_state_ngram  # predict hidden states
```
aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_prophetnet/modeling_xlm_prophetnet.py#L1252" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60">: XLMProphetNetConfig</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">word_embeddings<span class="opacity-60">: Embedding = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetEncoder.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetEncoder.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet#transformers.XLMProphetNetConfig">XLMProphetNetConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. 
Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1ibnn1i">The standalone encoder part of the XLMProphetNetModel. This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel">PreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)</p> <p data-svelte-h="svelte-jbq9y7">Original ProphetNet code can be found <a href="https://github.com/microsoft/ProphetNet" rel="nofollow">here</a>. Checkpoints were converted from original Fairseq checkpoints. For more information on the checkpoint conversion, please take a look at the file <code>convert_prophetnet_original_pytorch_checkpoint_to_pytorch.py</code>.</p> <p data-svelte-h="svelte-1707pv8">This model is a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.</p> <p data-svelte-h="svelte-repdd8">word_embeddings (<code>torch.nn.Embeddings</code> of shape <code>(config.vocab_size, config.hidden_size)</code>, <em>optional</em>): The word embedding parameters. This can be used to initialize <a href="/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet#transformers.XLMProphetNetEncoder">XLMProphetNetEncoder</a> with pre-defined word embeddings instead of randomly initialized word embeddings.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XLMProphetNetEncoder.forward"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>forward</span></h4> <a id="transformers.XLMProphetNetEncoder.forward" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XLMProphetNetEncoder.forward"><svg class="" 
xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_prophetnet/modeling_xlm_prophetnet.py#L1282" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">inputs_embeds<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutput">transformers.modeling_outputs.BaseModelOutput</a> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white 
rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 6 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetEncoder.forward.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetEncoder.forward.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.<p></p> <p>Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. 
See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p> <p><a href="../glossary#input-ids">What are input IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetEncoder.forward.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetEncoder.forward.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>torch.Tensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetEncoder.forward.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetEncoder.forward.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>head_mask</strong> (<code>torch.Tensor</code> of shape <code>(encoder_layers, encoder_attention_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the attention modules in the encoder. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetEncoder.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetEncoder.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. See <code>attentions</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetEncoder.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetEncoder.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. 
See <code>hidden_states</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetEncoder.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetEncoder.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li></ul> <div id="transformers.XLMProphetNetEncoder.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutput">transformers.modeling_outputs.BaseModelOutput</a> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutput">transformers.modeling_outputs.BaseModelOutput</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet#transformers.XLMProphetNetConfig">XLMProphetNetConfig</a>) and inputs.</p> <ul> <li> <p><strong>last_hidden_state</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>) — Sequence of hidden-states at the output of the last layer of the model.</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for 
each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-1ikt3m6">The <a href="/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet#transformers.XLMProphetNetEncoder">XLMProphetNetEncoder</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.XLMProphetNetEncoder.forward.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetEncoder.forward.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, XLMProphetNetEncoder <span 
class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"patrickvonplaten/xprophetnet-large-uncased-standalone"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = XLMProphetNetEncoder.from_pretrained(<span class="hljs-string">"patrickvonplaten/prophetnet-large-uncased-standalone"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(<span class="hljs-string">"Hello, my dog is cute"</span>, return_tensors=<span class="hljs-string">"pt"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model(**inputs) <span class="hljs-meta">&gt;&gt;&gt; </span>last_hidden_states = outputs.last_hidden_state</pre></div></div></div></div> <h2 class="relative group"><a id="transformers.XLMProphetNetDecoder" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetDecoder"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1ifz3yz">XLMProphetNetDecoder</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XLMProphetNetDecoder"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">XLMProphetNetDecoder</span></span></h3> <a id="transformers.XLMProphetNetDecoder" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XLMProphetNetDecoder"><svg class="" xmlns="http://www.w3.org/2000/svg" 
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_prophetnet/modeling_xlm_prophetnet.py#L1393" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60">: XLMProphetNetConfig</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">word_embeddings<span class="opacity-60">: typing.Optional[torch.nn.modules.sparse.Embedding] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetDecoder.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetDecoder.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet#transformers.XLMProphetNetConfig">XLMProphetNetConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. 
Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-hzdwtq">The standalone decoder part of the XLMProphetNetModel. This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel">PreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)</p> <p data-svelte-h="svelte-jbq9y7">Original ProphetNet code can be found <a href="https://github.com/microsoft/ProphetNet" rel="nofollow">here</a>. Checkpoints were converted from original Fairseq checkpoints. For more information on the checkpoint conversion, please take a look at the file <code>convert_prophetnet_original_pytorch_checkpoint_to_pytorch.py</code>.</p> <p data-svelte-h="svelte-1707pv8">This model is a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.</p> <p data-svelte-h="svelte-repdd8">word_embeddings (<code>torch.nn.Embeddings</code> of shape <code>(config.vocab_size, config.hidden_size)</code>, <em>optional</em>): The word embedding parameters. This can be used to initialize <a href="/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet#transformers.XLMProphetNetEncoder">XLMProphetNetEncoder</a> with pre-defined word embeddings instead of randomly initialized word embeddings.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XLMProphetNetDecoder.forward"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>forward</span></h4> <a id="transformers.XLMProphetNetDecoder.forward" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XLMProphetNetDecoder.forward"><svg class="" 
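As a minimal sketch (not part of the original reference), the `word_embeddings` argument can be used to share an externally created embedding table with the standalone decoder. The tiny configuration values below are purely illustrative; real checkpoints use much larger sizes:

```python
>>> import torch
>>> from transformers import XLMProphetNetConfig, XLMProphetNetDecoder

>>> # Purely illustrative, small configuration (hypothetical values chosen for this sketch).
>>> config = XLMProphetNetConfig(
...     vocab_size=100,
...     hidden_size=16,
...     num_encoder_layers=1,
...     num_decoder_layers=1,
...     num_encoder_attention_heads=2,
...     num_decoder_attention_heads=2,
...     encoder_ffn_dim=32,
...     decoder_ffn_dim=32,
... )
>>> # An embedding table of shape (config.vocab_size, config.hidden_size), shared with the decoder.
>>> shared_embeddings = torch.nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=config.pad_token_id)
>>> decoder = XLMProphetNetDecoder(config, word_embeddings=shared_embeddings)
```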
xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_prophetnet/modeling_xlm_prophetnet.py#L1430" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_hidden_states<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_attention_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">cross_attn_head_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">past_key_values<span class="opacity-60">: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">inputs_embeds<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">use_cache<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: 
typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><code>transformers.models.xlm_prophetnet.modeling_xlm_prophetnet.XLMProphetNetDecoderModelOutput</code> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 11 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetDecoder.forward.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetDecoder.forward.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.<p></p> <p>Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. 
See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p> <p><a href="../glossary#input-ids">What are input IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetDecoder.forward.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetDecoder.forward.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>torch.Tensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetDecoder.forward.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetDecoder.forward.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>head_mask</strong> (<code>torch.Tensor</code> of shape <code>(encoder_layers, encoder_attention_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the attention modules in the encoder. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetDecoder.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetDecoder.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. See <code>attentions</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetDecoder.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetDecoder.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. 
See <code>hidden_states</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetDecoder.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetDecoder.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetDecoder.forward.encoder_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetDecoder.forward.encoder_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>encoder_hidden_states</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>, <em>optional</em>) — Sequence of hidden-states at the output of the last layer of the encoder. 
Used in the cross-attention if the model is configured as a decoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetDecoder.forward.encoder_attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetDecoder.forward.encoder_attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>encoder_attention_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in <code>[0, 1]</code>:</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetDecoder.forward.cross_attn_head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetDecoder.forward.cross_attn_head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>cross_attn_head_mask</strong> (<code>torch.Tensor</code> of shape <code>(decoder_layers, decoder_attention_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the cross-attention modules. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetDecoder.forward.past_key_values" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetDecoder.forward.past_key_values"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>past_key_values</strong> (<code>tuple(tuple(torch.FloatTensor))</code> of length <code>config.n_layers</code> with each tuple having 4 tensors of shape <code>(batch_size, num_heads, sequence_length - 1, embed_size_per_head)</code>) — Contains precomputed key and value hidden-states of the attention blocks. Can be used to speed up decoding.<p></p> <p>If <code>past_key_values</code> are used, the user can optionally input only the last <code>decoder_input_ids</code> (those that don’t have their past key value states given to this model) of shape <code>(batch_size, 1)</code> instead of all <code>decoder_input_ids</code> of shape <code>(batch_size, sequence_length)</code>.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetDecoder.forward.use_cache" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetDecoder.forward.use_cache"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>use_cache</strong> (<code>bool</code>, <em>optional</em>) — If set to <code>True</code>, <code>past_key_values</code> key value states are returned and can be used to speed up decoding (see <code>past_key_values</code>).<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> 
<li>0 for tokens that are <strong>masked</strong>.</li> </ul></span></span> </li></ul> <div id="transformers.XLMProphetNetDecoder.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><code>transformers.models.xlm_prophetnet.modeling_xlm_prophetnet.XLMProphetNetDecoderModelOutput</code> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <code>transformers.models.xlm_prophetnet.modeling_xlm_prophetnet.XLMProphetNetDecoderModelOutput</code> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet#transformers.XLMProphetNetConfig">XLMProphetNetConfig</a>) and inputs.</p> <ul> <li> <p><strong>last_hidden_state</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, decoder_sequence_length, hidden_size)</code>) — Sequence of main stream hidden-states at the output of the last layer of the decoder of the model.</p> <p>If <code>past_key_values</code> is used only the last hidden-state of the sequences of shape <code>(batch_size, 1, hidden_size)</code> is output.</p> </li> <li> <p><strong>last_hidden_state_ngram</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, ngram * decoder_sequence_length, config.vocab_size)</code>) — Sequence of predict stream hidden-states at the output of the last layer of the decoder of the model.</p> </li> <li> <p><strong>past_key_values</strong> (<code>List[torch.FloatTensor]</code>, <em>optional</em>, returned when <code>use_cache=True</code> is passed or when <code>config.use_cache=True</code>) — List of <code>torch.FloatTensor</code> of length <code>config.n_layers</code>, with each tensor of shape <code>(2, batch_size, num_attn_heads, decoder_sequence_length, embed_size_per_head)</code>).</p> <p>Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be used (see <code>past_key_values</code> input) to speed up sequential decoding.</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, decoder_sequence_length, hidden_size)</code>.</p> <p>Hidden-states of main stream of the decoder at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>ngram_hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, ngram * decoder_sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the predict stream of the decoder at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when 
<code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_attn_heads, decoder_sequence_length, decoder_sequence_length)</code>.</p> <p>Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> <li> <p><strong>ngram_attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_attn_heads, decoder_sequence_length, decoder_sequence_length)</code>.</p> <p>Attentions weights of the predict stream of the decoder, after the attention softmax, used to compute the weighted average in the</p> </li> <li> <p><strong>cross_attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_attn_heads, encoder_sequence_length, decoder_sequence_length)</code>.</p> <p>Attentions weights of the cross-attention layer of the decoder, after the attention softmax, used to compute the weighted average in the</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-5i7qfu">The <a href="/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet#transformers.XLMProphetNetDecoder">XLMProphetNetDecoder</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.XLMProphetNetDecoder.forward.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetDecoder.forward.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none 
Example:

```python
>>> from transformers import AutoTokenizer, XLMProphetNetDecoder
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("patrickvonplaten/xprophetnet-large-uncased-standalone")
>>> model = XLMProphetNetDecoder.from_pretrained(
...     "patrickvonplaten/xprophetnet-large-uncased-standalone", add_cross_attention=False
... )
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)
>>> last_hidden_states = outputs.last_hidden_state
```
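The `use_cache` / `past_key_values` mechanism described in the parameter list above can be exercised with the same objects. This is only a minimal sketch (not part of the original reference), assuming the cached keys and values from a first pass are re-fed on a later pass:

```python
>>> # First pass over the full prefix, asking the decoder to return its key/value cache.
>>> outputs = model(**inputs, use_cache=True)
>>> past_key_values = outputs.past_key_values

>>> # A later pass only needs the newest token id; the cache covers the earlier positions.
>>> new_token = inputs["input_ids"][:, -1:]  # stand-in for a freshly predicted token id
>>> outputs = model(input_ids=new_token, past_key_values=past_key_values, use_cache=True)
>>> last_hidden_state = outputs.last_hidden_state  # shape (batch_size, 1, hidden_size), as noted in the returns above
```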
id="transformers.XLMProphetNetForConditionalGeneration"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">XLMProphetNetForConditionalGeneration</span></span></h3> <a id="transformers.XLMProphetNetForConditionalGeneration" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XLMProphetNetForConditionalGeneration"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_prophetnet/modeling_xlm_prophetnet.py#L1900" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60">: XLMProphetNetConfig</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetForConditionalGeneration.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 
with-hover:right-full" href="#transformers.XLMProphetNetForConditionalGeneration.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet#transformers.XLMProphetNetConfig">XLMProphetNetConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-r6u7ju">The XLMProphetNet Model with a language modeling head. Can be used for sequence generation tasks. This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel">PreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)</p> <p data-svelte-h="svelte-jbq9y7">Original ProphetNet code can be found <a href="https://github.com/microsoft/ProphetNet" rel="nofollow">here</a>. Checkpoints were converted from original Fairseq checkpoints. For more information on the checkpoint conversion, please take a look at the file <code>convert_prophetnet_original_pytorch_checkpoint_to_pytorch.py</code>.</p> <p data-svelte-h="svelte-1707pv8">This model is a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> sub-class. 
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

#### forward

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_prophetnet/modeling_xlm_prophetnet.py#L1923)

( input_ids: typing.Optional[torch.Tensor] = None, attention_mask: typing.Optional[torch.Tensor] = None, decoder_input_ids: typing.Optional[torch.Tensor] = None, decoder_attention_mask: typing.Optional[torch.BoolTensor] = None, head_mask: typing.Optional[torch.Tensor] = None, decoder_head_mask: typing.Optional[torch.Tensor] = None, cross_attn_head_mask: typing.Optional[torch.Tensor] = None, encoder_outputs: typing.Optional[torch.Tensor] = None, past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None, inputs_embeds: typing.Optional[torch.Tensor] = None, decoder_inputs_embeds: typing.Optional[torch.Tensor] = None, labels: typing.Optional[torch.Tensor] = None, use_cache: typing.Optional[bool] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → `transformers.models.xlm_prophetnet.modeling_xlm_prophetnet.XLMProphetNetSeq2SeqLMOutput` or `tuple(torch.FloatTensor)`
cursor-pointer"><span><code>transformers.models.xlm_prophetnet.modeling_xlm_prophetnet.XLMProphetNetSeq2SeqLMOutput</code> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 14 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetForConditionalGeneration.forward.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetForConditionalGeneration.forward.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.<p></p> <p>Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. 
See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p> <p><a href="../glossary#input-ids">What are input IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetForConditionalGeneration.forward.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetForConditionalGeneration.forward.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>torch.Tensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. 
  Mask values selected in `[0, 1]`:

  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.

  [What are attention masks?](../glossary#attention-mask)

- **decoder_input_ids** (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*) — Indices of decoder input sequence tokens in the vocabulary.

  Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.__call__()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details.

  [What are decoder input IDs?](../glossary#decoder-input-ids)

  XLMProphetNet uses the `eos_token_id` as the starting token for `decoder_input_ids` generation.
  If `past_key_values` is used, optionally only the last `decoder_input_ids` have to be input (see `past_key_values`).

- **decoder_attention_mask** (`torch.BoolTensor` of shape `(batch_size, target_sequence_length)`, *optional*) — Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default.

- **head_mask** (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*) — Mask to nullify selected heads of the attention modules in the encoder.
  Mask values selected in `[0, 1]`:

  - 1 indicates the head is **not masked**,
  - 0 indicates the head is **masked**.

- **decoder_head_mask** (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*) — Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in `[0, 1]`:

  - 1 indicates the head is **not masked**,
  - 0 indicates the head is **masked**.

- **cross_attn_head_mask** (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*) — Mask to nullify selected heads of the cross-attention modules.
  Mask values selected in `[0, 1]`:

  - 1 indicates the head is **not masked**,
  - 0 indicates the head is **masked**.

- **encoder_outputs** (`tuple(tuple(torch.FloatTensor))`, *optional*) — Tuple consists of (`last_hidden_state`, *optional*: `hidden_states`, *optional*: `attentions`). `last_hidden_state`, of shape `(batch_size, sequence_length, hidden_size)`, is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.

- **past_key_values** (`tuple(tuple(torch.FloatTensor))` of length `config.n_layers` with each tuple having 4 tensors of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`) — Contains precomputed key and value hidden-states of the attention blocks.
  Can be used to speed up decoding.

  If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that don’t have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)`.

- **use_cache** (`bool`, *optional*) — If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`).

- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers.
  See `attentions` under returned tensors for more detail.

- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.

- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.
height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>labels</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size,)</code>, <em>optional</em>) — Labels for computing the sequence classification/regression loss. Indices should be in <code>[-100, 0, ..., config.vocab_size - 1]</code>. All labels set to <code>-100</code> are ignored (masked), the loss is only computed for labels in <code>[0, ..., config.vocab_size]</code></span></span> </li></ul> <div id="transformers.XLMProphetNetForConditionalGeneration.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><code>transformers.models.xlm_prophetnet.modeling_xlm_prophetnet.XLMProphetNetSeq2SeqLMOutput</code> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <code>transformers.models.xlm_prophetnet.modeling_xlm_prophetnet.XLMProphetNetSeq2SeqLMOutput</code> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet#transformers.XLMProphetNetConfig">XLMProphetNetConfig</a>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<code>torch.FloatTensor</code> of shape <code>(1,)</code>, <em>optional</em>, returned when <code>labels</code> is provided) — Language modeling loss.</p> </li> <li> <p><strong>logits</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, decoder_sequence_length, config.vocab_size)</code>) — Prediction scores of the main stream language modeling head (scores for each vocabulary token before SoftMax).</p> </li> <li> <p><strong>logits_ngram</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, ngram * decoder_sequence_length, config.vocab_size)</code>) — Prediction scores of the predict stream language modeling head (scores for each vocabulary token before SoftMax).</p> </li> <li> <p><strong>past_key_values</strong> (<code>List[torch.FloatTensor]</code>, <em>optional</em>, returned when <code>use_cache=True</code> is passed or when <code>config.use_cache=True</code>) — List of <code>torch.FloatTensor</code> of length <code>config.n_layers</code>, with each tensor of shape <code>(2, batch_size, num_attn_heads, decoder_sequence_length, embed_size_per_head)</code>).</p> <p>Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be used (see <code>past_key_values</code> input) to speed up sequential decoding.</p> </li> <li> <p><strong>decoder_hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of 
- **decoder_hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, decoder_sequence_length, hidden_size)`.

  Hidden-states of the main stream of the decoder at the output of each layer plus the initial embedding outputs.

- **decoder_ngram_hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, ngram * decoder_sequence_length, hidden_size)`.

  Hidden-states of the predict stream of the decoder at the output of each layer plus the initial embedding outputs.

- **decoder_attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_attn_heads, decoder_sequence_length, decoder_sequence_length)`.

  Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.

- **decoder_ngram_attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_attn_heads, decoder_sequence_length, decoder_sequence_length)`.

  Attentions weights of the predict stream of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.

- **cross_attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_attn_heads, encoder_sequence_length, decoder_sequence_length)`.

  Attentions weights of the cross-attention layer of the decoder, after the attention softmax, used to compute the weighted average in the cross-attention heads.

- **encoder_last_hidden_state** (`torch.FloatTensor` of shape `(batch_size, encoder_sequence_length, hidden_size)`, *optional*) — Sequence of hidden-states at the output of the last layer of the encoder of the model.

- **encoder_hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, encoder_sequence_length, hidden_size)`.

  Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
- **encoder_attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_attn_heads, encoder_sequence_length, encoder_sequence_length)`.

  Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.

The [XLMProphetNetForConditionalGeneration](/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet#transformers.XLMProphetNetForConditionalGeneration) forward method, overrides the `__call__` special method.

Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example:

```python
>>> from transformers import AutoTokenizer, XLMProphetNetForConditionalGeneration

>>> tokenizer = AutoTokenizer.from_pretrained("patrickvonplaten/xprophetnet-large-uncased-standalone")
>>> model = XLMProphetNetForConditionalGeneration.from_pretrained(
...     "patrickvonplaten/xprophetnet-large-uncased-standalone"
... )

>>> input_ids = tokenizer(
...     "Studies have been shown that owning a dog is good for you", return_tensors="pt"
... ).input_ids  # Batch size 1
>>> decoder_input_ids = tokenizer("Studies show that", return_tensors="pt").input_ids  # Batch size 1
>>> outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)

>>> logits_next_token = outputs.logits  # logits to predict next token as usual
>>> logits_ngram_next_tokens = outputs.logits_ngram  # logits to predict 2nd, 3rd, ... next tokens
```
## XLMProphetNetForCausalLM

### class transformers.XLMProphetNetForCausalLM

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_prophetnet/modeling_xlm_prophetnet.py#L2116)

( config: XLMProphetNetConfig )

Parameters
- **config** ([XLMProphetNetConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet#transformers.XLMProphetNetConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

The standalone decoder part of the XLMProphetNetModel with a language modeling head on top. The model can be used for causal language modeling. This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

Original ProphetNet code can be found [here](https://github.com/microsoft/ProphetNet). Checkpoints were converted from original Fairseq checkpoints. For more information on the checkpoint conversion, please take a look at the file `convert_prophetnet_original_pytorch_checkpoint_to_pytorch.py`.

This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
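To make the causal language modeling use concrete, here is a hedged sketch (reusing the standalone checkpoint from the earlier examples rather than an official library example) in which the input ids double as `labels`, so the forward pass returns the next-token prediction loss; positions set to `-100` would be ignored by that loss:

```python
>>> # Hedged sketch: compute a causal LM loss with the standalone checkpoint used above.
>>> from transformers import AutoTokenizer, XLMProphetNetForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("patrickvonplaten/xprophetnet-large-uncased-standalone")
>>> model = XLMProphetNetForCausalLM.from_pretrained("patrickvonplaten/xprophetnet-large-uncased-standalone")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs, labels=inputs["input_ids"])  # input ids reused as labels

>>> loss = outputs.loss
>>> logits = outputs.logits
```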
class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XLMProphetNetForCausalLM.forward"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_prophetnet/modeling_xlm_prophetnet.py#L2153" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_hidden_states<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_attention_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">cross_attn_head_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">past_key_values<span class="opacity-60">: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">inputs_embeds<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">labels<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span 
class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">use_cache<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><code>transformers.models.xlm_prophetnet.modeling_xlm_prophetnet.XLMProphetNetDecoderLMOutput</code> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 12 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetForCausalLM.forward.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetForCausalLM.forward.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.<p></p> <p>Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. 
See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p> <p><a href="../glossary#input-ids">What are input IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetForCausalLM.forward.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetForCausalLM.forward.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>torch.Tensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetForCausalLM.forward.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetForCausalLM.forward.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>head_mask</strong> (<code>torch.Tensor</code> of shape <code>(encoder_layers, encoder_attention_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the attention modules in the encoder. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetForCausalLM.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetForCausalLM.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. See <code>attentions</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetForCausalLM.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetForCausalLM.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. 
See <code>hidden_states</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetForCausalLM.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetForCausalLM.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetForCausalLM.forward.encoder_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetForCausalLM.forward.encoder_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>encoder_hidden_states</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>, <em>optional</em>) — Sequence of hidden-states at the output of the last layer of the encoder. 
Used in the cross-attention if the model is configured as a decoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetForCausalLM.forward.encoder_attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetForCausalLM.forward.encoder_attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>encoder_attention_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in <code>[0, 1]</code>:</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetForCausalLM.forward.cross_attn_head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetForCausalLM.forward.cross_attn_head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>cross_attn_head_mask</strong> (<code>torch.Tensor</code> of shape <code>(decoder_layers, decoder_attention_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the cross-attention modules. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetForCausalLM.forward.past_key_values" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetForCausalLM.forward.past_key_values"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>past_key_values</strong> (<code>tuple(tuple(torch.FloatTensor))</code> of length <code>config.n_layers</code> with each tuple having 4 tensors of shape <code>(batch_size, num_heads, sequence_length - 1, embed_size_per_head)</code>) — Contains precomputed key and value hidden-states of the attention blocks. Can be used to speed up decoding.<p></p> <p>If <code>past_key_values</code> are used, the user can optionally input only the last <code>decoder_input_ids</code> (those that don’t have their past key value states given to this model) of shape <code>(batch_size, 1)</code> instead of all <code>decoder_input_ids</code> of shape <code>(batch_size, sequence_length)</code>.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetForCausalLM.forward.use_cache" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetForCausalLM.forward.use_cache"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>use_cache</strong> (<code>bool</code>, <em>optional</em>) — If set to <code>True</code>, <code>past_key_values</code> key value states are returned and can be used to speed up decoding (see <code>past_key_values</code>).<p></p> <ul> <li>1 for tokens that are <strong>not 
masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMProphetNetForCausalLM.forward.labels" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetForCausalLM.forward.labels"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>labels</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in <code>[-100, 0, ..., config.vocab_size]</code> (see <code>input_ids</code> docstring) Tokens with indices set to <code>-100</code> are ignored (masked), the loss is only computed for the tokens with labels n <code>[0, ..., config.vocab_size]</code></span></span> </li></ul> <div id="transformers.XLMProphetNetForCausalLM.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><code>transformers.models.xlm_prophetnet.modeling_xlm_prophetnet.XLMProphetNetDecoderLMOutput</code> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <code>transformers.models.xlm_prophetnet.modeling_xlm_prophetnet.XLMProphetNetDecoderLMOutput</code> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet#transformers.XLMProphetNetConfig">XLMProphetNetConfig</a>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<code>torch.FloatTensor</code> of shape <code>(1,)</code>, <em>optional</em>, returned when <code>labels</code> is provided) — Language modeling loss.</p> </li> <li> <p><strong>logits</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, decoder_sequence_length, config.vocab_size)</code>) — Prediction scores of the main stream language modeling head (scores for each vocabulary token before SoftMax).</p> </li> <li> <p><strong>logits_ngram</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, ngram * decoder_sequence_length, config.vocab_size)</code>) — Prediction scores of the predict stream language modeling head (scores for each vocabulary token before SoftMax).</p> </li> <li> <p><strong>past_key_values</strong> (<code>List[torch.FloatTensor]</code>, <em>optional</em>, returned when 
<code>use_cache=True</code> is passed or when <code>config.use_cache=True</code>) — List of <code>torch.FloatTensor</code> of length <code>config.n_layers</code>, with each tensor of shape <code>(2, batch_size, num_attn_heads, decoder_sequence_length, embed_size_per_head)</code>).</p> <p>Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be used (see <code>past_key_values</code> input) to speed up sequential decoding.</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, decoder_sequence_length, hidden_size)</code>.</p> <p>Hidden-states of main stream of the decoder at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>ngram_hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, ngram * decoder_sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the predict stream of the decoder at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_attn_heads, decoder_sequence_length, decoder_sequence_length)</code>.</p> <p>Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> <li> <p><strong>ngram_attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_attn_heads, decoder_sequence_length, decoder_sequence_length)</code>.</p> <p>Attentions weights of the predict stream of the decoder, after the attention softmax, used to compute the weighted average in the</p> </li> <li> <p><strong>cross_attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_attn_heads, encoder_sequence_length, decoder_sequence_length)</code>.</p> <p>Attentions weights of the cross-attention layer of the decoder, after the attention softmax, used to compute the weighted average in the</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-1r4xjps">The <a href="/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet#transformers.XLMProphetNetForCausalLM">XLMProphetNetForCausalLM</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 
dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.XLMProphetNetForCausalLM.forward.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMProphetNetForCausalLM.forward.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, XLMProphetNetForCausalLM <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"patrickvonplaten/xprophetnet-large-uncased-standalone"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = XLMProphetNetForCausalLM.from_pretrained(<span class="hljs-string">"patrickvonplaten/xprophetnet-large-uncased-standalone"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">assert</span> model.config.is_decoder, <span class="hljs-string">f"<span 
class="hljs-subst">{model.__class__}</span> has to be configured as a decoder."</span> <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(<span class="hljs-string">"Hello, my dog is cute"</span>, return_tensors=<span class="hljs-string">"pt"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model(**inputs) <span class="hljs-meta">&gt;&gt;&gt; </span>logits = outputs.logits <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># Model can also be used with EncoderDecoder framework</span> <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> BertTokenizer, EncoderDecoderModel, AutoTokenizer <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer_enc = BertTokenizer.from_pretrained(<span class="hljs-string">"bert-large-uncased"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer_dec = AutoTokenizer.from_pretrained(<span class="hljs-string">"patrickvonplaten/xprophetnet-large-uncased-standalone"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = EncoderDecoderModel.from_encoder_decoder_pretrained( <span class="hljs-meta">... </span> <span class="hljs-string">"bert-large-uncased"</span>, <span class="hljs-string">"patrickvonplaten/xprophetnet-large-uncased-standalone"</span> <span class="hljs-meta">... </span>) <span class="hljs-meta">&gt;&gt;&gt; </span>ARTICLE = ( <span class="hljs-meta">... </span> <span class="hljs-string">"the us state department said wednesday it had received no "</span> <span class="hljs-meta">... </span> <span class="hljs-string">"formal word from bolivia that it was expelling the us ambassador there "</span> <span class="hljs-meta">... </span> <span class="hljs-string">"but said the charges made against him are `` baseless ."</span> <span class="hljs-meta">... </span>) <span class="hljs-meta">&gt;&gt;&gt; </span>input_ids = tokenizer_enc(ARTICLE, return_tensors=<span class="hljs-string">"pt"</span>).input_ids <span class="hljs-meta">&gt;&gt;&gt; </span>labels = tokenizer_dec( <span class="hljs-meta">... </span> <span class="hljs-string">"us rejects charges against its ambassador in bolivia"</span>, return_tensors=<span class="hljs-string">"pt"</span> <span class="hljs-meta">... 
</span>).input_ids <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model(input_ids=input_ids, decoder_input_ids=labels[:, :-<span class="hljs-number">1</span>], labels=labels[:, <span class="hljs-number">1</span>:]) <span class="hljs-meta">&gt;&gt;&gt; </span>loss = outputs.loss</pre></div></div></div></div> <p></p> <div id="svelte-announcer" aria-live="assertive" aria-atomic="true" style="position: absolute; left: 0px; top: 0px; clip: rect(0px, 0px, 0px, 0px); clip-path: inset(50%); overflow: hidden; white-space: nowrap; width: 1px; height: 1px;"></div></div> <div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/xlm" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>XLM</a> <a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">XLM-RoBERTa<span class="ml-2 translate-y-px">→</span></a></div></div></div> <div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" data-props="{&quot;chapter&quot;:{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;xlmprophetnet&quot;,&quot;url&quot;:&quot;#xlmprophetnet&quot;,&quot;sections&quot;:[{&quot;title&quot;:&quot;Overview&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;overview&quot;,&quot;url&quot;:&quot;#overview&quot;},{&quot;title&quot;:&quot;Documentation resources&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;documentation-resources&quot;,&quot;url&quot;:&quot;#documentation-resources&quot;},{&quot;title&quot;:&quot;XLMProphetNetConfig&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.XLMProphetNetConfig&quot;,&quot;url&quot;:&quot;#transformers.XLMProphetNetConfig&quot;},{&quot;title&quot;:&quot;XLMProphetNetTokenizer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.XLMProphetNetTokenizer&quot;,&quot;url&quot;:&quot;#transformers.XLMProphetNetTokenizer&quot;},{&quot;title&quot;:&quot;XLMProphetNetModel&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.XLMProphetNetModel&quot;,&quot;url&quot;:&quot;#transformers.XLMProphetNetModel&quot;},{&quot;title&quot;:&quot;XLMProphetNetEncoder&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.XLMProphetNetEncoder&quot;,&quot;url&quot;:&quot;#transformers.XLMProphetNetEncoder&quot;},{&quot;title&quot;:&quot;XLMProphetNetDecoder&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.XLMProphetNetDecoder&quot;,&quot;url&quot;:&quot;#transformers.XLMProphetNetDecoder&quot;},{&quot;title&quot;:&quot;XLMProphetNetForConditionalGeneration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.XLMProphetNetForConditionalGeneration&quot;,&quot;url&quot;:&quot;#transformers.XLMProphetNetForConditionalGeneration&quot;},{&quot;title&quot;:&quot;XLMProphetNetForCausalLM&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.XLMProphetNetForCausalLM&quot;,&quot;url&quot;:&quot;#transformers.XLMProphetNetForCausalLM&quot;}]}}" data-target="SubSideMenu"><nav class="hidden h-screen w-[270px] flex-none flex-col space-y-3 overflow-y-auto break-words border-l pt-24 pl-6 pr-10 pb-16 text-sm lg:flex 
2xl:w-[305px]"><a href="#xlmprophetnet" class=" text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-xlmprophetnet">XL<wbr>M-<wbr>Prophet<wbr>Net</a> <a href="#overview" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-overview"><wbr>Overview</a> <a href="#documentation-resources" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-documentation-resources"><wbr>Documentation resources</a> <a href="#transformers.XLMProphetNetConfig" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.XLMProphetNetConfig">XLM<wbr>Prophet<wbr>Net<wbr>Config</a> <a href="#transformers.XLMProphetNetTokenizer" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.XLMProphetNetTokenizer">XLM<wbr>Prophet<wbr>Net<wbr>Tokenizer</a> <a href="#transformers.XLMProphetNetModel" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.XLMProphetNetModel">XLM<wbr>Prophet<wbr>Net<wbr>Model</a> <a href="#transformers.XLMProphetNetEncoder" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.XLMProphetNetEncoder">XLM<wbr>Prophet<wbr>Net<wbr>Encoder</a> <a href="#transformers.XLMProphetNetDecoder" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.XLMProphetNetDecoder">XLM<wbr>Prophet<wbr>Net<wbr>Decoder</a> <a href="#transformers.XLMProphetNetForConditionalGeneration" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.XLMProphetNetForConditionalGeneration">XLM<wbr>Prophet<wbr>Net<wbr>For<wbr>Conditional<wbr>Generation</a> <a href="#transformers.XLMProphetNetForCausalLM" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.XLMProphetNetForCausalLM">XLM<wbr>Prophet<wbr>Net<wbr>For<wbr>CausalLM</a> </nav></div></div></div> <div id="doc-footer"></div></main> </div> <script> import("/front/build/kube-5e23f38/index.js"); window.moonSha = "kube-5e23f38/"; window.hubConfig = JSON.parse(`{"features":{"signupDisabled":false},"sshGitUrl":"git@hf.co","moonHttpUrl":"https://huggingface.co","captchaApiKey":"bd5f2066-93dc-4bdd-a64b-a24646ca3859","stripePublicKey":"pk_live_x2tdjFXBCvXo2FFmMybezpeM00J6gPCAAc","environment":"production","userAgent":"HuggingFace (production)"}`); </script> <!-- Stripe --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://js.stripe.com/v3/"; script.async = true; document.head.appendChild(script); } </script> <!-- Google analytics v4 --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL"; script.async = true; document.head.appendChild(script); window.dataLayer = window.dataLayer || []; function gtag() { if (window.dataLayer !== undefined) { window.dataLayer.push(arguments); } } gtag("js", new Date()); gtag("config", "G-8Q63TH4CSL", { page_path: "/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet" }); /// ^ See 
https://developers.google.com/analytics/devguides/collection/gtagjs/pages gtag("consent", "default", { ad_storage: "denied", analytics_storage: "denied" }); /// ^ See https://developers.google.com/tag-platform/gtagjs/reference#consent /// TODO: ask the user for their consent and update this with gtag('consent', 'update') } </script> <!-- Google Analytics v3 (deprecated) --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { (function (i, s, o, g, r, a, m) { i["GoogleAnalyticsObject"] = r; (i[r] = i[r] || function () { (i[r].q = i[r].q || []).push(arguments); }), (i[r].l = 1 * new Date()); (a = s.createElement(o)), (m = s.getElementsByTagName(o)[0]); a.async = 1; a.src = g; m.parentNode.insertBefore(a, m); })(window, document, "script", "https://www.google-analytics.com/analytics.js", "ganalytics"); ganalytics("create", "UA-83738774-2", "auto"); ganalytics("send", "pageview", "/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet"); } </script> <iframe name="__privateStripeMetricsController6880" frameborder="0" allowtransparency="true" scrolling="no" role="presentation" allow="payment *" src="https://js.stripe.com/v3/m-outer-27c67c0d52761104439bb051c7856ab1.html#url=https%3A%2F%2Fhuggingface.co%2Fdocs%2Ftransformers%2Fv4.34.0%2Fen%2Fmodel_doc%2Fxlm-prophetnet&amp;title=XLM-ProphetNet&amp;referrer=&amp;muid=NA&amp;sid=NA&amp;version=6&amp;preview=false" aria-hidden="true" tabindex="-1" style="border: none !important; margin: 0px !important; padding: 0px !important; width: 1px !important; min-width: 100% !important; overflow: hidden !important; display: block !important; visibility: hidden !important; position: fixed !important; height: 1px !important; pointer-events: none !important; user-select: none !important;"></iframe></body></html>
2023-10-05T13:33:35.568Z
XLM-RoBERTa
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/xlm-roberta
# XLM-RoBERTa

[![Models](https://img.shields.io/badge/All_model_pages-xlm--roberta-blueviolet)](https://huggingface.co/models?filter=xlm-roberta) [![Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/docs-demos/xlm-roberta-base)

## Overview

The XLM-RoBERTa model was proposed in [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook’s RoBERTa model released in 2019. It is a large multilingual language model, trained on 2.5TB of filtered CommonCrawl data.

The abstract from the paper is the following:

_This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross-lingual transfer tasks. We train a Transformer-based masked language model on one hundred languages, using more than two terabytes of filtered CommonCrawl data. Our model, dubbed XLM-R, significantly outperforms multilingual BERT (mBERT) on a variety of cross-lingual benchmarks, including +13.8% average accuracy on XNLI, +12.3% average F1 score on MLQA, and +2.1% average F1 score on NER. XLM-R performs particularly well on low-resource languages, improving 11.8% in XNLI accuracy for Swahili and 9.2% for Urdu over the previous XLM model. We also present a detailed empirical evaluation of the key factors that are required to achieve these gains, including the trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource languages at scale. Finally, we show, for the first time, the possibility of multilingual modeling without sacrificing per-language performance; XLM-R is very competitive with strong monolingual models on the GLUE and XNLI benchmarks. We will make XLM-R code, data, and models publicly available._

Tips:

- XLM-RoBERTa is a multilingual model trained on 100 different languages. Unlike some XLM multilingual models, it does not require `lang` tensors to understand which language is used, and should be able to determine the correct language from the input ids.
- It uses the RoBERTa training tricks on top of the XLM approach, but does not use the translation language modeling objective. It only uses masked language modeling on sentences coming from one language.
- This implementation is the same as RoBERTa. Refer to the [documentation of RoBERTa](roberta) for usage examples as well as the information relative to the inputs and outputs.

This model was contributed by [stefan-it](https://huggingface.co/stefan-it). The original code can be found [here](https://github.com/pytorch/fairseq/tree/master/examples/xlmr).
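Because the implementation matches RoBERTa, the usual masked language modeling workflow applies unchanged. The snippet below is a minimal illustrative sketch, not part of the official examples; the `xlm-roberta-base` checkpoint and the sample sentences are placeholders chosen for this illustration:

```
from transformers import pipeline

# xlm-roberta-base is pretrained with masked language modeling only,
# so the fill-mask pipeline works out of the box in any of its 100 languages.
unmasker = pipeline("fill-mask", model="xlm-roberta-base")

# No `lang` tensor is needed: the language is inferred from the input ids.
print(unmasker("Paris is the <mask> of France.")[0]["token_str"])
print(unmasker("Paris est la <mask> de la France.")[0]["token_str"])
```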
## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with XLM-RoBERTa. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

- A blog post on how to [finetune XLM RoBERTa for multiclass classification with Habana Gaudi on AWS](https://www.philschmid.de/habana-distributed-training)
- [XLMRobertaForSequenceClassification](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaForSequenceClassification) is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb).
- [TFXLMRobertaForSequenceClassification](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.TFXLMRobertaForSequenceClassification) is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb).
- [FlaxXLMRobertaForSequenceClassification](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.FlaxXLMRobertaForSequenceClassification) is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification_flax.ipynb).
- [Text classification](https://huggingface.co/docs/transformers/tasks/sequence_classification) chapter of the 🤗 Hugging Face Task Guides.
- [Text classification task guide](../tasks/sequence_classification)
- [XLMRobertaForTokenClassification](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaForTokenClassification) is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification.ipynb).
- [TFXLMRobertaForTokenClassification](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.TFXLMRobertaForTokenClassification) is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb).
- [FlaxXLMRobertaForTokenClassification](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.FlaxXLMRobertaForTokenClassification) is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/token-classification).
- [Token classification](https://huggingface.co/course/chapter7/2?fw=pt) chapter of the 🤗 Hugging Face Course.
- [Token classification task guide](../tasks/token_classification)
- [XLMRobertaForCausalLM](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaForCausalLM) is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb).
- [Causal language modeling](https://huggingface.co/docs/transformers/tasks/language_modeling) chapter of the 🤗 Hugging Face Task Guides.
- [Causal language modeling task guide](../tasks/language_modeling)
- [XLMRobertaForMaskedLM](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaForMaskedLM) is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#robertabertdistilbert-and-masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb).
- [TFXLMRobertaForMaskedLM](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.TFXLMRobertaForMaskedLM) is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_mlmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb).
- [FlaxXLMRobertaForMaskedLM](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.FlaxXLMRobertaForMaskedLM) is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/masked_language_modeling_flax.ipynb).
- [Masked language modeling](https://huggingface.co/course/chapter7/3?fw=pt) chapter of the 🤗 Hugging Face Course.
- [Masked language modeling](../tasks/masked_language_modeling)
- [XLMRobertaForQuestionAnswering](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaForQuestionAnswering) is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb).
- [TFXLMRobertaForQuestionAnswering](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.TFXLMRobertaForQuestionAnswering) is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb).
- [FlaxXLMRobertaForQuestionAnswering](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.FlaxXLMRobertaForQuestionAnswering) is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/question-answering).
- [Question answering](https://huggingface.co/course/chapter7/7?fw=pt) chapter of the 🤗 Hugging Face Course.
- [Question answering task guide](../tasks/question_answering)

**Multiple choice**

- [XLMRobertaForMultipleChoice](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaForMultipleChoice) is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/multiple-choice) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb).
- [TFXLMRobertaForMultipleChoice](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.TFXLMRobertaForMultipleChoice) is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/multiple-choice) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb).
- [Multiple choice task guide](../tasks/multiple_choice)

🚀 Deploy

- A blog post on how to [Deploy Serverless XLM RoBERTa on AWS Lambda](https://www.philschmid.de/multilingual-serverless-xlm-roberta-with-huggingface).

## XLMRobertaConfig

### class transformers.XLMRobertaConfig

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/configuration_xlm_roberta.py#L45)

( vocab\_size = 30522, hidden\_size = 768, num\_hidden\_layers = 12, num\_attention\_heads = 12, intermediate\_size = 3072, hidden\_act = 'gelu', hidden\_dropout\_prob = 0.1, attention\_probs\_dropout\_prob = 0.1, max\_position\_embeddings = 512, type\_vocab\_size = 2, initializer\_range = 0.02, layer\_norm\_eps = 1e-12, pad\_token\_id = 1, bos\_token\_id = 0, eos\_token\_id = 2, position\_embedding\_type = 'absolute', use\_cache = True, classifier\_dropout = None, \*\*kwargs )

This is the configuration class to store the configuration of a [XLMRobertaModel](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaModel) or a [TFXLMRobertaModel](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.TFXLMRobertaModel). It is used to instantiate an XLM-RoBERTa model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a configuration similar to that of the XLM-RoBERTa [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) architecture.

Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information.

Examples:

```
>>> from transformers import XLMRobertaConfig, XLMRobertaModel

>>> # Initializing a configuration with the xlm-roberta-base defaults
>>> configuration = XLMRobertaConfig()

>>> # Initializing a model (with random weights) from that configuration
>>> model = XLMRobertaModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```

## XLMRobertaTokenizer

### class transformers.XLMRobertaTokenizer

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/tokenization_xlm_roberta.py#L63)

( vocab\_file, bos\_token = '<s>', eos\_token = '</s>', sep\_token = '</s>', cls\_token = '<s>', unk\_token = '<unk>', pad\_token = '<pad>', mask\_token = '<mask>', sp\_model\_kwargs: typing.Union\[typing.Dict\[str, typing.Any\], NoneType\] = None, \*\*kwargs )

Adapted from [RobertaTokenizer](/docs/transformers/v4.34.0/en/model_doc/roberta#transformers.RobertaTokenizer) and [XLNetTokenizer](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetTokenizer). Based on [SentencePiece](https://github.com/google/sentencepiece).

This tokenizer inherits from [PreTrainedTokenizer](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer) which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
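As a quick, unofficial illustration of how this tokenizer wraps inputs in special tokens, here is a doctest-style sketch; the `xlm-roberta-base` checkpoint and the sample sentences are placeholders chosen for this example, and it relies on the single-sequence and sequence-pair formats documented below:

```
>>> from transformers import XLMRobertaTokenizer

>>> tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")

>>> # Single sequence: <s> X </s>, where <s> = 0 and </s> = 2 (the config defaults above)
>>> ids = tokenizer("Hello world!")["input_ids"]
>>> ids[0], ids[-1]
(0, 2)

>>> # Pair of sequences: <s> A </s></s> B </s>, so three </s> tokens in total
>>> pair_ids = tokenizer("Hello world!", "How are you?")["input_ids"]
>>> pair_ids.count(2)
3
```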
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. An XLM-RoBERTa sequence has the following format: - single sequence: `<s> X </s>` - pair of sequences: `<s> A </s></s> B </s>` #### get\_special\_tokens\_mask [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/tokenization_xlm_roberta.py#L228) ( token\_ids\_0: typing.List\[int\]token\_ids\_1: typing.Optional\[typing.List\[int\]\] = Nonealready\_has\_special\_tokens: bool = False ) → `List[int]` Parameters - **token\_ids\_0** (`List[int]`) — List of IDs. - **token\_ids\_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs. - **already\_has\_special\_tokens** (`bool`, _optional_, defaults to `False`) — Whether or not the token list is already formatted with special tokens for the model. A list of integers in the range \[0, 1\]: 1 for a special token, 0 for a sequence token. Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer `prepare_for_model` method. #### create\_token\_type\_ids\_from\_sequences [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/tokenization_xlm_roberta.py#L256) ( token\_ids\_0: typing.List\[int\]token\_ids\_1: typing.Optional\[typing.List\[int\]\] = None ) → `List[int]` Parameters - **token\_ids\_0** (`List[int]`) — List of IDs. - **token\_ids\_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs. List of zeros. Create a mask from the two sequences passed to be used in a sequence-pair classification task. XLM-RoBERTa does not make use of token type ids, therefore a list of zeros is returned. #### save\_vocabulary [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/tokenization_xlm_roberta.py#L314) ( save\_directory: strfilename\_prefix: typing.Optional\[str\] = None ) ## XLMRobertaTokenizerFast ### class transformers.XLMRobertaTokenizerFast [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/tokenization_xlm_roberta_fast.py#L82) ( vocab\_file = Nonetokenizer\_file = Nonebos\_token = '<s>'eos\_token = '</s>'sep\_token = '</s>'cls\_token = '<s>'unk\_token = '<unk>'pad\_token = '<pad>'mask\_token = '<mask>'\*\*kwargs ) Construct a “fast” XLM-RoBERTa tokenizer (backed by HuggingFace’s _tokenizers_ library). Adapted from [RobertaTokenizer](/docs/transformers/v4.34.0/en/model_doc/roberta#transformers.RobertaTokenizer) and [XLNetTokenizer](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetTokenizer). Based on [BPE](https://huggingface.co/docs/tokenizers/python/latest/components.html?highlight=BPE#models). This tokenizer inherits from [PreTrainedTokenizerFast](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast) which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. #### build\_inputs\_with\_special\_tokens [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/tokenization_xlm_roberta_fast.py#L174) ( token\_ids\_0: typing.List\[int\]token\_ids\_1: typing.Optional\[typing.List\[int\]\] = None ) → `List[int]` Parameters - **token\_ids\_0** (`List[int]`) — List of IDs to which the special tokens will be added. 
- **token\_ids\_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs. List of [input IDs](../glossary#input-ids) with the appropriate special tokens. Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. An XLM-RoBERTa sequence has the following format: - single sequence: `<s> X </s>` - pair of sequences: `<s> A </s></s> B </s>` #### create\_token\_type\_ids\_from\_sequences [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/tokenization_xlm_roberta_fast.py#L200) ( token\_ids\_0: typing.List\[int\] token\_ids\_1: typing.Optional\[typing.List\[int\]\] = None ) → `List[int]` Parameters - **token\_ids\_0** (`List[int]`) — List of IDs. - **token\_ids\_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs. List of zeros. Create a mask from the two sequences passed to be used in a sequence-pair classification task. XLM-RoBERTa does not make use of token type ids, therefore a list of zeros is returned.
## XLMRobertaModel ### class transformers.XLMRobertaModel [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_xlm_roberta.py#L693) ( config add\_pooling\_layer = True ) Parameters - **config** ([XLMRobertaConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. The bare XLM-RoBERTa Model transformer outputting raw hidden-states without any specific head on top. This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of cross-attention is added between the self-attention layers, following the architecture described in [_Attention Is All You Need_](https://arxiv.org/abs/1706.03762) by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. To behave as a decoder the model needs to be initialized with the `is_decoder` argument of the configuration set to `True`. To be used in a Seq2Seq model, the model needs to be initialized with both the `is_decoder` argument and `add_cross_attention` set to `True`; an `encoder_hidden_states` tensor is then expected as an input to the forward pass.
#### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_xlm_roberta.py#L736) ( input\_ids: typing.Optional\[torch.Tensor\] = None attention\_mask: typing.Optional\[torch.Tensor\] = None token\_type\_ids: typing.Optional\[torch.Tensor\] = None position\_ids: typing.Optional\[torch.Tensor\] = None head\_mask: typing.Optional\[torch.Tensor\] = None inputs\_embeds: typing.Optional\[torch.Tensor\] = None encoder\_hidden\_states: typing.Optional\[torch.Tensor\] = None encoder\_attention\_mask: typing.Optional\[torch.Tensor\] = None past\_key\_values: typing.Optional\[typing.List\[torch.FloatTensor\]\] = None use\_cache: typing.Optional\[bool\] = None output\_attentions: typing.Optional\[bool\] = None output\_hidden\_states: typing.Optional\[bool\] = None return\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.BaseModelOutputWithPoolingAndCrossAttentions](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions) or `tuple(torch.FloatTensor)` The [XLMRobertaModel](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaModel) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: ``` >>> from transformers import AutoTokenizer, XLMRobertaModel >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base") >>> model = XLMRobertaModel.from_pretrained("xlm-roberta-base") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state ```
## XLMRobertaForCausalLM ### class transformers.XLMRobertaForCausalLM [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_xlm_roberta.py#L879) ( config ) Parameters - **config** ([XLMRobertaConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. XLM-RoBERTa Model with a `language modeling` head on top for CLM fine-tuning. This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
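The `forward()` documentation below shows how to obtain language-modeling logits. For free-running text generation the same class can also be driven through `generate()`; a minimal sketch, assuming the `xlm-roberta-base` checkpoint (which is not fine-tuned for causal generation, so the decoded text is only a smoke test):

```
>>> from transformers import AutoTokenizer, AutoConfig, XLMRobertaForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
>>> config = AutoConfig.from_pretrained("xlm-roberta-base")
>>> config.is_decoder = True  # required so the model runs with a causal attention mask
>>> model = XLMRobertaForCausalLM.from_pretrained("xlm-roberta-base", config=config)

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> # greedy decoding of a few new tokens
>>> generated_ids = model.generate(**inputs, max_new_tokens=10, do_sample=False)
>>> generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
```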
#### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_xlm_roberta.py#L900) ( input\_ids: typing.Optional\[torch.LongTensor\] = Noneattention\_mask: typing.Optional\[torch.FloatTensor\] = Nonetoken\_type\_ids: typing.Optional\[torch.LongTensor\] = Noneposition\_ids: typing.Optional\[torch.LongTensor\] = Nonehead\_mask: typing.Optional\[torch.FloatTensor\] = Noneinputs\_embeds: typing.Optional\[torch.FloatTensor\] = Noneencoder\_hidden\_states: typing.Optional\[torch.FloatTensor\] = Noneencoder\_attention\_mask: typing.Optional\[torch.FloatTensor\] = Nonelabels: typing.Optional\[torch.LongTensor\] = Nonepast\_key\_values: typing.Tuple\[typing.Tuple\[torch.FloatTensor\]\] = Noneuse\_cache: typing.Optional\[bool\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.CausalLMOutputWithCrossAttentions](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.CausalLMOutputWithCrossAttentions) or `tuple(torch.FloatTensor)` The [XLMRobertaForCausalLM](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaForCausalLM) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: ``` >>> from transformers import AutoTokenizer, XLMRobertaForCausalLM, AutoConfig >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("roberta-base") >>> config = AutoConfig.from_pretrained("roberta-base") >>> config.is_decoder = True >>> model = XLMRobertaForCausalLM.from_pretrained("roberta-base", config=config) >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> prediction_logits = outputs.logits ``` ## XLMRobertaForMaskedLM ### class transformers.XLMRobertaForMaskedLM [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_xlm_roberta.py#L1034) ( config ) Parameters - **config** ([XLMRobertaConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. XLM-RoBERTa Model with a `language modeling` head on top. This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
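Before dropping down to the `forward()` API documented below, a masked-language-modeling checkpoint can also be exercised through the higher-level `fill-mask` pipeline. A minimal sketch, assuming the `xlm-roberta-base` checkpoint:

```
>>> from transformers import pipeline

>>> unmasker = pipeline("fill-mask", model="xlm-roberta-base")
>>> # returns the highest-scoring completions for the <mask> token
>>> unmasker("The capital of France is <mask>.")
```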
#### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_xlm_roberta.py#L1058) ( input\_ids: typing.Optional\[torch.LongTensor\] = Noneattention\_mask: typing.Optional\[torch.FloatTensor\] = Nonetoken\_type\_ids: typing.Optional\[torch.LongTensor\] = Noneposition\_ids: typing.Optional\[torch.LongTensor\] = Nonehead\_mask: typing.Optional\[torch.FloatTensor\] = Noneinputs\_embeds: typing.Optional\[torch.FloatTensor\] = Noneencoder\_hidden\_states: typing.Optional\[torch.FloatTensor\] = Noneencoder\_attention\_mask: typing.Optional\[torch.FloatTensor\] = Nonelabels: typing.Optional\[torch.LongTensor\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.MaskedLMOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.MaskedLMOutput) or `tuple(torch.FloatTensor)` The [XLMRobertaForMaskedLM](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaForMaskedLM) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: ``` >>> from transformers import AutoTokenizer, XLMRobertaForMaskedLM >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base") >>> model = XLMRobertaForMaskedLM.from_pretrained("xlm-roberta-base") >>> inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> >>> mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0] >>> predicted_token_id = logits[0, mask_token_index].argmax(axis=-1) >>> tokenizer.decode(predicted_token_id) ' Paris' >>> labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"] >>> >>> labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100) >>> outputs = model(**inputs, labels=labels) >>> round(outputs.loss.item(), 2) 0.1 ``` ## XLMRobertaForSequenceClassification ### class transformers.XLMRobertaForSequenceClassification [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_xlm_roberta.py#L1167) ( config ) Parameters - **config** ([XLMRobertaConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. XLM-RoBERTa Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) 
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_xlm_roberta.py#L1179) ( input\_ids: typing.Optional\[torch.LongTensor\] = Noneattention\_mask: typing.Optional\[torch.FloatTensor\] = Nonetoken\_type\_ids: typing.Optional\[torch.LongTensor\] = Noneposition\_ids: typing.Optional\[torch.LongTensor\] = Nonehead\_mask: typing.Optional\[torch.FloatTensor\] = Noneinputs\_embeds: typing.Optional\[torch.FloatTensor\] = Nonelabels: typing.Optional\[torch.LongTensor\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.SequenceClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput) or `tuple(torch.FloatTensor)` The [XLMRobertaForSequenceClassification](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaForSequenceClassification) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example of single-label classification: ``` >>> import torch >>> from transformers import AutoTokenizer, XLMRobertaForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("cardiffnlp/twitter-roberta-base-emotion") >>> model = XLMRobertaForSequenceClassification.from_pretrained("cardiffnlp/twitter-roberta-base-emotion") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_class_id = logits.argmax().item() >>> model.config.id2label[predicted_class_id] 'optimism' >>> >>> num_labels = len(model.config.id2label) >>> model = XLMRobertaForSequenceClassification.from_pretrained("cardiffnlp/twitter-roberta-base-emotion", num_labels=num_labels) >>> labels = torch.tensor([1]) >>> loss = model(**inputs, labels=labels).loss >>> round(loss.item(), 2) 0.08 ``` Example of multi-label classification: ``` >>> import torch >>> from transformers import AutoTokenizer, XLMRobertaForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("cardiffnlp/twitter-roberta-base-emotion") >>> model = XLMRobertaForSequenceClassification.from_pretrained("cardiffnlp/twitter-roberta-base-emotion", problem_type="multi_label_classification") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5] >>> >>> num_labels = len(model.config.id2label) >>> model = XLMRobertaForSequenceClassification.from_pretrained( ... "cardiffnlp/twitter-roberta-base-emotion", num_labels=num_labels, problem_type="multi_label_classification" ... ) >>> labels = torch.sum( ... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1 ... 
).to(torch.float) >>> loss = model(**inputs, labels=labels).loss ``` ## XLMRobertaForMultipleChoice ### class transformers.XLMRobertaForMultipleChoice [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_xlm_roberta.py#L1267) ( config ) Parameters - **config** ([XLMRobertaConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. XLM-RoBERTa Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_xlm_roberta.py#L1278) ( input\_ids: typing.Optional\[torch.LongTensor\] = Nonetoken\_type\_ids: typing.Optional\[torch.LongTensor\] = Noneattention\_mask: typing.Optional\[torch.FloatTensor\] = Nonelabels: typing.Optional\[torch.LongTensor\] = Noneposition\_ids: typing.Optional\[torch.LongTensor\] = Nonehead\_mask: typing.Optional\[torch.FloatTensor\] = Noneinputs\_embeds: typing.Optional\[torch.FloatTensor\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.MultipleChoiceModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.MultipleChoiceModelOutput) or `tuple(torch.FloatTensor)` The [XLMRobertaForMultipleChoice](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaForMultipleChoice) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: ``` >>> from transformers import AutoTokenizer, XLMRobertaForMultipleChoice >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base") >>> model = XLMRobertaForMultipleChoice.from_pretrained("xlm-roberta-base") >>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced." >>> choice0 = "It is eaten with a fork and a knife." >>> choice1 = "It is eaten while held in the hand." 
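>>> # batch size 1: the label is the index of the correct choice, here choice0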
>>> labels = torch.tensor(0).unsqueeze(0) >>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True) >>> outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) >>> >>> loss = outputs.loss >>> logits = outputs.logits ``` ## XLMRobertaForTokenClassification ### class transformers.XLMRobertaForTokenClassification [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_xlm_roberta.py#L1362) ( config ) Parameters - **config** ([XLMRobertaConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. XLM-RoBERTa Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_xlm_roberta.py#L1377) ( input\_ids: typing.Optional\[torch.LongTensor\] = Noneattention\_mask: typing.Optional\[torch.FloatTensor\] = Nonetoken\_type\_ids: typing.Optional\[torch.LongTensor\] = Noneposition\_ids: typing.Optional\[torch.LongTensor\] = Nonehead\_mask: typing.Optional\[torch.FloatTensor\] = Noneinputs\_embeds: typing.Optional\[torch.FloatTensor\] = Nonelabels: typing.Optional\[torch.LongTensor\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.TokenClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.TokenClassifierOutput) or `tuple(torch.FloatTensor)` The [XLMRobertaForTokenClassification](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaForTokenClassification) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: ``` >>> from transformers import AutoTokenizer, XLMRobertaForTokenClassification >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("Jean-Baptiste/roberta-large-ner-english") >>> model = XLMRobertaForTokenClassification.from_pretrained("Jean-Baptiste/roberta-large-ner-english") >>> inputs = tokenizer( ... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt" ... ) >>> with torch.no_grad(): ... 
logits = model(**inputs).logits >>> predicted_token_class_ids = logits.argmax(-1) >>> # Note that tokens are classified rather than input words, which means that there might be >>> # more predicted token classes than words; multiple token classes might account for the same word >>> predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]] >>> predicted_tokens_classes ['O', 'ORG', 'ORG', 'O', 'O', 'O', 'O', 'O', 'LOC', 'O', 'LOC', 'LOC'] >>> labels = predicted_token_class_ids >>> loss = model(**inputs, labels=labels).loss >>> round(loss.item(), 2) 0.01 ```
## XLMRobertaForQuestionAnswering ### class transformers.XLMRobertaForQuestionAnswering [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_xlm_roberta.py#L1471) ( config ) Parameters - **config** ([XLMRobertaConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. XLM-RoBERTa Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers on top of the hidden-states output to compute `span start logits` and `span end logits`). This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_xlm_roberta.py#L1482) ( input\_ids: typing.Optional\[torch.LongTensor\] = None attention\_mask: typing.Optional\[torch.FloatTensor\] = None token\_type\_ids: typing.Optional\[torch.LongTensor\] = None position\_ids: typing.Optional\[torch.LongTensor\] = None head\_mask: typing.Optional\[torch.FloatTensor\] = None inputs\_embeds: typing.Optional\[torch.FloatTensor\] = None start\_positions: typing.Optional\[torch.LongTensor\] = None end\_positions: typing.Optional\[torch.LongTensor\] = None output\_attentions: typing.Optional\[bool\] = None output\_hidden\_states: typing.Optional\[bool\] = None return\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.QuestionAnsweringModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.QuestionAnsweringModelOutput) or `tuple(torch.FloatTensor)` The [XLMRobertaForQuestionAnswering](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaForQuestionAnswering) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example: ``` >>> from transformers import AutoTokenizer, XLMRobertaForQuestionAnswering >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("deepset/roberta-base-squad2") >>> model = XLMRobertaForQuestionAnswering.from_pretrained("deepset/roberta-base-squad2") >>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" >>> inputs = tokenizer(question, text, return_tensors="pt") >>> with torch.no_grad(): ... outputs = model(**inputs) >>> answer_start_index = outputs.start_logits.argmax() >>> answer_end_index = outputs.end_logits.argmax() >>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1] >>> tokenizer.decode(predict_answer_tokens, skip_special_tokens=True) ' puppet' >>> >>> target_start_index = torch.tensor([14]) >>> target_end_index = torch.tensor([15]) >>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index) >>> loss = outputs.loss >>> round(loss.item(), 2) 0.86 ``` ## TFXLMRobertaModel ### class transformers.TFXLMRobertaModel [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py#L875) ( \*args\*\*kwargs ) Parameters - **config** ([XLMRobertaConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. The bare XLM RoBERTa Model transformer outputting raw hidden-states without any specific head on top. This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in `transformers` accept two formats as input: - having all inputs as keyword arguments (like PyTorch models), or - having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like `model.fit()` things should “just work” for you - just pass your inputs and labels in any format that `model.fit()` supports! 
If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: - a single Tensor with `input_ids` only and nothing else: `model(input_ids)` - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])` - a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_ids": input_ids, "token_type_ids": token_type_ids})` Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! #### call [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py#L880) ( input\_ids: TFModelInputType | None = Noneattention\_mask: np.ndarray | tf.Tensor | None = Nonetoken\_type\_ids: np.ndarray | tf.Tensor | None = Noneposition\_ids: np.ndarray | tf.Tensor | None = Nonehead\_mask: np.ndarray | tf.Tensor | None = Noneinputs\_embeds: np.ndarray | tf.Tensor | None = Noneencoder\_hidden\_states: np.ndarray | tf.Tensor | None = Noneencoder\_attention\_mask: np.ndarray | tf.Tensor | None = Nonepast\_key\_values: Optional\[Tuple\[Tuple\[Union\[np.ndarray, tf.Tensor\]\]\]\] = Noneuse\_cache: Optional\[bool\] = Noneoutput\_attentions: Optional\[bool\] = Noneoutput\_hidden\_states: Optional\[bool\] = Nonereturn\_dict: Optional\[bool\] = Nonetraining: Optional\[bool\] = False ) → [transformers.modeling\_tf\_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions) or `tuple(tf.Tensor)` The [TFXLMRobertaModel](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.TFXLMRobertaModel) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: ``` >>> from transformers import AutoTokenizer, TFXLMRobertaModel >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base") >>> model = TFXLMRobertaModel.from_pretrained("xlm-roberta-base") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf") >>> outputs = model(inputs) >>> last_hidden_states = outputs.last_hidden_state ``` ## TFXLMRobertaForCausalLM ### class transformers.TFXLMRobertaForCausalLM [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py#L1081) ( \*args\*\*kwargs ) Parameters - **config** ([XLMRobertaConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. 
Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. XLM-RoBERTa Model with a `language modeling` head on top for CLM fine-tuning. This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in `transformers` accept two formats as input: - having all inputs as keyword arguments (like PyTorch models), or - having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like `model.fit()` things should “just work” for you - just pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: - a single Tensor with `input_ids` only and nothing else: `model(input_ids)` - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])` - a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_ids": input_ids, "token_type_ids": token_type_ids})` Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! 
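As a concrete illustration of the three options above, a minimal sketch (assuming the `xlm-roberta-base` checkpoint; the same pattern applies to every TF class on this page):

```
>>> from transformers import AutoTokenizer, TFXLMRobertaForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
>>> model = TFXLMRobertaForCausalLM.from_pretrained("xlm-roberta-base")
>>> encoding = tokenizer("Hello, my dog is cute", return_tensors="tf")

>>> # 1. keyword arguments, as with PyTorch models
>>> outputs = model(input_ids=encoding["input_ids"], attention_mask=encoding["attention_mask"])

>>> # 2. a list of tensors, in the order given in the docstring
>>> outputs = model([encoding["input_ids"], encoding["attention_mask"]])

>>> # 3. a dictionary keyed by input name (the format Keras methods such as fit() pass along)
>>> outputs = model({"input_ids": encoding["input_ids"], "attention_mask": encoding["attention_mask"]})
```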
#### call [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py#L1114) ( input\_ids: TFModelInputType | None = Noneattention\_mask: np.ndarray | tf.Tensor | None = Nonetoken\_type\_ids: np.ndarray | tf.Tensor | None = Noneposition\_ids: np.ndarray | tf.Tensor | None = Nonehead\_mask: np.ndarray | tf.Tensor | None = Noneinputs\_embeds: np.ndarray | tf.Tensor | None = Noneencoder\_hidden\_states: np.ndarray | tf.Tensor | None = Noneencoder\_attention\_mask: np.ndarray | tf.Tensor | None = Nonepast\_key\_values: Optional\[Tuple\[Tuple\[Union\[np.ndarray, tf.Tensor\]\]\]\] = Noneuse\_cache: Optional\[bool\] = Noneoutput\_attentions: Optional\[bool\] = Noneoutput\_hidden\_states: Optional\[bool\] = Nonereturn\_dict: Optional\[bool\] = Nonelabels: np.ndarray | tf.Tensor | None = Nonetraining: Optional\[bool\] = False ) → [transformers.modeling\_tf\_outputs.TFCausalLMOutputWithCrossAttentions](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions) or `tuple(tf.Tensor)` The [TFXLMRobertaForCausalLM](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.TFXLMRobertaForCausalLM) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: ``` >>> from transformers import AutoTokenizer, TFXLMRobertaForCausalLM >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base") >>> model = TFXLMRobertaForCausalLM.from_pretrained("xlm-roberta-base") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf") >>> outputs = model(inputs) >>> logits = outputs.logits ``` ## TFXLMRobertaForMaskedLM ### class transformers.TFXLMRobertaForMaskedLM [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py#L999) ( \*args\*\*kwargs ) Parameters - **config** ([XLMRobertaConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. XLM RoBERTa Model with a `language modeling` head on top. This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in `transformers` accept two formats as input: - having all inputs as keyword arguments (like PyTorch models), or - having all inputs as a list, tuple or dict in the first positional argument. 
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like `model.fit()` things should “just work” for you - just pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: - a single Tensor with `input_ids` only and nothing else: `model(input_ids)` - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])` - a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_ids": input_ids, "token_type_ids": token_type_ids})` Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! #### call [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py#L1016) ( input\_ids: TFModelInputType | None = Noneattention\_mask: np.ndarray | tf.Tensor | None = Nonetoken\_type\_ids: np.ndarray | tf.Tensor | None = Noneposition\_ids: np.ndarray | tf.Tensor | None = Nonehead\_mask: np.ndarray | tf.Tensor | None = Noneinputs\_embeds: np.ndarray | tf.Tensor | None = Noneoutput\_attentions: Optional\[bool\] = Noneoutput\_hidden\_states: Optional\[bool\] = Nonereturn\_dict: Optional\[bool\] = Nonelabels: np.ndarray | tf.Tensor | None = Nonetraining: Optional\[bool\] = False ) → [transformers.modeling\_tf\_outputs.TFMaskedLMOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFMaskedLMOutput) or `tuple(tf.Tensor)` The [TFXLMRobertaForMaskedLM](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.TFXLMRobertaForMaskedLM) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example: ``` >>> from transformers import AutoTokenizer, TFXLMRobertaForMaskedLM >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base") >>> model = TFXLMRobertaForMaskedLM.from_pretrained("xlm-roberta-base") >>> inputs = tokenizer("The capital of France is <mask>.", return_tensors="tf") >>> logits = model(**inputs).logits >>> >>> mask_token_index = tf.where((inputs.input_ids == tokenizer.mask_token_id)[0]) >>> selected_logits = tf.gather_nd(logits[0], indices=mask_token_index) >>> predicted_token_id = tf.math.argmax(selected_logits, axis=-1) >>> tokenizer.decode(predicted_token_id) ' Paris' ``` ``` >>> labels = tokenizer("The capital of France is Paris.", return_tensors="tf")["input_ids"] >>> >>> labels = tf.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100) >>> outputs = model(**inputs, labels=labels) >>> round(float(outputs.loss), 2) 0.1 ``` ## TFXLMRobertaForSequenceClassification ### class transformers.TFXLMRobertaForSequenceClassification [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py#L1240) ( \*args\*\*kwargs ) Parameters - **config** ([XLMRobertaConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. XLM RoBERTa Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in `transformers` accept two formats as input: - having all inputs as keyword arguments (like PyTorch models), or - having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like `model.fit()` things should “just work” for you - just pass your inputs and labels in any format that `model.fit()` supports! 
If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: - a single Tensor with `input_ids` only and nothing else: `model(input_ids)` - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])` - a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_ids": input_ids, "token_type_ids": token_type_ids})` Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! #### call [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py#L1251) ( input\_ids: TFModelInputType | None = Noneattention\_mask: np.ndarray | tf.Tensor | None = Nonetoken\_type\_ids: np.ndarray | tf.Tensor | None = Noneposition\_ids: np.ndarray | tf.Tensor | None = Nonehead\_mask: np.ndarray | tf.Tensor | None = Noneinputs\_embeds: np.ndarray | tf.Tensor | None = Noneoutput\_attentions: Optional\[bool\] = Noneoutput\_hidden\_states: Optional\[bool\] = Nonereturn\_dict: Optional\[bool\] = Nonelabels: np.ndarray | tf.Tensor | None = Nonetraining: Optional\[bool\] = False ) → [transformers.modeling\_tf\_outputs.TFSequenceClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFSequenceClassifierOutput) or `tuple(tf.Tensor)` The [TFXLMRobertaForSequenceClassification](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.TFXLMRobertaForSequenceClassification) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example: ``` >>> from transformers import AutoTokenizer, TFXLMRobertaForSequenceClassification >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("cardiffnlp/twitter-roberta-base-emotion") >>> model = TFXLMRobertaForSequenceClassification.from_pretrained("cardiffnlp/twitter-roberta-base-emotion") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf") >>> logits = model(**inputs).logits >>> predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0]) >>> model.config.id2label[predicted_class_id] 'optimism' ``` ``` >>> >>> num_labels = len(model.config.id2label) >>> model = TFXLMRobertaForSequenceClassification.from_pretrained("cardiffnlp/twitter-roberta-base-emotion", num_labels=num_labels) >>> labels = tf.constant(1) >>> loss = model(**inputs, labels=labels).loss >>> round(float(loss), 2) 0.08 ``` ## TFXLMRobertaForMultipleChoice ### class transformers.TFXLMRobertaForMultipleChoice [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py#L1317) ( \*args\*\*kwargs ) Parameters - **config** ([XLMRobertaConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. XLM Roberta Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in `transformers` accept two formats as input: - having all inputs as keyword arguments (like PyTorch models), or - having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like `model.fit()` things should “just work” for you - just pass your inputs and labels in any format that `model.fit()` supports! 
If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: - a single Tensor with `input_ids` only and nothing else: `model(input_ids)` - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])` - a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_ids": input_ids, "token_type_ids": token_type_ids})` Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! #### call [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py#L1331) ( input\_ids: TFModelInputType | None = Noneattention\_mask: np.ndarray | tf.Tensor | None = Nonetoken\_type\_ids: np.ndarray | tf.Tensor | None = Noneposition\_ids: np.ndarray | tf.Tensor | None = Nonehead\_mask: np.ndarray | tf.Tensor | None = Noneinputs\_embeds: np.ndarray | tf.Tensor | None = Noneoutput\_attentions: Optional\[bool\] = Noneoutput\_hidden\_states: Optional\[bool\] = Nonereturn\_dict: Optional\[bool\] = Nonelabels: np.ndarray | tf.Tensor | None = Nonetraining: Optional\[bool\] = False ) → [transformers.modeling\_tf\_outputs.TFMultipleChoiceModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput) or `tuple(tf.Tensor)` The [TFXLMRobertaForMultipleChoice](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.TFXLMRobertaForMultipleChoice) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: ``` >>> from transformers import AutoTokenizer, TFXLMRobertaForMultipleChoice >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base") >>> model = TFXLMRobertaForMultipleChoice.from_pretrained("xlm-roberta-base") >>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced." >>> choice0 = "It is eaten with a fork and a knife." >>> choice1 = "It is eaten while held in the hand." >>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="tf", padding=True) >>> inputs = {k: tf.expand_dims(v, 0) for k, v in encoding.items()} >>> outputs = model(inputs) >>> >>> logits = outputs.logits ``` ## TFXLMRobertaForTokenClassification ### class transformers.TFXLMRobertaForTokenClassification [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py#L1410) ( \*args\*\*kwargs ) Parameters - **config** ([XLMRobertaConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. 
Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. XLM RoBERTa Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in `transformers` accept two formats as input: - having all inputs as keyword arguments (like PyTorch models), or - having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like `model.fit()` things should “just work” for you - just pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: - a single Tensor with `input_ids` only and nothing else: `model(input_ids)` - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])` - a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_ids": input_ids, "token_type_ids": token_type_ids})` Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! #### call [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py#L1428) ( input\_ids: TFModelInputType | None = Noneattention\_mask: np.ndarray | tf.Tensor | None = Nonetoken\_type\_ids: np.ndarray | tf.Tensor | None = Noneposition\_ids: np.ndarray | tf.Tensor | None = Nonehead\_mask: np.ndarray | tf.Tensor | None = Noneinputs\_embeds: np.ndarray | tf.Tensor | None = Noneoutput\_attentions: Optional\[bool\] = Noneoutput\_hidden\_states: Optional\[bool\] = Nonereturn\_dict: Optional\[bool\] = Nonelabels: np.ndarray | tf.Tensor | None = Nonetraining: Optional\[bool\] = False ) → [transformers.modeling\_tf\_outputs.TFTokenClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFTokenClassifierOutput) or `tuple(tf.Tensor)` The [TFXLMRobertaForTokenClassification](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.TFXLMRobertaForTokenClassification) forward method, overrides the `__call__` special method. 
Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

```
>>> from transformers import AutoTokenizer, TFXLMRobertaForTokenClassification
>>> import tensorflow as tf

>>> tokenizer = AutoTokenizer.from_pretrained("ydshieh/roberta-large-ner-english")
>>> model = TFXLMRobertaForTokenClassification.from_pretrained("ydshieh/roberta-large-ner-english")

>>> inputs = tokenizer(
...     "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="tf"
... )

>>> logits = model(**inputs).logits
>>> predicted_token_class_ids = tf.math.argmax(logits, axis=-1)

>>> # the model classifies tokens rather than words, so there may be more predicted
>>> # token classes than words, and several tokens can map to the same word
>>> predicted_tokens_classes = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()]
>>> predicted_tokens_classes
['O', 'ORG', 'ORG', 'O', 'O', 'O', 'O', 'O', 'LOC', 'O', 'LOC', 'LOC']
```

```
>>> labels = predicted_token_class_ids
>>> loss = tf.math.reduce_mean(model(**inputs, labels=labels).loss)
>>> round(float(loss), 2)
0.01
```

## TFXLMRobertaForQuestionAnswering

### class transformers.TFXLMRobertaForQuestionAnswering

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py#L1494)

( \*args, \*\*kwargs )

Parameters

- **config** ([XLMRobertaConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration.

Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

XLM RoBERTa Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute `span start logits` and `span end logits`).

This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

TensorFlow models and layers in `transformers` accept two formats as input:

- having all inputs as keyword arguments (like PyTorch models), or
- having all inputs as a list, tuple or dict in the first positional argument.

The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like `model.fit()` things should “just work” for you - just pass your inputs and labels in any format that `model.fit()` supports!
If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

- a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
- a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
- a dictionary with one or several input Tensors associated with the input names given in the docstring: `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`

Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function!

#### call

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py#L1507)

( input\_ids: TFModelInputType | None = None, attention\_mask: np.ndarray | tf.Tensor | None = None, token\_type\_ids: np.ndarray | tf.Tensor | None = None, position\_ids: np.ndarray | tf.Tensor | None = None, head\_mask: np.ndarray | tf.Tensor | None = None, inputs\_embeds: np.ndarray | tf.Tensor | None = None, output\_attentions: Optional\[bool\] = None, output\_hidden\_states: Optional\[bool\] = None, return\_dict: Optional\[bool\] = None, start\_positions: np.ndarray | tf.Tensor | None = None, end\_positions: np.ndarray | tf.Tensor | None = None, training: Optional\[bool\] = False ) → [transformers.modeling\_tf\_outputs.TFQuestionAnsweringModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput) or `tuple(tf.Tensor)`

The [TFXLMRobertaForQuestionAnswering](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.TFXLMRobertaForQuestionAnswering) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:

```
>>> from transformers import AutoTokenizer, TFXLMRobertaForQuestionAnswering
>>> import tensorflow as tf

>>> tokenizer = AutoTokenizer.from_pretrained("ydshieh/roberta-base-squad2")
>>> model = TFXLMRobertaForQuestionAnswering.from_pretrained("ydshieh/roberta-base-squad2")

>>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"

>>> inputs = tokenizer(question, text, return_tensors="tf")
>>> outputs = model(**inputs)

>>> answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
>>> answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])

>>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
>>> tokenizer.decode(predict_answer_tokens)
' puppet'
```

```
>>> # provide the target answer span as start/end token positions to compute the loss
>>> target_start_index = tf.constant([14])
>>> target_end_index = tf.constant([15])

>>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
>>> loss = tf.math.reduce_mean(outputs.loss)
>>> round(float(loss), 2)
0.86
```

## FlaxXLMRobertaModel

### class transformers.FlaxXLMRobertaModel

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_flax_xlm_roberta.py#L1003)

( config: XLMRobertaConfig, input\_shape: typing.Tuple = (1, 1), seed: int = 0, dtype: dtype = <class 'jax.numpy.float32'>, \_do\_init: bool = True, gradient\_checkpointing: bool = False, \*\*kwargs )

Parameters

- **config** ([XLMRobertaConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained) method to load the model weights.

The bare XLM RoBERTa Model transformer outputting raw hidden-states without any specific head on top.

This model inherits from [FlaxPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models).

This model is also a Flax Linen [flax.linen.Module](https://flax.readthedocs.io/en/latest/flax.linen.html#module) subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:

- [Just-In-Time (JIT) compilation](https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit)
- [Automatic Differentiation](https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation)
- [Vectorization](https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap)
- [Parallelization](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap)

#### \_\_call\_\_

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_flax_xlm_roberta.py#L829)

( input\_ids, attention\_mask = None, token\_type\_ids = None, position\_ids = None, head\_mask = None, encoder\_hidden\_states = None, encoder\_attention\_mask = None, params: dict = None, dropout\_rng: PRNGKey = None, train: bool = False, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None, past\_key\_values: dict = None ) → [transformers.modeling\_flax\_outputs.FlaxBaseModelOutputWithPooling](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling) or `tuple(jnp.ndarray)`

The `FlaxXLMRobertaPreTrainedModel` forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

```
>>> from transformers import AutoTokenizer, FlaxXLMRobertaModel

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
>>> model = FlaxXLMRobertaModel.from_pretrained("xlm-roberta-base")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
>>> outputs = model(**inputs)

>>> last_hidden_states = outputs.last_hidden_state
```

## FlaxXLMRobertaForCausalLM

### class transformers.FlaxXLMRobertaForCausalLM

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_flax_xlm_roberta.py#L1469)

( config: XLMRobertaConfig, input\_shape: typing.Tuple = (1, 1), seed: int = 0, dtype: dtype = <class 'jax.numpy.float32'>, \_do\_init: bool = True, gradient\_checkpointing: bool = False, \*\*kwargs )

Parameters

- **config** ([XLMRobertaConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained) method to load the model weights.

XLM Roberta Model with a language modeling head on top (a linear layer on top of the hidden-states output) e.g. for autoregressive tasks.

This model inherits from [FlaxPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models).

This model is also a Flax Linen [flax.linen.Module](https://flax.readthedocs.io/en/latest/flax.linen.html#module) subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:

- [Just-In-Time (JIT) compilation](https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit)
- [Automatic Differentiation](https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation)
- [Vectorization](https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap)
- [Parallelization](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap)

#### \_\_call\_\_

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_flax_xlm_roberta.py#L829)

( input\_ids, attention\_mask = None, token\_type\_ids = None, position\_ids = None, head\_mask = None, encoder\_hidden\_states = None, encoder\_attention\_mask = None, params: dict = None, dropout\_rng: PRNGKey = None, train: bool = False, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None, past\_key\_values: dict = None ) → [transformers.modeling\_flax\_outputs.FlaxCausalLMOutputWithCrossAttentions](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions) or `tuple(jnp.ndarray)`

The `FlaxXLMRobertaPreTrainedModel` forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

```
>>> from transformers import AutoTokenizer, FlaxXLMRobertaForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
>>> model = FlaxXLMRobertaForCausalLM.from_pretrained("xlm-roberta-base")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="np")
>>> outputs = model(**inputs)

>>> # retrieve the logits for the next token
>>> next_token_logits = outputs.logits[:, -1]
```

## FlaxXLMRobertaForMaskedLM

### class transformers.FlaxXLMRobertaForMaskedLM

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_flax_xlm_roberta.py#L1070)

( config: XLMRobertaConfig, input\_shape: typing.Tuple = (1, 1), seed: int = 0, dtype: dtype = <class 'jax.numpy.float32'>, \_do\_init: bool = True, gradient\_checkpointing: bool = False, \*\*kwargs )

Parameters

- **config** ([XLMRobertaConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained) method to load the model weights.

XLM RoBERTa Model with a `language modeling` head on top.

This model inherits from [FlaxPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models).

This model is also a Flax Linen [flax.linen.Module](https://flax.readthedocs.io/en/latest/flax.linen.html#module) subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:

- [Just-In-Time (JIT) compilation](https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit)
- [Automatic Differentiation](https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation)
- [Vectorization](https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap)
- [Parallelization](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap)

#### \_\_call\_\_

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_flax_xlm_roberta.py#L829)

( input\_ids, attention\_mask = None, token\_type\_ids = None, position\_ids = None, head\_mask = None, encoder\_hidden\_states = None, encoder\_attention\_mask = None, params: dict = None, dropout\_rng: PRNGKey = None, train: bool = False, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None, past\_key\_values: dict = None ) → [transformers.modeling\_flax\_outputs.FlaxBaseModelOutputWithPooling](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling) or `tuple(jnp.ndarray)`

The `FlaxXLMRobertaPreTrainedModel` forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

```
>>> from transformers import AutoTokenizer, FlaxXLMRobertaForMaskedLM

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
>>> model = FlaxXLMRobertaForMaskedLM.from_pretrained("xlm-roberta-base")

>>> # XLM-RoBERTa uses <mask> as its mask token
>>> inputs = tokenizer("The capital of France is <mask>.", return_tensors="jax")
>>> outputs = model(**inputs)

>>> logits = outputs.logits
```

## FlaxXLMRobertaForSequenceClassification

### class transformers.FlaxXLMRobertaForSequenceClassification

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_flax_xlm_roberta.py#L1143)

( config: XLMRobertaConfig, input\_shape: typing.Tuple = (1, 1), seed: int = 0, dtype: dtype = <class 'jax.numpy.float32'>, \_do\_init: bool = True, gradient\_checkpointing: bool = False, \*\*kwargs )

Parameters

- **config** ([XLMRobertaConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained) method to load the model weights.

XLM Roberta Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks.

This model inherits from [FlaxPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models).

This model is also a Flax Linen [flax.linen.Module](https://flax.readthedocs.io/en/latest/flax.linen.html#module) subclass.
Use it as a regular Flax linen Module and refer to the Flax documentation for all matters related to general usage and behavior.

Finally, this model supports inherent JAX features such as:

- [Just-In-Time (JIT) compilation](https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit)
- [Automatic Differentiation](https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation)
- [Vectorization](https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap)
- [Parallelization](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap)

#### \_\_call\_\_

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_flax_xlm_roberta.py#L829)

( input\_ids, attention\_mask = None, token\_type\_ids = None, position\_ids = None, head\_mask = None, encoder\_hidden\_states = None, encoder\_attention\_mask = None, params: dict = None, dropout\_rng: PRNGKey = None, train: bool = False, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None, past\_key\_values: dict = None ) → [transformers.modeling\_flax\_outputs.FlaxSequenceClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput) or `tuple(jnp.ndarray)`

The `FlaxXLMRobertaPreTrainedModel` forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

```
>>> from transformers import AutoTokenizer, FlaxXLMRobertaForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
>>> model = FlaxXLMRobertaForSequenceClassification.from_pretrained("xlm-roberta-base")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
>>> outputs = model(**inputs)

>>> logits = outputs.logits
```

## FlaxXLMRobertaForMultipleChoice

### class transformers.FlaxXLMRobertaForMultipleChoice

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_flax_xlm_roberta.py#L1224)

( config: XLMRobertaConfig, input\_shape: typing.Tuple = (1, 1), seed: int = 0, dtype: dtype = <class 'jax.numpy.float32'>, \_do\_init: bool = True, gradient\_checkpointing: bool = False, \*\*kwargs )

Parameters

- **config** ([XLMRobertaConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained) method to load the model weights.

XLM Roberta Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks.

This model inherits from [FlaxPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel).
Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models).

This model is also a Flax Linen [flax.linen.Module](https://flax.readthedocs.io/en/latest/flax.linen.html#module) subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matters related to general usage and behavior.

Finally, this model supports inherent JAX features such as:

- [Just-In-Time (JIT) compilation](https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit)
- [Automatic Differentiation](https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation)
- [Vectorization](https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap)
- [Parallelization](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap)

#### \_\_call\_\_

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_flax_xlm_roberta.py#L829)

( input\_ids, attention\_mask = None, token\_type\_ids = None, position\_ids = None, head\_mask = None, encoder\_hidden\_states = None, encoder\_attention\_mask = None, params: dict = None, dropout\_rng: PRNGKey = None, train: bool = False, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None, past\_key\_values: dict = None ) → [transformers.modeling\_flax\_outputs.FlaxMultipleChoiceModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput) or `tuple(jnp.ndarray)`

The `FlaxXLMRobertaPreTrainedModel` forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

```
>>> from transformers import AutoTokenizer, FlaxXLMRobertaForMultipleChoice

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
>>> model = FlaxXLMRobertaForMultipleChoice.from_pretrained("xlm-roberta-base")

>>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
>>> choice0 = "It is eaten with a fork and a knife."
>>> choice1 = "It is eaten while held in the hand."

>>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="jax", padding=True)
>>> outputs = model(**{k: v[None, :] for k, v in encoding.items()})  # add a batch dimension of 1

>>> logits = outputs.logits
```

## FlaxXLMRobertaForTokenClassification

### class transformers.FlaxXLMRobertaForTokenClassification

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_flax_xlm_roberta.py#L1306)

( config: XLMRobertaConfig, input\_shape: typing.Tuple = (1, 1), seed: int = 0, dtype: dtype = <class 'jax.numpy.float32'>, \_do\_init: bool = True, gradient\_checkpointing: bool = False, \*\*kwargs )

Parameters

- **config** ([XLMRobertaConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained) method to load the model weights.
XLM Roberta Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks.

This model inherits from [FlaxPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models).

This model is also a Flax Linen [flax.linen.Module](https://flax.readthedocs.io/en/latest/flax.linen.html#module) subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matters related to general usage and behavior.

Finally, this model supports inherent JAX features such as:

- [Just-In-Time (JIT) compilation](https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit)
- [Automatic Differentiation](https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation)
- [Vectorization](https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap)
- [Parallelization](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap)

#### \_\_call\_\_

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_flax_xlm_roberta.py#L829)

( input\_ids, attention\_mask = None, token\_type\_ids = None, position\_ids = None, head\_mask = None, encoder\_hidden\_states = None, encoder\_attention\_mask = None, params: dict = None, dropout\_rng: PRNGKey = None, train: bool = False, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None, past\_key\_values: dict = None ) → [transformers.modeling\_flax\_outputs.FlaxTokenClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxTokenClassifierOutput) or `tuple(jnp.ndarray)`

The `FlaxXLMRobertaPreTrainedModel` forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

```
>>> from transformers import AutoTokenizer, FlaxXLMRobertaForTokenClassification

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
>>> model = FlaxXLMRobertaForTokenClassification.from_pretrained("xlm-roberta-base")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
>>> outputs = model(**inputs)

>>> logits = outputs.logits
```

## FlaxXLMRobertaForQuestionAnswering

### class transformers.FlaxXLMRobertaForQuestionAnswering

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_flax_xlm_roberta.py#L1383)

( config: XLMRobertaConfig, input\_shape: typing.Tuple = (1, 1), seed: int = 0, dtype: dtype = <class 'jax.numpy.float32'>, \_do\_init: bool = True, gradient\_checkpointing: bool = False, \*\*kwargs )

Parameters

- **config** ([XLMRobertaConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration.
Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained) method to load the model weights.

XLM Roberta Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute `span start logits` and `span end logits`).

This model inherits from [FlaxPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models).

This model is also a Flax Linen [flax.linen.Module](https://flax.readthedocs.io/en/latest/flax.linen.html#module) subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matters related to general usage and behavior.

Finally, this model supports inherent JAX features such as:

- [Just-In-Time (JIT) compilation](https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit)
- [Automatic Differentiation](https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation)
- [Vectorization](https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap)
- [Parallelization](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap)

#### \_\_call\_\_

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_flax_xlm_roberta.py#L829)

( input\_ids, attention\_mask = None, token\_type\_ids = None, position\_ids = None, head\_mask = None, encoder\_hidden\_states = None, encoder\_attention\_mask = None, params: dict = None, dropout\_rng: PRNGKey = None, train: bool = False, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None, past\_key\_values: dict = None ) → [transformers.modeling\_flax\_outputs.FlaxQuestionAnsweringModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxQuestionAnsweringModelOutput) or `tuple(jnp.ndarray)`

The `FlaxXLMRobertaPreTrainedModel` forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

```
>>> from transformers import AutoTokenizer, FlaxXLMRobertaForQuestionAnswering

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
>>> model = FlaxXLMRobertaForQuestionAnswering.from_pretrained("xlm-roberta-base")

>>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"

>>> inputs = tokenizer(question, text, return_tensors="jax")
>>> outputs = model(**inputs)

>>> start_scores = outputs.start_logits
>>> end_scores = outputs.end_logits
```
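As a minimal follow-up sketch (note that the plain `xlm-roberta-base` checkpoint is not fine-tuned for question answering, so the decoded span is only illustrative), the most likely answer span can be recovered from these scores in the same way as in the TensorFlow example above:

```
>>> import jax.numpy as jnp

>>> # pick the most probable start and end token positions
>>> answer_start_index = int(jnp.argmax(start_scores, axis=-1)[0])
>>> answer_end_index = int(jnp.argmax(end_scores, axis=-1)[0])

>>> # slice the corresponding input ids and decode them back into text
>>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
>>> tokenizer.decode(predict_answer_tokens)
```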
tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tokenizer_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tokenizer_summary&quot;},{&quot;title&quot;:&quot;Attention mechanisms&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;attention&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/attention&quot;},{&quot;title&quot;:&quot;Padding and truncation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pad_truncation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pad_truncation&quot;},{&quot;title&quot;:&quot;BERTology&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;bertology&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/bertology&quot;},{&quot;title&quot;:&quot;Perplexity of fixed-length models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perplexity&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perplexity&quot;},{&quot;title&quot;:&quot;Pipelines for webserver inference&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pipeline_webserver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pipeline_webserver&quot;},{&quot;title&quot;:&quot;Model training anatomy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_memory_anatomy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_memory_anatomy&quot;}]},{&quot;title&quot;:&quot;API&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Main Classes&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Agents and Tools&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/agent&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/agent&quot;},{&quot;title&quot;:&quot;Auto Classes&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/auto&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/auto&quot;},{&quot;title&quot;:&quot;Callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/callback&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/callback&quot;},{&quot;title&quot;:&quot;Configuration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/configuration&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/configuration&quot;},{&quot;title&quot;:&quot;Data Collator&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/data_collator&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/data_collator&quot;},{&quot;title&quot;:&quot;Keras callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/keras_callbacks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/keras_callbacks&quot;},{&quot;title&quot;:&quot;Logging&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/logging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/logging&quot;},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/model&quot;},{&quot;title&quot;:&quot;Text 
Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/text_generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/text_generation&quot;},{&quot;title&quot;:&quot;ONNX&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/onnx&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/onnx&quot;},{&quot;title&quot;:&quot;Optimization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/optimizer_schedules&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/optimizer_schedules&quot;},{&quot;title&quot;:&quot;Model outputs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/output&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/output&quot;},{&quot;title&quot;:&quot;Pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/pipelines&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/pipelines&quot;},{&quot;title&quot;:&quot;Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/processors&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/processors&quot;},{&quot;title&quot;:&quot;Quantization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/quantization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/quantization&quot;},{&quot;title&quot;:&quot;Tokenizer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/tokenizer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/tokenizer&quot;},{&quot;title&quot;:&quot;Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/trainer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/trainer&quot;},{&quot;title&quot;:&quot;DeepSpeed Integration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/deepspeed&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/deepspeed&quot;},{&quot;title&quot;:&quot;Feature Extractor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/feature_extractor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/feature_extractor&quot;},{&quot;title&quot;:&quot;Image Processor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/image_processor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/image_processor&quot;}]},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Text 
models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALBERT&quot;,&quot;id&quot;:&quot;model_doc/albert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/albert&quot;},{&quot;title&quot;:&quot;BART&quot;,&quot;id&quot;:&quot;model_doc/bart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bart&quot;},{&quot;title&quot;:&quot;BARThez&quot;,&quot;id&quot;:&quot;model_doc/barthez&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/barthez&quot;},{&quot;title&quot;:&quot;BARTpho&quot;,&quot;id&quot;:&quot;model_doc/bartpho&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bartpho&quot;},{&quot;title&quot;:&quot;BERT&quot;,&quot;id&quot;:&quot;model_doc/bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert&quot;},{&quot;title&quot;:&quot;BertGeneration&quot;,&quot;id&quot;:&quot;model_doc/bert-generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-generation&quot;},{&quot;title&quot;:&quot;BertJapanese&quot;,&quot;id&quot;:&quot;model_doc/bert-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-japanese&quot;},{&quot;title&quot;:&quot;Bertweet&quot;,&quot;id&quot;:&quot;model_doc/bertweet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bertweet&quot;},{&quot;title&quot;:&quot;BigBird&quot;,&quot;id&quot;:&quot;model_doc/big_bird&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/big_bird&quot;},{&quot;title&quot;:&quot;BigBirdPegasus&quot;,&quot;id&quot;:&quot;model_doc/bigbird_pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bigbird_pegasus&quot;},{&quot;title&quot;:&quot;BioGpt&quot;,&quot;id&quot;:&quot;model_doc/biogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/biogpt&quot;},{&quot;title&quot;:&quot;Blenderbot&quot;,&quot;id&quot;:&quot;model_doc/blenderbot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot&quot;},{&quot;title&quot;:&quot;Blenderbot 
Small&quot;,&quot;id&quot;:&quot;model_doc/blenderbot-small&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot-small&quot;},{&quot;title&quot;:&quot;BLOOM&quot;,&quot;id&quot;:&quot;model_doc/bloom&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bloom&quot;},{&quot;title&quot;:&quot;BORT&quot;,&quot;id&quot;:&quot;model_doc/bort&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bort&quot;},{&quot;title&quot;:&quot;ByT5&quot;,&quot;id&quot;:&quot;model_doc/byt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/byt5&quot;},{&quot;title&quot;:&quot;CamemBERT&quot;,&quot;id&quot;:&quot;model_doc/camembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/camembert&quot;},{&quot;title&quot;:&quot;CANINE&quot;,&quot;id&quot;:&quot;model_doc/canine&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/canine&quot;},{&quot;title&quot;:&quot;CodeGen&quot;,&quot;id&quot;:&quot;model_doc/codegen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/codegen&quot;},{&quot;title&quot;:&quot;CodeLlama&quot;,&quot;id&quot;:&quot;model_doc/code_llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/code_llama&quot;},{&quot;title&quot;:&quot;ConvBERT&quot;,&quot;id&quot;:&quot;model_doc/convbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convbert&quot;},{&quot;title&quot;:&quot;CPM&quot;,&quot;id&quot;:&quot;model_doc/cpm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpm&quot;},{&quot;title&quot;:&quot;CPMANT&quot;,&quot;id&quot;:&quot;model_doc/cpmant&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpmant&quot;},{&quot;title&quot;:&quot;CTRL&quot;,&quot;id&quot;:&quot;model_doc/ctrl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ctrl&quot;},{&quot;title&quot;:&quot;DeBERTa&quot;,&quot;id&quot;:&quot;model_doc/deberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta&quot;},{&quot;title&quot;:&quot;DeBERTa-v2&quot;,&quot;id&quot;:&quot;model_doc/deberta-v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta-v2&quot;},{&quot;title&quot;:&quot;DialoGPT&quot;,&quot;id&quot;:&quot;model_doc/dialogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dialogpt&quot;},{&quot;title&quot;:&quot;DistilBERT&quot;,&quot;id&quot;:&quot;model_doc/distilbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/distilbert&quot;},{&quot;title&quot;:&quot;DPR&quot;,&quot;id&quot;:&quot;model_doc/dpr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpr&quot;},{&quot;title&quot;:&quot;ELECTRA&quot;,&quot;id&quot;:&quot;model_doc/electra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/electra&quot;},{&quot;title&quot;:&quot;Encoder Decoder 
Models&quot;,&quot;id&quot;:&quot;model_doc/encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encoder-decoder&quot;},{&quot;title&quot;:&quot;ERNIE&quot;,&quot;id&quot;:&quot;model_doc/ernie&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie&quot;},{&quot;title&quot;:&quot;ErnieM&quot;,&quot;id&quot;:&quot;model_doc/ernie_m&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie_m&quot;},{&quot;title&quot;:&quot;ESM&quot;,&quot;id&quot;:&quot;model_doc/esm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/esm&quot;},{&quot;title&quot;:&quot;Falcon&quot;,&quot;id&quot;:&quot;model_doc/falcon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/falcon&quot;},{&quot;title&quot;:&quot;FLAN-T5&quot;,&quot;id&quot;:&quot;model_doc/flan-t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-t5&quot;},{&quot;title&quot;:&quot;FLAN-UL2&quot;,&quot;id&quot;:&quot;model_doc/flan-ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-ul2&quot;},{&quot;title&quot;:&quot;FlauBERT&quot;,&quot;id&quot;:&quot;model_doc/flaubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flaubert&quot;},{&quot;title&quot;:&quot;FNet&quot;,&quot;id&quot;:&quot;model_doc/fnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fnet&quot;},{&quot;title&quot;:&quot;FSMT&quot;,&quot;id&quot;:&quot;model_doc/fsmt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fsmt&quot;},{&quot;title&quot;:&quot;Funnel Transformer&quot;,&quot;id&quot;:&quot;model_doc/funnel&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/funnel&quot;},{&quot;title&quot;:&quot;GPT&quot;,&quot;id&quot;:&quot;model_doc/openai-gpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/openai-gpt&quot;},{&quot;title&quot;:&quot;GPT Neo&quot;,&quot;id&quot;:&quot;model_doc/gpt_neo&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neo&quot;},{&quot;title&quot;:&quot;GPT NeoX&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox&quot;},{&quot;title&quot;:&quot;GPT NeoX Japanese&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox_japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese&quot;},{&quot;title&quot;:&quot;GPT-J&quot;,&quot;id&quot;:&quot;model_doc/gptj&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptj&quot;},{&quot;title&quot;:&quot;GPT2&quot;,&quot;id&quot;:&quot;model_doc/gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt2&quot;},{&quot;title&quot;:&quot;GPTBigCode&quot;,&quot;id&quot;:&quot;model_doc/gpt_bigcode&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode&quot;},{&quot;title&quot;:&quot;GPTSAN 
Japanese&quot;,&quot;id&quot;:&quot;model_doc/gptsan-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese&quot;},{&quot;title&quot;:&quot;GPTSw3&quot;,&quot;id&quot;:&quot;model_doc/gpt-sw3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt-sw3&quot;},{&quot;title&quot;:&quot;HerBERT&quot;,&quot;id&quot;:&quot;model_doc/herbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/herbert&quot;},{&quot;title&quot;:&quot;I-BERT&quot;,&quot;id&quot;:&quot;model_doc/ibert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ibert&quot;},{&quot;title&quot;:&quot;Jukebox&quot;,&quot;id&quot;:&quot;model_doc/jukebox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/jukebox&quot;},{&quot;title&quot;:&quot;LED&quot;,&quot;id&quot;:&quot;model_doc/led&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/led&quot;},{&quot;title&quot;:&quot;LLaMA&quot;,&quot;id&quot;:&quot;model_doc/llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama&quot;},{&quot;title&quot;:&quot;Llama2&quot;,&quot;id&quot;:&quot;model_doc/llama2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama2&quot;},{&quot;title&quot;:&quot;Longformer&quot;,&quot;id&quot;:&quot;model_doc/longformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longformer&quot;},{&quot;title&quot;:&quot;LongT5&quot;,&quot;id&quot;:&quot;model_doc/longt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longt5&quot;},{&quot;title&quot;:&quot;LUKE&quot;,&quot;id&quot;:&quot;model_doc/luke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/luke&quot;},{&quot;title&quot;:&quot;M2M100&quot;,&quot;id&quot;:&quot;model_doc/m2m_100&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/m2m_100&quot;},{&quot;title&quot;:&quot;MarianMT&quot;,&quot;id&quot;:&quot;model_doc/marian&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/marian&quot;},{&quot;title&quot;:&quot;MarkupLM&quot;,&quot;id&quot;:&quot;model_doc/markuplm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/markuplm&quot;},{&quot;title&quot;:&quot;MBart and 
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot;PLBart&quot;,&quot;id&quot;
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/
model_doc/wavlm&quot;},{&quot;title&quot;:&quot;Whisper&quot;,&quot;id&quot;:&quot;model_doc/whisper&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/whisper&quot;},{&quot;title&quot;:&quot;XLS-R&quot;,&quot;id&quot;:&quot;model_doc/xls_r&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xls_r&quot;},{&quot;title&quot;:&quot;XLSR-Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/xlsr_wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2&quot;}]},{&quot;title&quot;:&quot;Multimodal models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALIGN&quot;,&quot;id&quot;:&quot;model_doc/align&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/align&quot;},{&quot;title&quot;:&quot;AltCLIP&quot;,&quot;id&quot;:&quot;model_doc/altclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/altclip&quot;},{&quot;title&quot;:&quot;BLIP&quot;,&quot;id&quot;:&quot;model_doc/blip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip&quot;},{&quot;title&quot;:&quot;BLIP-2&quot;,&quot;id&quot;:&quot;model_doc/blip-2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip-2&quot;},{&quot;title&quot;:&quot;BridgeTower&quot;,&quot;id&quot;:&quot;model_doc/bridgetower&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bridgetower&quot;},{&quot;title&quot;:&quot;BROS&quot;,&quot;id&quot;:&quot;model_doc/bros&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bros&quot;},{&quot;title&quot;:&quot;Chinese-CLIP&quot;,&quot;id&quot;:&quot;model_doc/chinese_clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/chinese_clip&quot;},{&quot;title&quot;:&quot;CLIP&quot;,&quot;id&quot;:&quot;model_doc/clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clip&quot;},{&quot;title&quot;:&quot;CLIPSeg&quot;,&quot;id&quot;:&quot;model_doc/clipseg&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clipseg&quot;},{&quot;title&quot;:&quot;Data2Vec&quot;,&quot;id&quot;:&quot;model_doc/data2vec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/data2vec&quot;},{&quot;title&quot;:&quot;DePlot&quot;,&quot;id&quot;:&quot;model_doc/deplot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deplot&quot;},{&quot;title&quot;:&quot;Donut&quot;,&quot;id&quot;:&quot;model_doc/donut&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/donut&quot;},{&quot;title&quot;:&quot;FLAVA&quot;,&quot;id&quot;:&quot;model_doc/flava&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flava&quot;},{&quot;title&quot;:&quot;GIT&quot;,&quot;id&quot;:&quot;model_doc/git&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/git&quot;},{&quot;title&quot;:&quot;GroupViT&quot;,&quot;id&quot;:&quot;model_doc/groupvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/groupvit&quot;},{&quot;title&quot;:&quot;IDEFICS&quot;,&quot;id&quot;:&quot;model_doc/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/idefics&quot;},{&quot;title&quot;:&quot;InstructBLIP&quot;,&quot;id&quot;:&quot;model_doc/instructblip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/instructblip&quot;},{&quot;title&quot;:&quot;LayoutLM&quot;,&quot;id&quot;:&quot;model_doc/layoutlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlm&quot;},{&quot;title&quot;:&quot;LayoutLMV2&quot;,&quot;i
d&quot;:&quot;model_doc/layoutlmv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv2&quot;},{&quot;title&quot;:&quot;LayoutLMV3&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv3&quot;},{&quot;title&quot;:&quot;LayoutXLM&quot;,&quot;id&quot;:&quot;model_doc/layoutxlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutxlm&quot;},{&quot;title&quot;:&quot;LiLT&quot;,&quot;id&quot;:&quot;model_doc/lilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lilt&quot;},{&quot;title&quot;:&quot;LXMERT&quot;,&quot;id&quot;:&quot;model_doc/lxmert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lxmert&quot;},{&quot;title&quot;:&quot;MatCha&quot;,&quot;id&quot;:&quot;model_doc/matcha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/matcha&quot;},{&quot;title&quot;:&quot;MGP-STR&quot;,&quot;id&quot;:&quot;model_doc/mgp-str&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mgp-str&quot;},{&quot;title&quot;:&quot;Nougat&quot;,&quot;id&quot;:&quot;model_doc/nougat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nougat&quot;},{&quot;title&quot;:&quot;OneFormer&quot;,&quot;id&quot;:&quot;model_doc/oneformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/oneformer&quot;},{&quot;title&quot;:&quot;OWL-ViT&quot;,&quot;id&quot;:&quot;model_doc/owlvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/owlvit&quot;},{&quot;title&quot;:&quot;Perceiver&quot;,&quot;id&quot;:&quot;model_doc/perceiver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/perceiver&quot;},{&quot;title&quot;:&quot;Pix2Struct&quot;,&quot;id&quot;:&quot;model_doc/pix2struct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pix2struct&quot;},{&quot;title&quot;:&quot;Segment Anything&quot;,&quot;id&quot;:&quot;model_doc/sam&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sam&quot;},{&quot;title&quot;:&quot;Speech Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/speech-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder&quot;},{&quot;title&quot;:&quot;TAPAS&quot;,&quot;id&quot;:&quot;model_doc/tapas&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapas&quot;},{&quot;title&quot;:&quot;TrOCR&quot;,&quot;id&quot;:&quot;model_doc/trocr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trocr&quot;},{&quot;title&quot;:&quot;TVLT&quot;,&quot;id&quot;:&quot;model_doc/tvlt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tvlt&quot;},{&quot;title&quot;:&quot;ViLT&quot;,&quot;id&quot;:&quot;model_doc/vilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vilt&quot;},{&quot;title&quot;:&quot;Vision Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/vision-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder&quot;},{&quot;title&quot;:&quot;Vision Text Dual 
Encoder&quot;,&quot;id&quot;:&quot;model_doc/vision-text-dual-encoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder&quot;},{&quot;title&quot;:&quot;VisualBERT&quot;,&quot;id&quot;:&quot;model_doc/visual_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/visual_bert&quot;},{&quot;title&quot;:&quot;X-CLIP&quot;,&quot;id&quot;:&quot;model_doc/xclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xclip&quot;}]},{&quot;title&quot;:&quot;Reinforcement learning models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Decision Transformer&quot;,&quot;id&quot;:&quot;model_doc/decision_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/decision_transformer&quot;},{&quot;title&quot;:&quot;Trajectory Transformer&quot;,&quot;id&quot;:&quot;model_doc/trajectory_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer&quot;}]},{&quot;title&quot;:&quot;Time series models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Autoformer&quot;,&quot;id&quot;:&quot;model_doc/autoformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/autoformer&quot;},{&quot;title&quot;:&quot;Informer&quot;,&quot;id&quot;:&quot;model_doc/informer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/informer&quot;},{&quot;title&quot;:&quot;Time Series Transformer&quot;,&quot;id&quot;:&quot;model_doc/time_series_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/time_series_transformer&quot;}]},{&quot;title&quot;:&quot;Graph models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Graphormer&quot;,&quot;id&quot;:&quot;model_doc/graphormer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/graphormer&quot;}]}]},{&quot;title&quot;:&quot;Internal Helpers&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Custom Layers and Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/modeling_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/modeling_utils&quot;},{&quot;title&quot;:&quot;Utilities for pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/pipelines_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/pipelines_utils&quot;},{&quot;title&quot;:&quot;Utilities for Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/tokenization_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/tokenization_utils&quot;},{&quot;title&quot;:&quot;Utilities for Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/trainer_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/trainer_utils&quot;},{&quot;title&quot;:&quot;Utilities for Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/generation_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/generation_utils&quot;},{&quot;title&quot;:&quot;Utilities for Image Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/image_processing_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/image_processing_utils&quot;},{&quot;title&quot;:&quot;Utilities for Audio 
processing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/audio_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/audio_utils&quot;},{&quot;title&quot;:&quot;General Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/file_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/file_utils&quot;},{&quot;title&quot;:&quot;Utilities for Time Series&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/time_series_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/time_series_utils&quot;}]}]}],&quot;chapterId&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;docType&quot;:&quot;docs&quot;,&quot;isLoggedIn&quot;:false,&quot;lang&quot;:&quot;en&quot;,&quot;langs&quot;:[&quot;de&quot;,&quot;en&quot;,&quot;es&quot;,&quot;fr&quot;,&quot;it&quot;,&quot;ko&quot;,&quot;pt&quot;,&quot;zh&quot;],&quot;library&quot;:&quot;transformers&quot;,&quot;theme&quot;:&quot;light&quot;,&quot;version&quot;:&quot;v4.34.0&quot;,&quot;versions&quot;:[{&quot;version&quot;:&quot;main&quot;},{&quot;version&quot;:&quot;v4.34.0&quot;},{&quot;version&quot;:&quot;v4.33.3&quot;},{&quot;version&quot;:&quot;v4.33.2&quot;},{&quot;version&quot;:&quot;v4.33.0&quot;},{&quot;version&quot;:&quot;v4.32.1&quot;},{&quot;version&quot;:&quot;v4.32.0&quot;},{&quot;version&quot;:&quot;v4.31.0&quot;},{&quot;version&quot;:&quot;v4.30.0&quot;},{&quot;version&quot;:&quot;v4.29.1&quot;},{&quot;version&quot;:&quot;v4.29.0&quot;},{&quot;version&quot;:&quot;v4.28.1&quot;},{&quot;version&quot;:&quot;v4.28.0&quot;},{&quot;version&quot;:&quot;v4.27.2&quot;},{&quot;version&quot;:&quot;v4.27.1&quot;},{&quot;version&quot;:&quot;v4.27.0&quot;},{&quot;version&quot;:&quot;v4.26.1&quot;},{&quot;version&quot;:&quot;v4.26.0&quot;},{&quot;version&quot;:&quot;v4.25.1&quot;},{&quot;version&quot;:&quot;v4.24.0&quot;},{&quot;version&quot;:&quot;v4.23.1&quot;},{&quot;version&quot;:&quot;v4.23.0&quot;},{&quot;version&quot;:&quot;v4.22.2&quot;},{&quot;version&quot;:&quot;v4.22.1&quot;},{&quot;version&quot;:&quot;v4.22.0&quot;},{&quot;version&quot;:&quot;v4.21.3&quot;},{&quot;version&quot;:&quot;v4.21.2&quot;},{&quot;version&quot;:&quot;v4.21.1&quot;},{&quot;version&quot;:&quot;v4.21.0&quot;},{&quot;version&quot;:&quot;v4.20.1&quot;},{&quot;version&quot;:&quot;v4.20.0&quot;},{&quot;version&quot;:&quot;v4.19.4&quot;},{&quot;version&quot;:&quot;v4.19.3&quot;},{&quot;version&quot;:&quot;v4.19.2&quot;},{&quot;version&quot;:&quot;v4.19.0&quot;},{&quot;version&quot;:&quot;v4.18.0&quot;},{&quot;version&quot;:&quot;v4.17.0&quot;},{&quot;version&quot;:&quot;v4.16.2&quot;},{&quot;version&quot;:&quot;v4.16.1&quot;},{&quot;version&quot;:&quot;v4.16.0&quot;},{&quot;version&quot;:&quot;v4.15.0&quot;},{&quot;version&quot;:&quot;v4.14.1&quot;},{&quot;version&quot;:&quot;v4.13.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.5&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.4&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.10.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quo
-->RetriBERT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/roberta"><!-- HTML_TAG_START -->RoBERTa<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm"><!-- HTML_TAG_START -->RoBERTa-PreLayerNorm<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/roc_bert"><!-- HTML_TAG_START -->RoCBert<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/roformer"><!-- HTML_TAG_START -->RoFormer<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/rwkv"><!-- HTML_TAG_START -->RWKV<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/splinter"><!-- HTML_TAG_START -->Splinter<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/squeezebert"><!-- HTML_TAG_START -->SqueezeBERT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/switch_transformers"><!-- HTML_TAG_START -->SwitchTransformers<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/t5"><!-- HTML_TAG_START -->T5<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/t5v1.1"><!-- HTML_TAG_START -->T5v1.1<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/tapex"><!-- HTML_TAG_START -->TAPEX<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/transfo-xl"><!-- HTML_TAG_START -->Transformer XL<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" 
href="/docs/transformers/v4.34.0/en/model_doc/ul2"><!-- HTML_TAG_START -->UL2<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/umt5"><!-- HTML_TAG_START -->UMT5<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xmod"><!-- HTML_TAG_START -->X-MOD<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xglm"><!-- HTML_TAG_START -->XGLM<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm"><!-- HTML_TAG_START -->XLM<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet"><!-- HTML_TAG_START -->XLM-ProphetNet<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="rounded-xl bg-gradient-to-br from-black to-gray-900 py-1 pr-2 pl-2 text-white first:mt-1 last:mb-4 dark:from-gray-800 dark:to-gray-900 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta"><!-- HTML_TAG_START -->XLM-RoBERTa<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl"><!-- HTML_TAG_START -->XLM-RoBERTa-XL<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm-v"><!-- HTML_TAG_START -->XLM-V<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlnet"><!-- HTML_TAG_START -->XLNet<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/yoso"><!-- HTML_TAG_START -->YOSO<!-- HTML_TAG_END --> </a> </div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Vision models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span 
class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Audio models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Multimodal models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Reinforcement learning models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Time series models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Graph models<!-- HTML_TAG_END --></span> </span></span> </div></div> </div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Internal Helpers<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/modeling_utils"><!-- HTML_TAG_START -->Custom Layers and Utilities<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/pipelines_utils"><!-- HTML_TAG_START -->Utilities for pipelines<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/tokenization_utils"><!-- HTML_TAG_START -->Utilities for Tokenizers<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/trainer_utils"><!-- HTML_TAG_START -->Utilities for Trainer<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 
first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/generation_utils"><!-- HTML_TAG_START -->Utilities for Generation<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/image_processing_utils"><!-- HTML_TAG_START -->Utilities for Image Processors<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/audio_utils"><!-- HTML_TAG_START -->Utilities for Audio processing<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/file_utils"><!-- HTML_TAG_START -->General Utilities<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/time_series_utils"><!-- HTML_TAG_START -->Utilities for Time Series<!-- HTML_TAG_END --> </a> </div> </div></nav></div></div></div> <div class="z-1 min-w-0 flex-1"> <div class="px-6 pt-6 md:px-12 md:pt-16 md:pb-16"><div class="max-w-4xl mx-auto mb-10"><div class="relative overflow-hidden rounded-xl bg-gradient-to-br from-orange-300/10 py-5 px-4 ring-1 ring-orange-100/70 md:px-6 md:py-8"><img alt="Hugging Face's logo" class="absolute -right-6 -bottom-6 w-28 -rotate-45 md:hidden" src="/front/assets/huggingface_logo-noborder.svg"> <div class="mb-2 text-2xl font-bold dark:text-gray-200 md:mb-0">Join the Hugging Face community</div> <p class="mb-4 text-lg text-gray-400 dark:text-gray-300 md:mb-8">and get access to the augmented documentation experience </p> <div class="mb-8 hidden space-y-4 md:block xl:flex xl:space-y-0 xl:space-x-6"><div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-indigo-100 to-indigo-100/20 dark:to-indigo-100"><svg class="text-indigo-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Collaborate on models, datasets and Spaces </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-orange-100 to-orange-100/20 dark:to-orange-50"><svg xmlns="http://www.w3.org/2000/svg" 
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" class="text-xl text-yellow-400" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M11 15H6l7-14v8h5l-7 14v-8z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Faster examples with accelerated inference </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-gray-500/10 to-gray-500/5"><svg class="text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M14.9804 3C14.9217 3.0002 14.8631 3.00555 14.8054 3.016C11.622 3.58252 8.76073 5.30669 6.77248 7.85653C4.78422 10.4064 3.80955 13.6016 4.03612 16.8271C4.26268 20.0525 5.67447 23.0801 7.99967 25.327C10.3249 27.5738 13.3991 28.8811 16.6304 28.997C16.7944 29.003 16.9584 28.997 17.1204 28.997C19.2193 28.9984 21.2877 28.4943 23.1507 27.5274C25.0137 26.5605 26.6164 25.1592 27.8234 23.442C27.9212 23.294 27.9783 23.1229 27.9889 22.9458C27.9995 22.7687 27.9633 22.592 27.884 22.4333C27.8046 22.2747 27.6848 22.1397 27.5367 22.0421C27.3887 21.9444 27.2175 21.8875 27.0404 21.877C25.0426 21.7017 23.112 21.0693 21.3976 20.0288C19.6832 18.9884 18.231 17.5676 17.1533 15.8764C16.0756 14.1852 15.4011 12.2688 15.1822 10.2754C14.9632 8.28193 15.2055 6.26484 15.8904 4.38C15.9486 4.22913 15.97 4.06652 15.9527 3.90572C15.9354 3.74492 15.8799 3.59059 15.7909 3.45557C15.7019 3.32055 15.5819 3.20877 15.4409 3.12952C15.2999 3.05028 15.142 3.00587 14.9804 3Z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Switch between documentation themes </div></div></div> <div class="flex items-center space-x-2.5"><a href="/join"><button class="rounded-lg bg-white bg-gradient-to-br from-gray-100/20 to-gray-200/60 py-1.5 px-5 font-semibold text-gray-700 shadow-sm ring-1 ring-gray-300/60 hover:to-gray-100/70 hover:ring-gray-300/30 active:shadow-inner">Sign Up</button></a> <p class="text-gray-500 dark:text-gray-300">to get started</p></div></div></div> <div class="prose-doc prose relative mx-auto max-w-4xl break-words"> <p></p> <h1 class="relative group"><a id="xlmroberta" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#xlmroberta"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-on8il6">XLM-RoBERTa</span></h1> <div class="flex flex-wrap space-x-1" 
data-svelte-h="svelte-4acett"><a href="https://huggingface.co/models?filter=xlm-roberta"><img alt="Models" src="https://img.shields.io/badge/All_model_pages-xlm--roberta-blueviolet"></a> <a href="https://huggingface.co/spaces/docs-demos/xlm-roberta-base"><img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"></a></div> <h2 class="relative group"><a id="overview" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#overview"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1jsw1pg">Overview</span></h2> <p data-svelte-h="svelte-1lfpqti">The XLM-RoBERTa model was proposed in <a href="https://arxiv.org/abs/1911.02116" rel="nofollow">Unsupervised Cross-lingual Representation Learning at Scale</a> by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook’s RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data.</p> <p data-svelte-h="svelte-vfdo9a">The abstract from the paper is the following:</p> <p data-svelte-h="svelte-11xokj5"><em>This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross-lingual transfer tasks. We train a Transformer-based masked language model on one hundred languages, using more than two terabytes of filtered CommonCrawl data. Our model, dubbed XLM-R, significantly outperforms multilingual BERT (mBERT) on a variety of cross-lingual benchmarks, including +13.8% average accuracy on XNLI, +12.3% average F1 score on MLQA, and +2.1% average F1 score on NER. XLM-R performs particularly well on low-resource languages, improving 11.8% in XNLI accuracy for Swahili and 9.2% for Urdu over the previous XLM model. We also present a detailed empirical evaluation of the key factors that are required to achieve these gains, including the trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource languages at scale. Finally, we show, for the first time, the possibility of multilingual modeling without sacrificing per-language performance; XLM-Ris very competitive with strong monolingual models on the GLUE and XNLI benchmarks. We will make XLM-R code, data, and models publicly available.</em></p> <p data-svelte-h="svelte-axv494">Tips:</p> <ul data-svelte-h="svelte-kjnvye"><li>XLM-RoBERTa is a multilingual model trained on 100 different languages. 
Unlike some XLM multilingual models, it does not require <code>lang</code> tensors to understand which language is used, and should be able to determine the correct language from the input ids.</li> <li>Uses RoBERTa tricks on the XLM approach, but does not use the translation language modeling objective. It only uses masked language modeling on sentences coming from one language.</li> <li>This implementation is the same as RoBERTa. Refer to the <a href="roberta">documentation of RoBERTa</a> for usage examples as well as the information relative to the inputs and outputs.</li></ul> <p data-svelte-h="svelte-1yn1pvv">This model was contributed by <a href="https://huggingface.co/stefan-it" rel="nofollow">stefan-it</a>. The original code can be found <a href="https://github.com/pytorch/fairseq/tree/master/examples/xlmr" rel="nofollow">here</a>.</p> <h2 class="relative group"><a id="resources" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#resources"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-w4zzv6">Resources</span></h2> <p data-svelte-h="svelte-1ohr9zi">A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with XLM-RoBERTa. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! 
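Because XLM-RoBERTa is pretrained purely with masked language modeling and infers the language from the input ids themselves, the quickest way to try a checkpoint is the fill-mask pipeline. The snippet below is a minimal sketch using the `xlm-roberta-base` checkpoint; the French example sentence is only illustrative.

```python
from transformers import pipeline

# No `lang` tensors are needed: the model infers the language from the input ids.
unmasker = pipeline("fill-mask", model="xlm-roberta-base")

# XLM-RoBERTa uses <mask> as its mask token; the same call works for any of
# the 100 pretraining languages.
predictions = unmasker("Bonjour, je suis un modèle <mask>.")

# Each prediction is a dict with the filled-in token and its probability.
for prediction in predictions:
    print(prediction["token_str"], round(prediction["score"], 3))
```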
## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with XLM-RoBERTa. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

**Text Classification**

- A blog post on how to [finetune XLM RoBERTa for multiclass classification with Habana Gaudi on AWS](https://www.philschmid.de/habana-distributed-training)
- [XLMRobertaForSequenceClassification](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaForSequenceClassification) is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb).
- [TFXLMRobertaForSequenceClassification](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.TFXLMRobertaForSequenceClassification) is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb).
- [FlaxXLMRobertaForSequenceClassification](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.FlaxXLMRobertaForSequenceClassification) is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification_flax.ipynb).
- [Text classification](https://huggingface.co/docs/transformers/tasks/sequence_classification) chapter of the 🤗 Hugging Face Task Guides.
- [Text classification task guide](../tasks/sequence_classification)

**Token Classification**

- [XLMRobertaForTokenClassification](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaForTokenClassification) is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification.ipynb).
- [TFXLMRobertaForTokenClassification](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.TFXLMRobertaForTokenClassification) is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb).
- [FlaxXLMRobertaForTokenClassification](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.FlaxXLMRobertaForTokenClassification) is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/token-classification).
- [Token classification](https://huggingface.co/course/chapter7/2?fw=pt) chapter of the 🤗 Hugging Face Course.
- [Token classification task guide](../tasks/token_classification)

**Text Generation**

- [XLMRobertaForCausalLM](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaForCausalLM) is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb).
- [Causal language modeling](https://huggingface.co/docs/transformers/tasks/language_modeling) chapter of the 🤗 Hugging Face Task Guides.
- [Causal language modeling task guide](../tasks/language_modeling)

**Fill-Mask**

- [XLMRobertaForMaskedLM](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaForMaskedLM) is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#robertabertdistilbert-and-masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb).
- [TFXLMRobertaForMaskedLM](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.TFXLMRobertaForMaskedLM) is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_mlmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb).
- [FlaxXLMRobertaForMaskedLM](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.FlaxXLMRobertaForMaskedLM) is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/masked_language_modeling_flax.ipynb).
- [Masked language modeling](https://huggingface.co/course/chapter7/3?fw=pt) chapter of the 🤗 Hugging Face Course.
- [Masked language modeling](../tasks/masked_language_modeling)

**Question Answering**

- [XLMRobertaForQuestionAnswering](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaForQuestionAnswering) is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb).
- [TFXLMRobertaForQuestionAnswering](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.TFXLMRobertaForQuestionAnswering) is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb).
- [FlaxXLMRobertaForQuestionAnswering](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.FlaxXLMRobertaForQuestionAnswering) is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/question-answering).
- [Question answering](https://huggingface.co/course/chapter7/7?fw=pt) chapter of the 🤗 Hugging Face Course.
- [Question answering task guide](../tasks/question_answering)

**Multiple choice**

- [XLMRobertaForMultipleChoice](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaForMultipleChoice) is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/multiple-choice) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb).
- [TFXLMRobertaForMultipleChoice](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.TFXLMRobertaForMultipleChoice) is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/multiple-choice) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb).
- [Multiple choice task guide](../tasks/multiple_choice)

🚀 Deploy

- A blog post on how to [Deploy Serverless XLM RoBERTa on AWS Lambda](https://www.philschmid.de/multilingual-serverless-xlm-roberta-with-huggingface).

## XLMRobertaConfig
### class transformers.XLMRobertaConfig

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/configuration_xlm_roberta.py#L45)

`( vocab_size = 30522, hidden_size = 768, num_hidden_layers = 12, num_attention_heads = 12, intermediate_size = 3072, hidden_act = 'gelu', hidden_dropout_prob = 0.1, attention_probs_dropout_prob = 0.1, max_position_embeddings = 512, type_vocab_size = 2, initializer_range = 0.02, layer_norm_eps = 1e-12, pad_token_id = 1, bos_token_id = 0, eos_token_id = 2, position_embedding_type = 'absolute', use_cache = True, classifier_dropout = None, **kwargs )`

Parameters:

- **vocab_size** (`int`, _optional_, defaults to 30522) -- Vocabulary size of the XLM-RoBERTa model. Defines the number of different tokens that can be represented by the `input_ids` passed when calling [XLMRobertaModel](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaModel) or [TFXLMRobertaModel](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.TFXLMRobertaModel).
- **hidden_size** (`int`, _optional_, defaults to 768) -- Dimensionality of the encoder layers and the pooler layer.
- **num_hidden_layers** (`int`, _optional_, defaults to 12) -- Number of hidden layers in the Transformer encoder.
- **num_attention_heads** (`int`, _optional_, defaults to 12) -- Number of attention heads for each attention layer in the Transformer encoder.
- **intermediate_size** (`int`, _optional_, defaults to 3072) -- Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder.
- **hidden_act** (`str` or `Callable`, _optional_, defaults to `"gelu"`) -- The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"silu"` and `"gelu_new"` are supported.
- **hidden_dropout_prob** (`float`, _optional_, defaults to 0.1) -- The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_probs_dropout_prob</strong> (<code>float</code>, <em>optional</em>, defaults to 0.1) — The dropout ratio for the attention probabilities.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaConfig.max_position_embeddings" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaConfig.max_position_embeddings"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>max_position_embeddings</strong> (<code>int</code>, <em>optional</em>, defaults to 512) — The maximum sequence length that this model might ever be used with. 
Typically set this to something large just in case (e.g., 512 or 1024 or 2048).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaConfig.type_vocab_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaConfig.type_vocab_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>type_vocab_size</strong> (<code>int</code>, <em>optional</em>, defaults to 2) — The vocabulary size of the <code>token_type_ids</code> passed when calling <a href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaModel">XLMRobertaModel</a> or <a href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.TFXLMRobertaModel">TFXLMRobertaModel</a>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaConfig.initializer_range" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaConfig.initializer_range"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>initializer_range</strong> (<code>float</code>, <em>optional</em>, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaConfig.layer_norm_eps" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaConfig.layer_norm_eps"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" 
height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>layer_norm_eps</strong> (<code>float</code>, <em>optional</em>, defaults to 1e-12) — The epsilon used by the layer normalization layers.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaConfig.position_embedding_type" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaConfig.position_embedding_type"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>position_embedding_type</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"absolute"</code>) — Type of position embedding. Choose one of <code>"absolute"</code>, <code>"relative_key"</code>, <code>"relative_key_query"</code>. For positional embeddings use <code>"absolute"</code>. For more information on <code>"relative_key"</code>, please refer to <a href="https://arxiv.org/abs/1803.02155" rel="nofollow">Self-Attention with Relative Position Representations (Shaw et al.)</a>. 
For more information on <code>"relative_key_query"</code>, please refer to <em>Method 4</em> in <a href="https://arxiv.org/abs/2009.13658" rel="nofollow">Improve Transformer Models with Better Relative Position Embeddings (Huang et al.)</a>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaConfig.is_decoder" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaConfig.is_decoder"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>is_decoder</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether the model is used as a decoder or not. If <code>False</code>, the model is used as an encoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaConfig.use_cache" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaConfig.use_cache"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>use_cache</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether or not the model should return the last key/values attentions (not used by all models). 
Only relevant if <code>config.is_decoder=True</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaConfig.classifier_dropout" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaConfig.classifier_dropout"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>classifier_dropout</strong> (<code>float</code>, <em>optional</em>) — The dropout ratio for the classification head.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-t0iftb">This is the configuration class to store the configuration of a <a href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaModel">XLMRobertaModel</a> or a <a href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.TFXLMRobertaModel">TFXLMRobertaModel</a>. It is used to instantiate a XLM-RoBERTa model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the XLMRoBERTa <a href="https://huggingface.co/xlm-roberta-base" rel="nofollow">xlm-roberta-base</a> architecture.</p> <p data-svelte-h="svelte-10kqkkl">Configuration objects inherit from <a href="/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig">PretrainedConfig</a> and can be used to control the model outputs. 
Read the documentation from <a href="/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig">PretrainedConfig</a> for more information.</p> <div class="relative group rounded-md"><a id="transformers.XLMRobertaConfig.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaConfig.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-kvfsh7">Examples:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> XLMRobertaConfig, XLMRobertaModel <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># Initializing a XLM-RoBERTa xlm-roberta-base style configuration</span> <span class="hljs-meta">&gt;&gt;&gt; </span>configuration = XLMRobertaConfig() <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># Initializing a model (with random weights) from the xlm-roberta-base style configuration</span> <span class="hljs-meta">&gt;&gt;&gt; </span>model = XLMRobertaModel(configuration) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># Accessing the model configuration</span> <span class="hljs-meta">&gt;&gt;&gt; </span>configuration = model.config</pre></div></div></div> <h2 class="relative group"><a id="transformers.XLMRobertaTokenizer" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 
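As a further illustration of how the parameters listed above map onto the configuration, the sketch below overrides a couple of the defaults; the particular values (`num_hidden_layers=6`, `hidden_dropout_prob=0.2`) are arbitrary example choices, not recommended settings.

```python
>>> from transformers import XLMRobertaConfig, XLMRobertaModel

>>> # Arbitrary example values: a smaller, more heavily regularized configuration
>>> configuration = XLMRobertaConfig(
...     num_hidden_layers=6,
...     hidden_dropout_prob=0.2,
... )

>>> # A model built from this configuration reflects the overridden values
>>> model = XLMRobertaModel(configuration)
>>> model.config.num_hidden_layers
6
```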
with-hover:right-full" href="#transformers.XLMRobertaTokenizer"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1jsugzc">XLMRobertaTokenizer</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XLMRobertaTokenizer"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">XLMRobertaTokenizer</span></span></h3> <a id="transformers.XLMRobertaTokenizer" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XLMRobertaTokenizer"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/tokenization_xlm_roberta.py#L63" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" 
data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">vocab_file<span class="opacity-60"></span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">bos_token<span class="opacity-60"> = '&lt;s&gt;'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">eos_token<span class="opacity-60"> = '&lt;/s&gt;'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">sep_token<span class="opacity-60"> = '&lt;/s&gt;'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">cls_token<span class="opacity-60"> = '&lt;s&gt;'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">unk_token<span class="opacity-60"> = '&lt;unk&gt;'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">pad_token<span class="opacity-60"> = '&lt;pad&gt;'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">mask_token<span class="opacity-60"> = '&lt;mask&gt;'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">sp_model_kwargs<span class="opacity-60">: typing.Union[typing.Dict[str, typing.Any], NoneType] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 11 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaTokenizer.vocab_file" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaTokenizer.vocab_file"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 
- **vocab_file** (`str`) — Path to the vocabulary file.
- **bos_token** (`str`, *optional*, defaults to `"<s>"`) — The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token. When building a sequence using special tokens, this is not the token that is used for the beginning of sequence; the token used is the `cls_token`.
- **eos_token** (`str`, *optional*, defaults to `"</s>"`) — The end of sequence token. When building a sequence using special tokens, this is not the token that is used for the end of sequence; the token used is the `sep_token`.
- **sep_token** (`str`, *optional*, defaults to `"</s>"`) — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.
- **cls_token** (`str`, *optional*, defaults to `"<s>"`) — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.
- **unk_token** (`str`, *optional*, defaults to `"<unk>"`) — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
- **pad_token** (`str`, *optional*, defaults to `"<pad>"`) — The token used for padding, for example when batching sequences of different lengths.
- **mask_token** (`str`, *optional*, defaults to `"<mask>"`) — The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.
- **additional_special_tokens** (`List[str]`, *optional*, defaults to `["<s>NOTUSED", "</s>NOTUSED"]`) — Additional special tokens used by the tokenizer.
- **sp_model_kwargs** (`dict`, *optional*) — Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things, to set:
  - `enable_sampling`: Enable subword regularization.
  - `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout.
    - `nbest_size = {0,1}`: No sampling is performed.
    - `nbest_size > 1`: samples from the nbest_size results.
    - `nbest_size < 0`: assuming that nbest_size is infinite and samples from all hypotheses (lattice) using the forward-filtering-and-backward-sampling algorithm.
  - `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for BPE-dropout.
- **sp_model** (`SentencePieceProcessor`) — The *SentencePiece* processor that is used for every conversion (string, tokens and IDs).

Adapted from [RobertaTokenizer](/docs/transformers/v4.34.0/en/model_doc/roberta#transformers.RobertaTokenizer) and [XLNetTokenizer](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetTokenizer). Based on [SentencePiece](https://github.com/google/sentencepiece).

This tokenizer inherits from [PreTrainedTokenizer](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer) which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
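To make the description above concrete, here is a minimal usage sketch; it assumes the publicly available `xlm-roberta-base` checkpoint and relies only on the standard `PreTrainedTokenizer` API.

```python
>>> from transformers import XLMRobertaTokenizer

>>> # Load the SentencePiece-based tokenizer from a pretrained checkpoint
>>> tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")

>>> # Encoding adds the special tokens described above: <s> ... </s>
>>> encoding = tokenizer("Hello world!")
>>> tokenizer.convert_ids_to_tokens(encoding["input_ids"])[0]
'<s>'
```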
#### build_inputs_with_special_tokens

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/tokenization_xlm_roberta.py#L202)

( token_ids_0: typing.List[int], token_ids_1: typing.Optional[typing.List[int]] = None ) → `List[int]`

**Parameters**

- **token_ids_0** (`List[int]`) — List of IDs to which the special tokens will be added.
- **token_ids_1** (`List[int]`, *optional*) — Optional second list of IDs for sequence pairs.

**Returns**

`List[int]` — List of [input IDs](../glossary#input-ids) with the appropriate special tokens.

Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. An XLM-RoBERTa sequence has the following format:

- single sequence: `<s> X </s>`
- pair of sequences: `<s> A </s></s> B </s>`
data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_0<span class="opacity-60">: typing.List[int]</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_1<span class="opacity-60">: typing.Optional[typing.List[int]] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">already_has_special_tokens<span class="opacity-60">: bool = False</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><code>List[int]</code></span></span></p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaTokenizer.get_special_tokens_mask.token_ids_0" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaTokenizer.get_special_tokens_mask.token_ids_0"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_ids_0</strong> (<code>List[int]</code>) — List of IDs.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaTokenizer.get_special_tokens_mask.token_ids_1" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaTokenizer.get_special_tokens_mask.token_ids_1"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 
1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_ids_1</strong> (<code>List[int]</code>, <em>optional</em>) — Optional second list of IDs for sequence pairs.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaTokenizer.get_special_tokens_mask.already_has_special_tokens" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaTokenizer.get_special_tokens_mask.already_has_special_tokens"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>already_has_special_tokens</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether or not the token list is already formatted with special tokens for the model.</span></span> </li></ul> <div id="transformers.XLMRobertaTokenizer.get_special_tokens_mask.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><code>List[int]</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.</p> </p> </div></div> <p data-svelte-h="svelte-1f4f5kp">Retrieve sequence ids from a token list that has no special tokens added. 
This method is called when adding special tokens using the tokenizer <code>prepare_for_model</code> method.</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XLMRobertaTokenizer.create_token_type_ids_from_sequences"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>create_token_type_ids_from_sequences</span></h4> <a id="transformers.XLMRobertaTokenizer.create_token_type_ids_from_sequences" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XLMRobertaTokenizer.create_token_type_ids_from_sequences"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/tokenization_xlm_roberta.py#L256" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_0<span class="opacity-60">: typing.List[int]</span></span></span><span class="comma 
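For example, reusing the `tokenizer`, `ids_a`, `ids_b` and `pair` names from the sketch above (the values shown in the comments are illustrative):

```python
# Positions of special tokens for a pair built from raw IDs
mask = tokenizer.get_special_tokens_mask(ids_a, ids_b)
# -> 1 at the <s>/</s> positions, 0 everywhere else

# If the list already contains special tokens, say so explicitly
mask_existing = tokenizer.get_special_tokens_mask(pair, already_has_special_tokens=True)
```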
cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_1<span class="opacity-60">: typing.Optional[typing.List[int]] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><code>List[int]</code></span></span></p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaTokenizer.create_token_type_ids_from_sequences.token_ids_0" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaTokenizer.create_token_type_ids_from_sequences.token_ids_0"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_ids_0</strong> (<code>List[int]</code>) — List of IDs.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaTokenizer.create_token_type_ids_from_sequences.token_ids_1" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaTokenizer.create_token_type_ids_from_sequences.token_ids_1"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_ids_1</strong> (<code>List[int]</code>, <em>optional</em>) — Optional second list of IDs for sequence pairs.</span></span> </li></ul> <div id="transformers.XLMRobertaTokenizer.create_token_type_ids_from_sequences.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 
rounded "><p class="text-base">Returns</p> <p><code>List[int]</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>List of zeros.</p> </p> </div></div> <p data-svelte-h="svelte-bub0ru">Create a mask from the two sequences passed to be used in a sequence-pair classification task. XLM-RoBERTa does not make use of token type ids, therefore a list of zeros is returned.</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XLMRobertaTokenizer.save_vocabulary"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>save_vocabulary</span></h4> <a id="transformers.XLMRobertaTokenizer.save_vocabulary" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XLMRobertaTokenizer.save_vocabulary"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/tokenization_xlm_roberta.py#L314" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> 
<span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">save_directory<span class="opacity-60">: str</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">filename_prefix<span class="opacity-60">: typing.Optional[str] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> </div></div></div></div> <h2 class="relative group"><a id="transformers.XLMRobertaTokenizerFast" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaTokenizerFast"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-puyeu6">XLMRobertaTokenizerFast</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XLMRobertaTokenizerFast"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">XLMRobertaTokenizerFast</span></span></h3> <a id="transformers.XLMRobertaTokenizerFast" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XLMRobertaTokenizerFast"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 
## XLMRobertaTokenizerFast

### class transformers.XLMRobertaTokenizerFast

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/tokenization_xlm_roberta_fast.py#L82)

( vocab_file = None tokenizer_file = None bos_token = '<s>' eos_token = '</s>' sep_token = '</s>' cls_token = '<s>' unk_token = '<unk>' pad_token = '<pad>' mask_token = '<mask>' **kwargs )

Parameters
data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaTokenizerFast.vocab_file" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaTokenizerFast.vocab_file"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>vocab_file</strong> (<code>str</code>) — Path to the vocabulary file.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaTokenizerFast.bos_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaTokenizerFast.bos_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>bos_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;s&gt;"</code>) — The beginning of sequence token that was used during pretraining. Can be used a sequence classifier token.<p></p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"> <p>When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. 
The token used is the <code>cls_token</code>.</p> </div></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaTokenizerFast.eos_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaTokenizerFast.eos_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>eos_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;/s&gt;"</code>) — The end of sequence token.<p></p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"> <p>When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the <code>sep_token</code>.</p> </div></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaTokenizerFast.sep_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaTokenizerFast.sep_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>sep_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;/s&gt;"</code>) — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. 
It is also used as the last token of a sequence built with special tokens.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaTokenizerFast.cls_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaTokenizerFast.cls_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>cls_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;s&gt;"</code>) — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaTokenizerFast.unk_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaTokenizerFast.unk_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>unk_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;unk&gt;"</code>) — The unknown token. 
A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaTokenizerFast.pad_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaTokenizerFast.pad_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>pad_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;pad&gt;"</code>) — The token used for padding, for example when batching sequences of different lengths.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaTokenizerFast.mask_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaTokenizerFast.mask_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>mask_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;mask&gt;"</code>) — The token used for masking values. This is the token used when training this model with masked language modeling. 
This is the token which the model will try to predict.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaTokenizerFast.additional_special_tokens" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaTokenizerFast.additional_special_tokens"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>additional_special_tokens</strong> (<code>List[str]</code>, <em>optional</em>, defaults to <code>["&lt;s&gt;NOTUSED", "&lt;/s&gt;NOTUSED"]</code>) — Additional special tokens used by the tokenizer.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-o5k0qm">Construct a “fast” XLM-RoBERTa tokenizer (backed by HuggingFace’s <em>tokenizers</em> library). Adapted from <a href="/docs/transformers/v4.34.0/en/model_doc/roberta#transformers.RobertaTokenizer">RobertaTokenizer</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetTokenizer">XLNetTokenizer</a>. Based on <a href="https://huggingface.co/docs/tokenizers/python/latest/components.html?highlight=BPE#models" rel="nofollow">BPE</a>.</p> <p data-svelte-h="svelte-ttxvs6">This tokenizer inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast">PreTrainedTokenizerFast</a> which contains most of the main methods. 
Users should refer to this superclass for more information regarding those methods.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XLMRobertaTokenizerFast.build_inputs_with_special_tokens"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>build_inputs_with_special_tokens</span></h4> <a id="transformers.XLMRobertaTokenizerFast.build_inputs_with_special_tokens" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XLMRobertaTokenizerFast.build_inputs_with_special_tokens"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/tokenization_xlm_roberta_fast.py#L174" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_0<span class="opacity-60">: typing.List[int]</span></span></span><span class="comma cursor-pointer"><span class="rounded 
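A brief usage sketch (assuming the `xlm-roberta-base` checkpoint is available); calling the tokenizer on a pair of texts applies the `<s> A </s></s> B </s>` format automatically:

```python
from transformers import XLMRobertaTokenizerFast

fast_tokenizer = XLMRobertaTokenizerFast.from_pretrained("xlm-roberta-base")

encoded = fast_tokenizer("Hello world", "How are you?")
print(fast_tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
```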
#### build_inputs_with_special_tokens

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/tokenization_xlm_roberta_fast.py#L174)

( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → `List[int]`

Parameters

- **token_ids_0** (`List[int]`) — List of IDs to which the special tokens will be added.
- **token_ids_1** (`List[int]`, *optional*) — Optional second list of IDs for sequence pairs.

Returns

`List[int]`

List of [input IDs](../glossary#input-ids) with the appropriate special tokens.

Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. An XLM-RoBERTa sequence has the following format:

- single sequence: `<s> X </s>`
- pair of sequences: `<s> A </s></s> B </s>`
href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/tokenization_xlm_roberta_fast.py#L200" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_0<span class="opacity-60">: typing.List[int]</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_1<span class="opacity-60">: typing.Optional[typing.List[int]] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><code>List[int]</code></span></span></p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaTokenizerFast.create_token_type_ids_from_sequences.token_ids_0" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaTokenizerFast.create_token_type_ids_from_sequences.token_ids_0"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_ids_0</strong> (<code>List[int]</code>) — List of IDs.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaTokenizerFast.create_token_type_ids_from_sequences.token_ids_1" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaTokenizerFast.create_token_type_ids_from_sequences.token_ids_1"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 
1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_ids_1</strong> (<code>List[int]</code>, <em>optional</em>) — Optional second list of IDs for sequence pairs.</span></span> </li></ul> <div id="transformers.XLMRobertaTokenizerFast.create_token_type_ids_from_sequences.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><code>List[int]</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>List of zeros.</p> </p> </div></div> <p data-svelte-h="svelte-bub0ru">Create a mask from the two sequences passed to be used in a sequence-pair classification task. XLM-RoBERTa does not make use of token type ids, therefore a list of zeros is returned.</p></div></div> <h2 class="relative group"><a id="transformers.XLMRobertaModel" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaModel"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-ax53o8">XLMRobertaModel</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XLMRobertaModel"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span 
class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">XLMRobertaModel</span></span></h3> <a id="transformers.XLMRobertaModel" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XLMRobertaModel"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_xlm_roberta.py#L693" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60"></span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">add_pooling_layer<span class="opacity-60"> = True</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaModel.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaModel.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig">XLMRobertaConfig</a>) — Model 
configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1xk02f4">The bare XLM-RoBERTa Model transformer outputting raw hidden-states without any specific head on top.</p> <p data-svelte-h="svelte-hmtw9k">This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel">PreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)</p> <p data-svelte-h="svelte-hswkmf">This model is also a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.</p> <p data-svelte-h="svelte-rehfhh">The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of cross-attention is added between the self-attention layers, following the architecture described in <em>Attention is all you need</em>_ by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.</p> <p data-svelte-h="svelte-174erte">To behave as an decoder the model needs to be initialized with the <code>is_decoder</code> argument of the configuration set to <code>True</code>. To be used in a Seq2Seq model, the model needs to initialized with both <code>is_decoder</code> argument and <code>add_cross_attention</code> set to <code>True</code>; an <code>encoder_hidden_states</code> is then expected as an input to the forward pass.</p> <p data-svelte-h="svelte-p9qvd1">.. 
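As a rough illustration of the decoder configuration described above, the flags are set on the config before the weights are loaded. This is a minimal sketch, not taken from the library's own examples; `xlm-roberta-base` is only used as a stand-in checkpoint:

```python
from transformers import AutoConfig, XLMRobertaModel

# Assumed setup: turn the encoder checkpoint into a decoder with cross-attention.
config = AutoConfig.from_pretrained("xlm-roberta-base")
config.is_decoder = True            # causal self-attention mask
config.add_cross_attention = True   # adds (newly initialized) cross-attention layers

model = XLMRobertaModel.from_pretrained("xlm-roberta-base", config=config)
# In a Seq2Seq setting, forward() would then additionally receive
# encoder_hidden_states (and optionally encoder_attention_mask) from the encoder.
```

The cross-attention layers created this way are newly initialized and would need to be trained before the decoder produces useful outputs.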
#### forward

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_xlm_roberta.py#L736)

( input_ids: typing.Optional[torch.Tensor] = None, attention_mask: typing.Optional[torch.Tensor] = None, token_type_ids: typing.Optional[torch.Tensor] = None, position_ids: typing.Optional[torch.Tensor] = None, head_mask: typing.Optional[torch.Tensor] = None, inputs_embeds: typing.Optional[torch.Tensor] = None, encoder_hidden_states: typing.Optional[torch.Tensor] = None, encoder_attention_mask: typing.Optional[torch.Tensor] = None, past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None, use_cache: typing.Optional[bool] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → [transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions) or `tuple(torch.FloatTensor)`
text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaModel.forward.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaModel.forward.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>) — Indices of input sequence tokens in the vocabulary.<p></p> <p>Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p> <p><a href="../glossary#input-ids">What are input IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaModel.forward.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaModel.forward.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaModel.forward.token_type_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaModel.forward.token_type_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_type_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in <code>[0, 1]</code>:<p></p> <ul> <li>0 corresponds to a <em>sentence A</em> token,</li> <li>1 corresponds to a <em>sentence B</em> token.</li> </ul> <p><a href="../glossary#token-type-ids">What are token type IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaModel.forward.position_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaModel.forward.position_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>position_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Indices of positions of each input sequence tokens in the position embeddings. 
Selected in the range <code>[0, config.max_position_embeddings - 1]</code>.<p></p> <p><a href="../glossary#position-ids">What are position IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaModel.forward.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaModel.forward.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>head_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(num_heads,)</code> or <code>(num_layers, num_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the self-attention modules. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaModel.forward.inputs_embeds" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaModel.forward.inputs_embeds"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>inputs_embeds</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>, <em>optional</em>) — Optionally, instead of passing <code>input_ids</code> you can choose to directly pass an embedded representation. 
This is useful if you want more control over how to convert <code>input_ids</code> indices into associated vectors than the model’s internal embedding lookup matrix.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaModel.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaModel.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. See <code>attentions</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaModel.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaModel.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. 
See <code>hidden_states</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaModel.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaModel.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaModel.forward.encoder_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaModel.forward.encoder_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>encoder_hidden_states</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>, <em>optional</em>) — Sequence of hidden-states at the output of the last layer of the encoder. 
Used in the cross-attention if the model is configured as a decoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaModel.forward.encoder_attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaModel.forward.encoder_attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>encoder_attention_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaModel.forward.past_key_values" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaModel.forward.past_key_values"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>past_key_values</strong> (<code>tuple(tuple(torch.FloatTensor))</code> of length <code>config.n_layers</code> with each tuple having 4 tensors of shape <code>(batch_size, num_heads, sequence_length - 1, embed_size_per_head)</code>) — Contains precomputed key and value hidden states of the attention blocks. 
Can be used to speed up decoding.<p></p> <p>If <code>past_key_values</code> are used, the user can optionally input only the last <code>decoder_input_ids</code> (those that don’t have their past key value states given to this model) of shape <code>(batch_size, 1)</code> instead of all <code>decoder_input_ids</code> of shape <code>(batch_size, sequence_length)</code>.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaModel.forward.use_cache" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaModel.forward.use_cache"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>use_cache</strong> (<code>bool</code>, <em>optional</em>) — If set to <code>True</code>, <code>past_key_values</code> key value states are returned and can be used to speed up decoding (see <code>past_key_values</code>).</span></span> </li></ul> <div id="transformers.XLMRobertaModel.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions">transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions</a> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions">transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig">XLMRobertaConfig</a>) and inputs.</p> <ul> <li> <p><strong>last_hidden_state</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>) — Sequence of hidden-states at the output of the last layer of the model.</p> </li> <li> <p><strong>pooler_output</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, hidden_size)</code>) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. 
for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> <li> <p><strong>cross_attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> and <code>config.add_cross_attention=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.</p> </li> <li> <p><strong>past_key_values</strong> (<code>tuple(tuple(torch.FloatTensor))</code>, <em>optional</em>, returned when <code>use_cache=True</code> is passed or when <code>config.use_cache=True</code>) — Tuple of <code>tuple(torch.FloatTensor)</code> of length <code>config.n_layers</code>, with each tuple having 2 tensors of shape <code>(batch_size, num_heads, sequence_length, embed_size_per_head)</code>) and optionally if <code>config.is_encoder_decoder=True</code> 2 additional tensors of shape <code>(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)</code>.</p> <p>Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if <code>config.is_encoder_decoder=True</code> in the cross-attention blocks) that can be used (see <code>past_key_values</code> input) to speed up sequential decoding.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-e4nu96">The <a href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaModel">XLMRobertaModel</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative 
group rounded-md"><a id="transformers.XLMRobertaModel.forward.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaModel.forward.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, XLMRobertaModel <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"xlm-roberta-base"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = XLMRobertaModel.from_pretrained(<span class="hljs-string">"xlm-roberta-base"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(<span class="hljs-string">"Hello, my dog is cute"</span>, return_tensors=<span class="hljs-string">"pt"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model(**inputs) <span class="hljs-meta">&gt;&gt;&gt; </span>last_hidden_states = outputs.last_hidden_state</pre></div></div></div></div> <h2 class="relative group"><a id="transformers.XLMRobertaForCausalLM" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaForCausalLM"><span><svg class="" xmlns="http://www.w3.org/2000/svg" 
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1jgc608">XLMRobertaForCausalLM</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XLMRobertaForCausalLM"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">XLMRobertaForCausalLM</span></span></h3> <a id="transformers.XLMRobertaForCausalLM" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XLMRobertaForCausalLM"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_xlm_roberta.py#L879" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p 
class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaForCausalLM.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaForCausalLM.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig">XLMRobertaConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-15apchw">XLM-RoBERTa Model with a <code>language modeling</code> head on top for CLM fine-tuning.</p> <p data-svelte-h="svelte-hmtw9k">This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel">PreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)</p> <p data-svelte-h="svelte-hswkmf">This model is also a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> subclass. 
#### forward

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_xlm_roberta.py#L900)

( input_ids: typing.Optional[torch.LongTensor] = None, attention_mask: typing.Optional[torch.FloatTensor] = None, token_type_ids: typing.Optional[torch.LongTensor] = None, position_ids: typing.Optional[torch.LongTensor] = None, head_mask: typing.Optional[torch.FloatTensor] = None, inputs_embeds: typing.Optional[torch.FloatTensor] = None, encoder_hidden_states: typing.Optional[torch.FloatTensor] = None, encoder_attention_mask: typing.Optional[torch.FloatTensor] = None, labels: typing.Optional[torch.LongTensor] = None, past_key_values: typing.Tuple[typing.Tuple[torch.FloatTensor]] = None, use_cache: typing.Optional[bool] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → [transformers.modeling_outputs.CausalLMOutputWithCrossAttentions](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.CausalLMOutputWithCrossAttentions) or `tuple(torch.FloatTensor)`
Parameters

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See `PreTrainedTokenizer.encode()` and `PreTrainedTokenizer.__call__()` for details. [What are input IDs?](../glossary#input-ids)
- **attention_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask)
- **token_type_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: 0 corresponds to a *sentence A* token, 1 corresponds to a *sentence B* token. [What are token type IDs?](../glossary#token-type-ids)
- **position_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids)
- **head_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: 1 indicates the head is **not masked**, 0 indicates the head is **masked**.
- **inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model’s internal embedding lookup matrix.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers.
See <code>attentions</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaForCausalLM.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaForCausalLM.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. See <code>hidden_states</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaForCausalLM.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaForCausalLM.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaForCausalLM.forward.encoder_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaForCausalLM.forward.encoder_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 
256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>encoder_hidden_states</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>, <em>optional</em>) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaForCausalLM.forward.encoder_attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaForCausalLM.forward.encoder_attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>encoder_attention_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaForCausalLM.forward.labels" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaForCausalLM.forward.labels"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>labels</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in <code>[-100, 0, ..., config.vocab_size]</code> (see <code>input_ids</code> docstring) Tokens with indices set to <code>-100</code> are ignored (masked), the loss is only computed for the tokens with labels in <code>[0, ..., config.vocab_size]</code></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaForCausalLM.forward.past_key_values" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaForCausalLM.forward.past_key_values"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>past_key_values</strong> (<code>tuple(tuple(torch.FloatTensor))</code> of length <code>config.n_layers</code> with each tuple having 4 tensors of shape <code>(batch_size, num_heads, sequence_length - 1, embed_size_per_head)</code>) — Contains precomputed key and value hidden states of the attention blocks. 
Can be used to speed up decoding.<p></p> <p>If <code>past_key_values</code> are used, the user can optionally input only the last <code>decoder_input_ids</code> (those that don’t have their past key value states given to this model) of shape <code>(batch_size, 1)</code> instead of all <code>decoder_input_ids</code> of shape <code>(batch_size, sequence_length)</code>.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaForCausalLM.forward.use_cache" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaForCausalLM.forward.use_cache"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>use_cache</strong> (<code>bool</code>, <em>optional</em>) — If set to <code>True</code>, <code>past_key_values</code> key value states are returned and can be used to speed up decoding (see <code>past_key_values</code>).</span></span> </li></ul> <div id="transformers.XLMRobertaForCausalLM.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.CausalLMOutputWithCrossAttentions">transformers.modeling_outputs.CausalLMOutputWithCrossAttentions</a> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.CausalLMOutputWithCrossAttentions">transformers.modeling_outputs.CausalLMOutputWithCrossAttentions</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig">XLMRobertaConfig</a>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<code>torch.FloatTensor</code> of shape <code>(1,)</code>, <em>optional</em>, returned when <code>labels</code> is provided) — Language modeling loss (for next-token prediction).</p> </li> <li> <p><strong>logits</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, config.vocab_size)</code>) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when 
<code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> <li> <p><strong>cross_attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Cross attentions weights after the attention softmax, used to compute the weighted average in the cross-attention heads.</p> </li> <li> <p><strong>past_key_values</strong> (<code>tuple(tuple(torch.FloatTensor))</code>, <em>optional</em>, returned when <code>use_cache=True</code> is passed or when <code>config.use_cache=True</code>) — Tuple of <code>torch.FloatTensor</code> tuples of length <code>config.n_layers</code>, with each tuple containing the cached key, value states of the self-attention and the cross-attention layers if model is used in encoder-decoder setting. 
Only relevant if <code>config.is_decoder = True</code>.</p> <p>Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see <code>past_key_values</code> input) to speed up sequential decoding.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-gnnedi">The <a href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaForCausalLM">XLMRobertaForCausalLM</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.XLMRobertaForCausalLM.forward.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaForCausalLM.forward.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, XLMRobertaForCausalLM, 
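The `encoder_hidden_states` and `encoder_attention_mask` arguments above only take effect when the model is configured as a decoder with cross-attention layers. The snippet below is a minimal, hypothetical sketch of how they could be wired up and is not part of the official example that follows: the checkpoint name is illustrative, the random tensor stands in for the output of a real encoder, and the newly added cross-attention weights are randomly initialised, so the outputs are untrained.

```python
>>> import torch
>>> from transformers import AutoConfig, AutoTokenizer, XLMRobertaForCausalLM

>>> # assumption: any XLM-RoBERTa checkpoint works here; "xlm-roberta-base" is illustrative
>>> config = AutoConfig.from_pretrained("xlm-roberta-base")
>>> config.is_decoder = True           # use the causal (decoder) attention mask
>>> config.add_cross_attention = True  # add cross-attention layers (randomly initialised)
>>> model = XLMRobertaForCausalLM.from_pretrained("xlm-roberta-base", config=config)
>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> # stand-in for the output of a real encoder: (batch_size, encoder_seq_len, hidden_size)
>>> encoder_hidden_states = torch.randn(1, 6, config.hidden_size)
>>> outputs = model(**inputs, encoder_hidden_states=encoder_hidden_states)
>>> logits = outputs.logits
```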
Example:

```python
>>> from transformers import AutoTokenizer, XLMRobertaForCausalLM, AutoConfig
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("roberta-base")
>>> config = AutoConfig.from_pretrained("roberta-base")
>>> config.is_decoder = True
>>> model = XLMRobertaForCausalLM.from_pretrained("roberta-base", config=config)

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)

>>> prediction_logits = outputs.logits
```
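Building on the example above, setting `use_cache=True` makes the forward pass return `past_key_values`, which can then be fed back so that only the newly generated token has to be passed at each step. The snippet below is a minimal sketch of that pattern rather than an official recipe; in practice, `model.generate()` handles this caching automatically.

```python
>>> import torch
>>> from transformers import AutoConfig, AutoTokenizer, XLMRobertaForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("roberta-base")
>>> config = AutoConfig.from_pretrained("roberta-base")
>>> config.is_decoder = True
>>> model = XLMRobertaForCausalLM.from_pretrained("roberta-base", config=config)

>>> input_ids = tokenizer("Hello, my dog is", return_tensors="pt").input_ids
>>> with torch.no_grad():
...     # first pass over the full prefix fills the key/value cache
...     outputs = model(input_ids, use_cache=True)
...     next_token = outputs.logits[:, -1].argmax(dim=-1, keepdim=True)
...     # later passes only need the new token plus the cached states
...     outputs = model(next_token, past_key_values=outputs.past_key_values, use_cache=True)
```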
id="transformers.XLMRobertaForMaskedLM" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XLMRobertaForMaskedLM"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_xlm_roberta.py#L1034" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaForMaskedLM.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaForMaskedLM.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig">XLMRobertaConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. 
Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1ponqk3">XLM-RoBERTa Model with a <code>language modeling</code> head on top.</p> <p data-svelte-h="svelte-hmtw9k">This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel">PreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)</p> <p data-svelte-h="svelte-hswkmf">This model is also a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XLMRobertaForMaskedLM.forward"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>forward</span></h4> <a id="transformers.XLMRobertaForMaskedLM.forward" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XLMRobertaForMaskedLM.forward"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 
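For quick masked-token prediction, the same checkpoints can also be used through the `fill-mask` pipeline. The snippet below is a usage sketch rather than part of the original reference; the `xlm-roberta-base` checkpoint name simply matches the example further down.

```python
>>> from transformers import pipeline

>>> fill_mask = pipeline("fill-mask", model="xlm-roberta-base")
>>> # XLM-RoBERTa uses "<mask>" as its mask token
>>> predictions = fill_mask("The capital of France is <mask>.")
>>> [p["token_str"] for p in predictions[:2]]  # top-scoring candidate tokens
```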
#### forward

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_xlm_roberta.py#L1058)

( input_ids: typing.Optional[torch.LongTensor] = None, attention_mask: typing.Optional[torch.FloatTensor] = None, token_type_ids: typing.Optional[torch.LongTensor] = None, position_ids: typing.Optional[torch.LongTensor] = None, head_mask: typing.Optional[torch.FloatTensor] = None, inputs_embeds: typing.Optional[torch.FloatTensor] = None, encoder_hidden_states: typing.Optional[torch.FloatTensor] = None, encoder_attention_mask: typing.Optional[torch.FloatTensor] = None, labels: typing.Optional[torch.LongTensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → [transformers.modeling_outputs.MaskedLMOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.MaskedLMOutput) or `tuple(torch.FloatTensor)`

Parameters

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary.

  Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [`PreTrainedTokenizer.encode()`](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [`PreTrainedTokenizer.__call__()`](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details.

  [What are input IDs?](../glossary#input-ids)
- **attention_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.

  [What are attention masks?](../glossary#attention-mask)
- **token_type_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:
  - 0 corresponds to a *sentence A* token,
  - 1 corresponds to a *sentence B* token.

  [What are token type IDs?](../glossary#token-type-ids)
- **position_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`.

  [What are position IDs?](../glossary#position-ids)
- **head_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`:
  - 1 indicates the head is **not masked**,
  - 0 indicates the head is **masked**.
- **inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.
- **labels** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ..., config.vocab_size]` (see the `input_ids` docstring). Tokens with indices set to `-100` are ignored (masked); the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
- **kwargs** (`Dict[str, any]`, *optional*, defaults to `{}`) — Used to hide legacy arguments that have been deprecated.

Returns: [transformers.modeling_outputs.MaskedLMOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.MaskedLMOutput) or `tuple(torch.FloatTensor)`

A [transformers.modeling_outputs.MaskedLMOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.MaskedLMOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([XLMRobertaConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig)) and inputs.

- **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) — Masked language modeling (MLM) loss.
<p><strong>logits</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, config.vocab_size)</code>) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-zfkk2u">The <a href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaForMaskedLM">XLMRobertaForMaskedLM</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.XLMRobertaForMaskedLM.forward.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaForMaskedLM.forward.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" 
xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, XLMRobertaForMaskedLM <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"xlm-roberta-base"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = XLMRobertaForMaskedLM.from_pretrained(<span class="hljs-string">"xlm-roberta-base"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(<span class="hljs-string">"The capital of France is &lt;mask&gt;."</span>, return_tensors=<span class="hljs-string">"pt"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">with</span> torch.no_grad(): <span class="hljs-meta">... </span> logits = model(**inputs).logits <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># retrieve index of &lt;mask&gt;</span> <span class="hljs-meta">&gt;&gt;&gt; </span>mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[<span class="hljs-number">0</span>].nonzero(as_tuple=<span class="hljs-literal">True</span>)[<span class="hljs-number">0</span>] <span class="hljs-meta">&gt;&gt;&gt; </span>predicted_token_id = logits[<span class="hljs-number">0</span>, mask_token_index].argmax(axis=-<span class="hljs-number">1</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer.decode(predicted_token_id) <span class="hljs-string">' Paris'</span> <span class="hljs-meta">&gt;&gt;&gt; </span>labels = tokenizer(<span class="hljs-string">"The capital of France is Paris."</span>, return_tensors=<span class="hljs-string">"pt"</span>)[<span class="hljs-string">"input_ids"</span>] <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># mask labels of non-&lt;mask&gt; tokens</span> <span class="hljs-meta">&gt;&gt;&gt; </span>labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -<span class="hljs-number">100</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model(**inputs, labels=labels) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-built_in">round</span>(outputs.loss.item(), <span class="hljs-number">2</span>) <span class="hljs-number">0.1</span></pre></div></div></div></div> <h2 class="relative group"><a id="transformers.XLMRobertaForSequenceClassification" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" 
href="#transformers.XLMRobertaForSequenceClassification"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1xicp7">XLMRobertaForSequenceClassification</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XLMRobertaForSequenceClassification"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">XLMRobertaForSequenceClassification</span></span></h3> <a id="transformers.XLMRobertaForSequenceClassification" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XLMRobertaForSequenceClassification"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_xlm_roberta.py#L1167" target="_blank"><span 
data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaForSequenceClassification.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaForSequenceClassification.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig">XLMRobertaConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-nc5ddr">XLM-RoBERTa Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks.</p> <p data-svelte-h="svelte-hmtw9k">This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel">PreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)</p> <p data-svelte-h="svelte-hswkmf">This model is also a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> subclass. 
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XLMRobertaForSequenceClassification.forward"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>forward</span></h4> <a id="transformers.XLMRobertaForSequenceClassification.forward" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XLMRobertaForSequenceClassification.forward"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_xlm_roberta.py#L1179" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black 
( input_ids: typing.Optional[torch.LongTensor] = None, attention_mask: typing.Optional[torch.FloatTensor] = None, token_type_ids: typing.Optional[torch.LongTensor] = None, position_ids: typing.Optional[torch.LongTensor] = None, head_mask: typing.Optional[torch.FloatTensor] = None, inputs_embeds: typing.Optional[torch.FloatTensor] = None, labels: typing.Optional[torch.LongTensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → [transformers.modeling_outputs.SequenceClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput) or `tuple(torch.FloatTensor)`

Parameters:

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.__call__()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details. [What are input IDs?](../glossary#input-ids)
- **attention_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask)
- **token_type_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: 0 corresponds to a *sentence A* token, 1 corresponds to a *sentence B* token. [What are token type IDs?](../glossary#token-type-ids)
- **position_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids)
- **head_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: 1 indicates the head is **not masked**, 0 indicates the head is **masked**.
- **inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.
- **labels** (`torch.LongTensor` of shape `(batch_size,)`, *optional*) — Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss); if `config.num_labels > 1` a classification loss is computed (Cross-Entropy). A regression sketch follows the classification examples below.

Returns: [transformers.modeling_outputs.SequenceClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput) or `tuple(torch.FloatTensor)`

A [transformers.modeling_outputs.SequenceClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([XLMRobertaConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig)) and inputs.

- **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) — Classification (or regression if `config.num_labels==1`) loss.
- **logits** (`torch.FloatTensor` of shape `(batch_size, config.num_labels)`) — Classification (or regression if `config.num_labels==1`) scores (before SoftMax).
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The [XLMRobertaForSequenceClassification](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaForSequenceClassification) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
<div class="relative group rounded-md"><a id="transformers.XLMRobertaForSequenceClassification.forward.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaForSequenceClassification.forward.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-ykxpe4">Example of single-label classification:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, XLMRobertaForSequenceClassification <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"cardiffnlp/twitter-roberta-base-emotion"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = XLMRobertaForSequenceClassification.from_pretrained(<span class="hljs-string">"cardiffnlp/twitter-roberta-base-emotion"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(<span class="hljs-string">"Hello, my dog is cute"</span>, return_tensors=<span class="hljs-string">"pt"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">with</span> torch.no_grad(): <span class="hljs-meta">... 
</span> logits = model(**inputs).logits <span class="hljs-meta">&gt;&gt;&gt; </span>predicted_class_id = logits.argmax().item() <span class="hljs-meta">&gt;&gt;&gt; </span>model.config.id2label[predicted_class_id] <span class="hljs-string">'optimism'</span> <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`</span> <span class="hljs-meta">&gt;&gt;&gt; </span>num_labels = <span class="hljs-built_in">len</span>(model.config.id2label) <span class="hljs-meta">&gt;&gt;&gt; </span>model = XLMRobertaForSequenceClassification.from_pretrained(<span class="hljs-string">"cardiffnlp/twitter-roberta-base-emotion"</span>, num_labels=num_labels) <span class="hljs-meta">&gt;&gt;&gt; </span>labels = torch.tensor([<span class="hljs-number">1</span>]) <span class="hljs-meta">&gt;&gt;&gt; </span>loss = model(**inputs, labels=labels).loss <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-built_in">round</span>(loss.item(), <span class="hljs-number">2</span>) <span class="hljs-number">0.08</span></pre></div></div> <div class="relative group rounded-md"><a id="transformers.XLMRobertaForSequenceClassification.forward.example-2" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaForSequenceClassification.forward.example-2"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-1l8e32d">Example of multi-label classification:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span 
class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, XLMRobertaForSequenceClassification <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"cardiffnlp/twitter-roberta-base-emotion"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = XLMRobertaForSequenceClassification.from_pretrained(<span class="hljs-string">"cardiffnlp/twitter-roberta-base-emotion"</span>, problem_type=<span class="hljs-string">"multi_label_classification"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(<span class="hljs-string">"Hello, my dog is cute"</span>, return_tensors=<span class="hljs-string">"pt"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">with</span> torch.no_grad(): <span class="hljs-meta">... </span> logits = model(**inputs).logits <span class="hljs-meta">&gt;&gt;&gt; </span>predicted_class_ids = torch.arange(<span class="hljs-number">0</span>, logits.shape[-<span class="hljs-number">1</span>])[torch.sigmoid(logits).squeeze(dim=<span class="hljs-number">0</span>) &gt; <span class="hljs-number">0.5</span>] <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`</span> <span class="hljs-meta">&gt;&gt;&gt; </span>num_labels = <span class="hljs-built_in">len</span>(model.config.id2label) <span class="hljs-meta">&gt;&gt;&gt; </span>model = XLMRobertaForSequenceClassification.from_pretrained( <span class="hljs-meta">... </span> <span class="hljs-string">"cardiffnlp/twitter-roberta-base-emotion"</span>, num_labels=num_labels, problem_type=<span class="hljs-string">"multi_label_classification"</span> <span class="hljs-meta">... </span>) <span class="hljs-meta">&gt;&gt;&gt; </span>labels = torch.<span class="hljs-built_in">sum</span>( <span class="hljs-meta">... </span> torch.nn.functional.one_hot(predicted_class_ids[<span class="hljs-literal">None</span>, :].clone(), num_classes=num_labels), dim=<span class="hljs-number">1</span> <span class="hljs-meta">... 
</span>).to(torch.<span class="hljs-built_in">float</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>loss = model(**inputs, labels=labels).loss</pre></div></div></div></div> <h2 class="relative group"><a id="transformers.XLMRobertaForMultipleChoice" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaForMultipleChoice"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-gouirp">XLMRobertaForMultipleChoice</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XLMRobertaForMultipleChoice"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">XLMRobertaForMultipleChoice</span></span></h3> <a id="transformers.XLMRobertaForMultipleChoice" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XLMRobertaForMultipleChoice"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 
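The regression case mentioned under `labels` is selected by loading the model with `num_labels=1` and passing float targets. The following is a minimal sketch of that path, assuming the generic `xlm-roberta-base` checkpoint (its regression head is newly initialized, so the loss value itself is not meaningful):

```python
>>> import torch
>>> from transformers import AutoTokenizer, XLMRobertaForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
>>> # num_labels=1 makes the model compute the Mean-Square (regression) loss
>>> model = XLMRobertaForSequenceClassification.from_pretrained("xlm-roberta-base", num_labels=1)

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> # one float target per example
>>> labels = torch.tensor([0.7])
>>> outputs = model(**inputs, labels=labels)

>>> outputs.logits.shape  # (batch_size, num_labels)
torch.Size([1, 1])
```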
## XLMRobertaForMultipleChoice

### class transformers.XLMRobertaForMultipleChoice

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_xlm_roberta.py#L1267)

( config )

Parameters:

- **config** ([XLMRobertaConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

XLM-RoBERTa Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax), e.g. for RocStories/SWAG tasks.

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
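As a rough usage sketch (not taken from the reference examples), each candidate choice is paired with the prompt and the encoded inputs are reshaped to `(batch_size, num_choices, sequence_length)`. With the generic `xlm-roberta-base` checkpoint the multiple-choice head is newly initialized, so the predicted choice is only illustrative:

```python
>>> import torch
>>> from transformers import AutoTokenizer, XLMRobertaForMultipleChoice

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
>>> model = XLMRobertaForMultipleChoice.from_pretrained("xlm-roberta-base")

>>> prompt = "In Italy, pizza served in formal settings is presented unsliced."
>>> choice0 = "It is eaten with a fork and a knife."
>>> choice1 = "It is eaten while held in the hand."

>>> # tokenize each (prompt, choice) pair, then add the num_choices dimension with unsqueeze
>>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
>>> labels = torch.tensor(0).unsqueeze(0)  # choice0 is the correct answer here
>>> outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels)

>>> outputs.logits.shape  # (batch_size, num_choices)
torch.Size([1, 2])
```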
#### forward

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_xlm_roberta.py#L1278)

`( input_ids: typing.Optional[torch.LongTensor] = None, token_type_ids: typing.Optional[torch.LongTensor] = None, attention_mask: typing.Optional[torch.FloatTensor] = None, labels: typing.Optional[torch.LongTensor] = None, position_ids: typing.Optional[torch.LongTensor] = None, head_mask: typing.Optional[torch.FloatTensor] = None, inputs_embeds: typing.Optional[torch.FloatTensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None )` → [transformers.modeling_outputs.MultipleChoiceModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.MultipleChoiceModelOutput) or `tuple(torch.FloatTensor)`

**Parameters**

- **input_ids** (`torch.LongTensor` of shape `(batch_size, num_choices, sequence_length)`) -- Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See `PreTrainedTokenizer.encode()` and `PreTrainedTokenizer.__call__()` for details. [What are input IDs?](../glossary#input-ids)
- **attention_mask** (`torch.FloatTensor` of shape `(batch_size, num_choices, sequence_length)`, *optional*) -- Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask)
- **token_type_ids** (`torch.LongTensor` of shape `(batch_size, num_choices, sequence_length)`, *optional*) -- Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: 0 corresponds to a *sentence A* token, 1 corresponds to a *sentence B* token. [What are token type IDs?](../glossary#token-type-ids)
- **position_ids** (`torch.LongTensor` of shape `(batch_size, num_choices, sequence_length)`, *optional*) -- Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids)
- **head_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) -- Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: 1 indicates the head is **not masked**, 0 indicates the head is **masked**.
- **inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, num_choices, sequence_length, hidden_size)`, *optional*) -- Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **output_attentions** (`bool`, *optional*) -- Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) -- Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) -- Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.
- **labels** (`torch.LongTensor` of shape `(batch_size,)`, *optional*) -- Labels for computing the multiple choice classification loss. Indices should be in `[0, ..., num_choices-1]`, where `num_choices` is the size of the second dimension of the input tensors (see `input_ids` above).

**Returns**

[transformers.modeling_outputs.MultipleChoiceModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.MultipleChoiceModelOutput) or `tuple(torch.FloatTensor)`

A [transformers.modeling_outputs.MultipleChoiceModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.MultipleChoiceModelOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([XLMRobertaConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig)) and inputs.

- **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) -- Classification loss.
- **logits** (`torch.FloatTensor` of shape `(batch_size, num_choices)`) -- Classification scores (before SoftMax). *num_choices* is the second dimension of the input tensors (see *input_ids* above).
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) -- Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) -- Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The [XLMRobertaForMultipleChoice](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaForMultipleChoice) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:

```python
>>> from transformers import AutoTokenizer, XLMRobertaForMultipleChoice
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
>>> model = XLMRobertaForMultipleChoice.from_pretrained("xlm-roberta-base")

>>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
>>> choice0 = "It is eaten with a fork and a knife."
>>> choice1 = "It is eaten while held in the hand."
>>> labels = torch.tensor(0).unsqueeze(0)  # choice0 is correct (according to Wikipedia ;)), batch size 1

>>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
>>> outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels)  # batch size is 1

>>> # the linear classifier still needs to be trained
>>> loss = outputs.loss
>>> logits = outputs.logits
```
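As a small follow-up sketch (not part of the original reference, and continuing the variables from the example above), the predicted choice can be read directly off the `(batch_size, num_choices)` logits:

```python
>>> # Continues the example above; with an untrained head the prediction is not meaningful yet
>>> predicted_choice = logits.argmax(dim=-1).item()  # index into [choice0, choice1]
```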
## XLMRobertaForTokenClassification

### class transformers.XLMRobertaForTokenClassification
[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_xlm_roberta.py#L1362)

`( config )`

**Parameters**

- **config** ([XLMRobertaConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig)) -- Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

XLM-RoBERTa Model with a token classification head on top (a linear layer on top of the hidden-states output), e.g. for Named-Entity-Recognition (NER) tasks.

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
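Since the size of the token classification head is driven by the configuration, a custom label mapping is usually supplied when loading a base checkpoint. A minimal sketch (the three-label scheme and the `xlm-roberta-base` checkpoint are purely illustrative, not from the original reference):

```python
>>> from transformers import XLMRobertaForTokenClassification

>>> # Hypothetical NER label scheme used only for illustration
>>> id2label = {0: "O", 1: "B-PER", 2: "I-PER"}
>>> label2id = {label: idx for idx, label in id2label.items()}

>>> # num_labels/id2label/label2id are forwarded to the config; the new head is randomly initialized
>>> model = XLMRobertaForTokenClassification.from_pretrained(
...     "xlm-roberta-base", num_labels=3, id2label=id2label, label2id=label2id
... )
```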
#### forward

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_xlm_roberta.py#L1377)

`( input_ids: typing.Optional[torch.LongTensor] = None, attention_mask: typing.Optional[torch.FloatTensor] = None, token_type_ids: typing.Optional[torch.LongTensor] = None, position_ids: typing.Optional[torch.LongTensor] = None, head_mask: typing.Optional[torch.FloatTensor] = None, inputs_embeds: typing.Optional[torch.FloatTensor] = None, labels: typing.Optional[torch.LongTensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None )` → [transformers.modeling_outputs.TokenClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.TokenClassifierOutput) or `tuple(torch.FloatTensor)`

**Parameters**

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) -- Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See `PreTrainedTokenizer.encode()` and `PreTrainedTokenizer.__call__()` for details. [What are input IDs?](../glossary#input-ids)
- **attention_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*) -- Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask)
- **token_type_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) -- Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: 0 corresponds to a *sentence A* token, 1 corresponds to a *sentence B* token. [What are token type IDs?](../glossary#token-type-ids)
- **position_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) -- Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids)
- **head_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) -- Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: 1 indicates the head is **not masked**, 0 indicates the head is **masked**.
- **inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) -- Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **output_attentions** (`bool`, *optional*) -- Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) -- Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) -- Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.
- **labels** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) -- Labels for computing the token classification loss. Indices should be in `[0, ..., config.num_labels - 1]`.

**Returns**

[transformers.modeling_outputs.TokenClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.TokenClassifierOutput) or `tuple(torch.FloatTensor)`

A [transformers.modeling_outputs.TokenClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.TokenClassifierOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([XLMRobertaConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig)) and inputs.

- **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) -- Classification loss.
- **logits** (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.num_labels)`) -- Classification scores (before SoftMax).
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) -- Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) -- Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The [XLMRobertaForTokenClassification](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaForTokenClassification) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:

```python
>>> from transformers import AutoTokenizer, XLMRobertaForTokenClassification
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("Jean-Baptiste/roberta-large-ner-english")
>>> model = XLMRobertaForTokenClassification.from_pretrained("Jean-Baptiste/roberta-large-ner-english")

>>> inputs = tokenizer(
...     "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
... )

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_token_class_ids = logits.argmax(-1)

>>> # Note that tokens are classified rather than input words, which means that
>>> # there might be more predicted token classes than words.
>>> # Multiple token classes might account for the same word
>>> predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
>>> predicted_tokens_classes
['O', 'ORG', 'ORG', 'O', 'O', 'O', 'O', 'O', 'LOC', 'O', 'LOC', 'LOC']

>>> labels = predicted_token_class_ids
>>> loss = model(**inputs, labels=labels).loss
>>> round(loss.item(), 2)
0.01
```

## XLMRobertaForQuestionAnswering
height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">XLMRobertaForQuestionAnswering</span></span></h3> <a id="transformers.XLMRobertaForQuestionAnswering" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XLMRobertaForQuestionAnswering"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_xlm_roberta.py#L1471" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaForQuestionAnswering.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaForQuestionAnswering.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 
0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig">XLMRobertaConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1x9d61o">XLM-RoBERTa Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute <code>span start logits</code> and <code>span end logits</code>).</p> <p data-svelte-h="svelte-hmtw9k">This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel">PreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)</p> <p data-svelte-h="svelte-hswkmf">This model is also a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XLMRobertaForQuestionAnswering.forward"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>forward</span></h4> <a id="transformers.XLMRobertaForQuestionAnswering.forward" class="header-link invisible 
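As a quick illustration of the `config` parameter above, here is a minimal sketch (the checkpoint name is only an example) contrasting configuration-only initialization, which gives randomly initialized weights, with loading pretrained weights via `from_pretrained()`:

```python
>>> from transformers import XLMRobertaConfig, XLMRobertaForQuestionAnswering

>>> # Initializing from a configuration builds the architecture with random weights
>>> config = XLMRobertaConfig()
>>> model = XLMRobertaForQuestionAnswering(config)

>>> # Loading pretrained weights instead (example checkpoint)
>>> model = XLMRobertaForQuestionAnswering.from_pretrained("xlm-roberta-base")
```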
#### forward

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_xlm_roberta.py#L1482)

`forward(input_ids: typing.Optional[torch.LongTensor] = None, attention_mask: typing.Optional[torch.FloatTensor] = None, token_type_ids: typing.Optional[torch.LongTensor] = None, position_ids: typing.Optional[torch.LongTensor] = None, head_mask: typing.Optional[torch.FloatTensor] = None, inputs_embeds: typing.Optional[torch.FloatTensor] = None, start_positions: typing.Optional[torch.LongTensor] = None, end_positions: typing.Optional[torch.LongTensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None)` → [transformers.modeling_outputs.QuestionAnsweringModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.QuestionAnsweringModelOutput) or `tuple(torch.FloatTensor)`

Parameters:

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See `PreTrainedTokenizer.encode()` and `PreTrainedTokenizer.__call__()` for details. [What are input IDs?](../glossary#input-ids)
- **attention_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask)
- **token_type_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: 0 corresponds to a *sentence A* token, 1 corresponds to a *sentence B* token. [What are token type IDs?](../glossary#token-type-ids)
- **position_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Indices of the position of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids)
- **head_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: 1 indicates the head is **not masked**, 0 indicates the head is **masked**.
- **inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attention tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.
- **start_positions** (`torch.LongTensor` of shape `(batch_size,)`, *optional*) — Labels for the position (index) of the start of the labelled span, used for computing the token classification loss. Positions are clamped to the length of the sequence (`sequence_length`); positions outside of the sequence are not taken into account for computing the loss.
- **end_positions** (`torch.LongTensor` of shape `(batch_size,)`, *optional*) — Labels for the position (index) of the end of the labelled span, used for computing the token classification loss. Positions are clamped to the length of the sequence (`sequence_length`); positions outside of the sequence are not taken into account for computing the loss.

Returns: [transformers.modeling_outputs.QuestionAnsweringModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.QuestionAnsweringModelOutput) or `tuple(torch.FloatTensor)`

A [QuestionAnsweringModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.QuestionAnsweringModelOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([XLMRobertaConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig)) and inputs:

- **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) — Total span extraction loss, the sum of a cross-entropy loss for the start and end positions.
- **start_logits** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`) — Span-start scores (before SoftMax).
- **end_logits** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`) — Span-end scores (before SoftMax).
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The [XLMRobertaForQuestionAnswering](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaForQuestionAnswering) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:

```python
>>> from transformers import AutoTokenizer, XLMRobertaForQuestionAnswering
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("deepset/roberta-base-squad2")
>>> model = XLMRobertaForQuestionAnswering.from_pretrained("deepset/roberta-base-squad2")

>>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"

>>> inputs = tokenizer(question, text, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> answer_start_index = outputs.start_logits.argmax()
>>> answer_end_index = outputs.end_logits.argmax()

>>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
>>> tokenizer.decode(predict_answer_tokens, skip_special_tokens=True)
' puppet'

>>> # target is "nice puppet"
>>> target_start_index = torch.tensor([14])
>>> target_end_index = torch.tensor([15])

>>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
>>> loss = outputs.loss
>>> round(loss.item(), 2)
0.86
```

## TFXLMRobertaModel
class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">TFXLMRobertaModel</span></span></h3> <a id="transformers.TFXLMRobertaModel" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.TFXLMRobertaModel"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py#L875" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">*args<span class="opacity-60"></span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMRobertaModel.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMRobertaModel.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" 
fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig">XLMRobertaConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-iz46qb">The bare XLM RoBERTa Model transformer outputting raw hidden-states without any specific head on top.</p> <p data-svelte-h="svelte-1i0vt4o">This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel">TFPreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)</p> <p data-svelte-h="svelte-1ivrf8m">This model is also a <a href="https://www.tensorflow.org/api_docs/python/tf/keras/Model" rel="nofollow">tf.keras.Model</a> subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-1ajbfxg">TensorFlow models and layers in <code>transformers</code> accept two formats as input:</p> <ul data-svelte-h="svelte-qm1t26"><li>having all inputs as keyword arguments (like PyTorch models), or</li> <li>having all inputs as a list, tuple or dict in the first positional argument.</li></ul> <p data-svelte-h="svelte-1v9qsc5">The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like <code>model.fit()</code> things should “just work” for you - just pass your inputs and labels in any format that <code>model.fit()</code> supports! 
TensorFlow models and layers in `transformers` accept two formats as input:

- having all inputs as keyword arguments (like PyTorch models), or
- having all inputs as a list, tuple or dict in the first positional argument.

The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just pass your inputs and labels in any format that `model.fit()` supports!

If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

- a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
- a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
- a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`

Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/), you don't need to worry about any of this, as you can just pass inputs like you would to any other Python function!
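To make the three formats above concrete, here is a minimal sketch; it assumes `model` is a `TFXLMRobertaModel` and `tokenizer` a matching tokenizer, e.g. loaded with `from_pretrained()` as in the usage sketch above:

```python
>>> enc = tokenizer("Hello, my dog is cute", return_tensors="tf")

>>> # 1. a single tensor containing only input_ids
>>> outputs = model(enc["input_ids"])

>>> # 2. a list of tensors, in the order given in the docstring
>>> outputs = model([enc["input_ids"], enc["attention_mask"]])

>>> # 3. a dictionary mapping input names to tensors
>>> outputs = model({"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"]})
```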
#### call

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py#L880)

`call(input_ids: TFModelInputType | None = None, attention_mask: np.ndarray | tf.Tensor | None = None, token_type_ids: np.ndarray | tf.Tensor | None = None, position_ids: np.ndarray | tf.Tensor | None = None, head_mask: np.ndarray | tf.Tensor | None = None, inputs_embeds: np.ndarray | tf.Tensor | None = None, encoder_hidden_states: np.ndarray | tf.Tensor | None = None, encoder_attention_mask: np.ndarray | tf.Tensor | None = None, past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, training: Optional[bool] = False)` → [transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions) or `tuple(tf.Tensor)`

Parameters:

- **input_ids** (`Numpy array` or `tf.Tensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See `PreTrainedTokenizer.__call__()` and `PreTrainedTokenizer.encode()` for details.
- **attention_mask** (`Numpy array` or `tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.
  [What are attention masks?](../glossary#attention-mask)
- **token_type_ids** (`Numpy array` or `tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:
  - 0 corresponds to a *sentence A* token,
  - 1 corresponds to a *sentence B* token.
  [What are token type IDs?](../glossary#token-type-ids)
- **position_ids** (`Numpy array` or `tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids)
- **head_mask** (`Numpy array` or `tf.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`:
  - 1 indicates the head is **not masked**,
  - 0 indicates the head is **masked**.
- **inputs_embeds** (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the config will be used instead.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the config will be used instead.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple. This argument can be used in eager mode; in graph mode the value will always be set to `True`.
- **training** (`bool`, *optional*, defaults to `False`) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).
- **encoder_hidden_states** (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder.
- **encoder_attention_mask** (`tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.
- **past_key_values** (`Tuple[Tuple[tf.Tensor]]` of length `config.n_layers`) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)`.
- **use_cache** (`bool`, *optional*, defaults to `True`) — If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`). Set to `False` during training and `True` during generation.

Returns

[transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions) or `tuple(tf.Tensor)`

A [transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions) or a tuple of `tf.Tensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([XLMRobertaConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig)) and inputs.

- **last_hidden_state** (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`) — Sequence of hidden-states at the output of the last layer of the model.
- **pooler_output** (`tf.Tensor` of shape `(batch_size, hidden_size)`) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. This output is usually *not* a good summary of the semantic content of the input; you're often better off averaging or pooling the sequence of hidden-states for the whole input sequence.
- **past_key_values** (`List[tf.Tensor]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`) — List of `tf.Tensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size, num_heads, sequence_length, embed_size_per_head)`. Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
- **hidden_states** (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- **attentions** (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
- **cross_attentions** (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the decoder's cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.

The [TFXLMRobertaModel](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.TFXLMRobertaModel) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:

```python
>>> from transformers import AutoTokenizer, TFXLMRobertaModel
>>> import tensorflow as tf

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
>>> model = TFXLMRobertaModel.from_pretrained("xlm-roberta-base")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> outputs = model(inputs)

>>> last_hidden_states = outputs.last_hidden_state
```
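Since `pooler_output` is, as noted above, usually not the best summary of the input, a common alternative is to mean-pool the last hidden states with the attention mask. The following is a minimal sketch continuing the example above; the pooling recipe itself is an illustrative assumption, not part of the model's API:

```python
>>> import tensorflow as tf

>>> # mean-pool the last hidden states, ignoring padding positions
>>> mask = tf.cast(inputs["attention_mask"], last_hidden_states.dtype)[:, :, tf.newaxis]
>>> summed = tf.reduce_sum(last_hidden_states * mask, axis=1)
>>> counts = tf.reduce_sum(mask, axis=1)
>>> sentence_embedding = summed / counts  # shape (batch_size, hidden_size)
```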
## TFXLMRobertaForCausalLM

### class transformers.TFXLMRobertaForCausalLM

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py#L1081)

( *args, **kwargs )

Parameters

- **config** ([XLMRobertaConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

XLM-RoBERTa Model with a `language modeling` head on top for CLM fine-tuning.

This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

TensorFlow models and layers in `transformers` accept two formats as input:

- having all inputs as keyword arguments (like PyTorch models), or
- having all inputs as a list, tuple or dict in the first positional argument.

The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

- a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
- a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
- a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`

Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) you don't need to worry about any of this, as you can just pass inputs like you would to any other Python function!
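As a rough usage sketch for the causal-LM head: the checkpoint name below is only an example, and setting `is_decoder=True` on the config is an assumption of this sketch so the model behaves as a standalone causal decoder rather than an encoder:

```python
>>> from transformers import AutoConfig, AutoTokenizer, TFXLMRobertaForCausalLM

>>> config = AutoConfig.from_pretrained("xlm-roberta-base")
>>> config.is_decoder = True  # assumed: treat the model as a causal decoder
>>> model = TFXLMRobertaForCausalLM.from_pretrained("xlm-roberta-base", config=config)

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> outputs = model(inputs)
>>> logits = outputs.logits  # next-token scores, shape (batch_size, sequence_length, vocab_size)
```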
#### call

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py#L1114)

( input_ids: TFModelInputType | None = None, attention_mask: np.ndarray | tf.Tensor | None = None, token_type_ids: np.ndarray | tf.Tensor | None = None, position_ids: np.ndarray | tf.Tensor | None = None, head_mask: np.ndarray | tf.Tensor | None = None, inputs_embeds: np.ndarray | tf.Tensor | None = None, encoder_hidden_states: np.ndarray | tf.Tensor | None = None, encoder_attention_mask: np.ndarray | tf.Tensor | None = None, past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, labels: np.ndarray | tf.Tensor | None = None, training: Optional[bool] = False ) → [transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions) or `tuple(tf.Tensor)`

Parameters

- **input_ids** (`Numpy array` or `tf.Tensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See `PreTrainedTokenizer.__call__()` and [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) for details. [What are input IDs?](../glossary#input-ids)
- **attention_mask** (`Numpy array` or `tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.
  [What are attention masks?](../glossary#attention-mask)
- **token_type_ids** (`Numpy array` or `tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:
  - 0 corresponds to a *sentence A* token,
  - 1 corresponds to a *sentence B* token.
  [What are token type IDs?](../glossary#token-type-ids)
- **position_ids** (`Numpy array` or `tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids)
- **head_mask** (`Numpy array` or `tf.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`:
  - 1 indicates the head is **not masked**,
  - 0 indicates the head is **masked**.
- **inputs_embeds** (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the config will be used instead.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the config will be used instead.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple. This argument can be used in eager mode; in graph mode the value will always be set to `True`.
- **training** (`bool`, *optional*, defaults to `False`) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>encoder_hidden_states</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>, <em>optional</em>) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMRobertaForCausalLM.call.encoder_attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMRobertaForCausalLM.call.encoder_attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>encoder_attention_mask</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMRobertaForCausalLM.call.past_key_values" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMRobertaForCausalLM.call.past_key_values"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>past_key_values</strong> (<code>Tuple[Tuple[tf.Tensor]]</code> of length <code>config.n_layers</code>) — contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If <code>past_key_values</code> are used, the user can optionally input only the last <code>decoder_input_ids</code> (those that don’t have their past key value states given to this model) of shape <code>(batch_size, 1)</code> instead of all <code>decoder_input_ids</code> of shape <code>(batch_size, sequence_length)</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMRobertaForCausalLM.call.use_cache" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMRobertaForCausalLM.call.use_cache"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>use_cache</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — If set to <code>True</code>, <code>past_key_values</code> key value states are returned and can be used to speed up decoding (see <code>past_key_values</code>). 
Set to <code>False</code> during training, <code>True</code> during generation</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMRobertaForCausalLM.call.labels" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMRobertaForCausalLM.call.labels"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>labels</strong> (<code>tf.Tensor</code> or <code>np.ndarray</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Labels for computing the cross entropy classification loss. Indices should be in <code>[0, ..., config.vocab_size - 1]</code>.</span></span> </li></ul> <div id="transformers.TFXLMRobertaForCausalLM.call.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions">transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions</a> or <code>tuple(tf.Tensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions">transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions</a> or a tuple of <code>tf.Tensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig">XLMRobertaConfig</a>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<code>tf.Tensor</code> of shape <code>(n,)</code>, <em>optional</em>, where n is the number of non-masked labels, returned when <code>labels</code> is provided) — Language modeling loss (for next-token prediction).</p> </li> <li> <p><strong>logits</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, sequence_length, config.vocab_size)</code>) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>tf.Tensor</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of 
each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>tf.Tensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> <li> <p><strong>cross_attentions</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>tf.Tensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.</p> </li> <li> <p><strong>past_key_values</strong> (<code>List[tf.Tensor]</code>, <em>optional</em>, returned when <code>use_cache=True</code> is passed or when <code>config.use_cache=True</code>) — List of <code>tf.Tensor</code> of length <code>config.n_layers</code>, with each tensor of shape <code>(2, batch_size, num_heads, sequence_length, embed_size_per_head)</code>).</p> <p>Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see <code>past_key_values</code> input) to speed up sequential decoding.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-1t5t9ce">The <a href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.TFXLMRobertaForCausalLM">TFXLMRobertaForCausalLM</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.TFXLMRobertaForCausalLM.call.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMRobertaForCausalLM.call.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div 
class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, TFXLMRobertaForCausalLM <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> tensorflow <span class="hljs-keyword">as</span> tf <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"xlm-roberta-base"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = TFXLMRobertaForCausalLM.from_pretrained(<span class="hljs-string">"xlm-roberta-base"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(<span class="hljs-string">"Hello, my dog is cute"</span>, return_tensors=<span class="hljs-string">"tf"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model(inputs) <span class="hljs-meta">&gt;&gt;&gt; </span>logits = outputs.logits</pre></div></div></div></div> <h2 class="relative group"><a id="transformers.TFXLMRobertaForMaskedLM" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMRobertaForMaskedLM"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-9tgrk6">TFXLMRobertaForMaskedLM</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.TFXLMRobertaForMaskedLM"><h3 class="!m-0"><span class="flex-1 break-all 
## TFXLMRobertaForMaskedLM

### class transformers.TFXLMRobertaForMaskedLM

`( *args, **kwargs )`

Parameters

- **config** ([XLMRobertaConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

XLM-RoBERTa Model with a `language modeling` head on top.

This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.).

This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

TensorFlow models and layers in `transformers` accept two formats as input:

- having all inputs as keyword arguments (like PyTorch models), or
- having all inputs as a list, tuple or dict in the first positional argument.

The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first positional argument (illustrated in the sketch after this note):

- a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
- a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
- a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`

Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) you don't need to worry about any of this, as you can just pass inputs like you would to any other Python function!
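The three input formats listed above can be illustrated with a short, self-contained sketch (not part of the original documentation); the tensor names follow the `call()` docstring below.

```python
# Illustrative sketch of the three ways to pass inputs in the first positional argument.
from transformers import AutoTokenizer, TFXLMRobertaForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = TFXLMRobertaForMaskedLM.from_pretrained("xlm-roberta-base")
enc = tokenizer("The capital of France is <mask>.", return_tensors="tf")

# 1) a single tensor with input_ids only
out1 = model(enc["input_ids"])
# 2) a list of tensors, in the order given in the docstring
out2 = model([enc["input_ids"], enc["attention_mask"]])
# 3) a dictionary mapping input names to tensors
out3 = model({"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"]})
```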
#### call

`( input_ids: TFModelInputType | None = None, attention_mask: np.ndarray | tf.Tensor | None = None, token_type_ids: np.ndarray | tf.Tensor | None = None, position_ids: np.ndarray | tf.Tensor | None = None, head_mask: np.ndarray | tf.Tensor | None = None, inputs_embeds: np.ndarray | tf.Tensor | None = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, labels: np.ndarray | tf.Tensor | None = None, training: Optional[bool] = False )` → [transformers.modeling_tf_outputs.TFMaskedLMOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFMaskedLMOutput) or `tuple(tf.Tensor)`

Parameters

- **input_ids** (`Numpy array` or `tf.Tensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.__call__()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) and [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) for details. [What are input IDs?](../glossary#input-ids)
- **attention_mask** (`Numpy array` or `tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.
  [What are attention masks?](../glossary#attention-mask)
- **token_type_ids** (`Numpy array` or `tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:
  - 0 corresponds to a *sentence A* token,
  - 1 corresponds to a *sentence B* token.
  [What are token type IDs?](../glossary#token-type-ids)
- **position_ids** (`Numpy array` or `tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids)
- **head_mask** (`Numpy array` or `tf.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`:
  - 1 indicates the head is **not masked**,
  - 0 indicates the head is **masked**.
- **inputs_embeds** (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. This argument can only be used in eager mode; in graph mode the value in the config will be used instead.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. This argument can only be used in eager mode; in graph mode the value in the config will be used instead.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple. This argument can be used in eager mode; in graph mode the value will always be set to `True`.
- **training** (`bool`, *optional*, defaults to `False`) — Whether or not to use the model in training mode (some modules, like dropout, behave differently during training and evaluation).
- **labels** (`tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ..., config.vocab_size]` (see the `input_ids` docstring). Tokens with indices set to `-100` are ignored (masked); the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.

Returns

[transformers.modeling_tf_outputs.TFMaskedLMOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFMaskedLMOutput) or `tuple(tf.Tensor)`

A [transformers.modeling_tf_outputs.TFMaskedLMOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFMaskedLMOutput) or a tuple of `tf.Tensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([XLMRobertaConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig)) and inputs.

- **loss** (`tf.Tensor` of shape `(n,)`, *optional*, where n is the number of non-masked labels, returned when `labels` is provided) — Masked language modeling (MLM) loss.
- **logits** (`tf.Tensor` of shape `(batch_size, sequence_length, config.vocab_size)`) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
- **hidden_states** (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden states of the model at the output of each layer plus the initial embedding outputs.
- **attentions** (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The [TFXLMRobertaForMaskedLM](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.TFXLMRobertaForMaskedLM) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
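Since `output_hidden_states` and `output_attentions` only take effect in eager mode, a quick way to see what they add to the returned `TFMaskedLMOutput` is to request them explicitly. A minimal sketch, not part of the original documentation:

```python
# Sketch: requesting hidden states and attentions in eager mode and inspecting the output.
from transformers import AutoTokenizer, TFXLMRobertaForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = TFXLMRobertaForMaskedLM.from_pretrained("xlm-roberta-base")

inputs = tokenizer("The capital of France is <mask>.", return_tensors="tf")
outputs = model(**inputs, output_hidden_states=True, output_attentions=True)

print(len(outputs.hidden_states))   # embeddings + one entry per layer (13 for a 12-layer model)
print(outputs.attentions[0].shape)  # (batch_size, num_heads, sequence_length, sequence_length)
```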
Example:

```python
>>> from transformers import AutoTokenizer, TFXLMRobertaForMaskedLM
>>> import tensorflow as tf

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
>>> model = TFXLMRobertaForMaskedLM.from_pretrained("xlm-roberta-base")

>>> inputs = tokenizer("The capital of France is <mask>.", return_tensors="tf")
>>> logits = model(**inputs).logits

>>> # retrieve index of <mask>
>>> mask_token_index = tf.where((inputs.input_ids == tokenizer.mask_token_id)[0])
>>> selected_logits = tf.gather_nd(logits[0], indices=mask_token_index)

>>> predicted_token_id = tf.math.argmax(selected_logits, axis=-1)
>>> tokenizer.decode(predicted_token_id)
' Paris'
```

```python
>>> labels = tokenizer("The capital of France is Paris.", return_tensors="tf")["input_ids"]
>>> # mask labels of non-<mask> tokens
>>> labels = tf.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)

>>> outputs = model(**inputs, labels=labels)
>>> round(float(outputs.loss), 2)
0.1
```
## TFXLMRobertaForSequenceClassification

### class transformers.TFXLMRobertaForSequenceClassification

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py#L1240)

`( *args, **kwargs )`

Parameters:

- **config** ([XLMRobertaConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

XLM-RoBERTa Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output), e.g. for GLUE tasks.

This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

TensorFlow models and layers in `transformers` accept two formats as input:

- having all inputs as keyword arguments (like PyTorch models), or
- having all inputs as a list, tuple or dict in the first positional argument.

The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first positional argument (see the sketch below):

- a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
- a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
- a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`

Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) you don't need to worry about any of this, as you can just pass inputs like you would to any other Python function!
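To make the three input formats concrete, here is a minimal, hedged sketch (the checkpoint is reused from the example further below purely for illustration; any of the three calls returns the same kind of output object):

```python
from transformers import AutoTokenizer, TFXLMRobertaForSequenceClassification

# Checkpoint chosen only for illustration; it is the same one used in the example below.
tokenizer = AutoTokenizer.from_pretrained("cardiffnlp/twitter-roberta-base-emotion")
model = TFXLMRobertaForSequenceClassification.from_pretrained("cardiffnlp/twitter-roberta-base-emotion")

encoded = tokenizer("Hello, my dog is cute", return_tensors="tf")

# 1. a single tensor with input_ids only
out_1 = model(encoded["input_ids"])

# 2. a list of tensors, in the order given in the docstring
out_2 = model([encoded["input_ids"], encoded["attention_mask"]])

# 3. a dictionary mapping input names to tensors
out_3 = model({"input_ids": encoded["input_ids"], "attention_mask": encoded["attention_mask"]})

print(out_1.logits.shape)  # (1, num_labels) in every case
```

The keyword-argument form used in the examples on this page, `model(**encoded)`, remains the most common way to call the model directly.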
#### call

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py#L1251)

`( input_ids: TFModelInputType | None = None, attention_mask: np.ndarray | tf.Tensor | None = None, token_type_ids: np.ndarray | tf.Tensor | None = None, position_ids: np.ndarray | tf.Tensor | None = None, head_mask: np.ndarray | tf.Tensor | None = None, inputs_embeds: np.ndarray | tf.Tensor | None = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, labels: np.ndarray | tf.Tensor | None = None, training: Optional[bool] = False )` → [transformers.modeling_tf_outputs.TFSequenceClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFSequenceClassifierOutput) or `tuple(tf.Tensor)`

Parameters:

- **input_ids** (`Numpy array` or `tf.Tensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See `PreTrainedTokenizer.__call__()` and [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) for details. [What are input IDs?](../glossary#input-ids)
- **attention_mask** (`Numpy array` or `tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask)
- **token_type_ids** (`Numpy array` or `tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: 0 corresponds to a *sentence A* token, 1 corresponds to a *sentence B* token. [What are token type IDs?](../glossary#token-type-ids)
- **position_ids** (`Numpy array` or `tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids)
- **head_mask** (`Numpy array` or `tf.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: 1 indicates the head is **not masked**, 0 indicates the head is **masked**.
- **inputs_embeds** (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the config will be used instead.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the config will be used instead.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple. This argument can be used in eager mode; in graph mode the value will always be set to `True`.
- **training** (`bool`, *optional*, defaults to `False`) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).
- **labels** (`tf.Tensor` of shape `(batch_size,)`, *optional*) — Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (mean-square loss); if `config.num_labels > 1` a classification loss is computed (cross-entropy).

**Returns:** [transformers.modeling_tf_outputs.TFSequenceClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFSequenceClassifierOutput) or `tuple(tf.Tensor)`

A [transformers.modeling_tf_outputs.TFSequenceClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFSequenceClassifierOutput) or a tuple of `tf.Tensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([XLMRobertaConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig)) and inputs.

- **loss** (`tf.Tensor` of shape `(batch_size,)`, *optional*, returned when `labels` is provided) — Classification (or regression if `config.num_labels==1`) loss.
- **logits** (`tf.Tensor` of shape `(batch_size, config.num_labels)`) — Classification (or regression if `config.num_labels==1`) scores (before SoftMax).
- **hidden_states** (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- **attentions** (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
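As a hedged illustration of the regression branch described for `labels` above (the checkpoint and target value below are placeholders, and the single-output head of a base checkpoint is newly initialized rather than fine-tuned):

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFXLMRobertaForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
# num_labels=1 gives the head a single output, so the loss becomes mean-squared error.
model = TFXLMRobertaForSequenceClassification.from_pretrained("xlm-roberta-base", num_labels=1)

inputs = tokenizer("This film was surprisingly good.", return_tensors="tf")
labels = tf.constant([0.8])  # one float target per example in the batch

outputs = model(**inputs, labels=labels)
print(outputs.loss)          # mean-squared-error loss
print(outputs.logits.shape)  # (1, 1)
```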
The [TFXLMRobertaForSequenceClassification](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.TFXLMRobertaForSequenceClassification) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

```python
>>> from transformers import AutoTokenizer, TFXLMRobertaForSequenceClassification
>>> import tensorflow as tf

>>> tokenizer = AutoTokenizer.from_pretrained("cardiffnlp/twitter-roberta-base-emotion")
>>> model = TFXLMRobertaForSequenceClassification.from_pretrained("cardiffnlp/twitter-roberta-base-emotion")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")

>>> logits = model(**inputs).logits

>>> predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
>>> model.config.id2label[predicted_class_id]
'optimism'
```

```python
>>> # To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
>>> num_labels = len(model.config.id2label)
>>> model = TFXLMRobertaForSequenceClassification.from_pretrained("cardiffnlp/twitter-roberta-base-emotion", num_labels=num_labels)

>>> labels = tf.constant(1)
>>> loss = model(**inputs, labels=labels).loss
>>> round(float(loss), 2)
0.08
```

## TFXLMRobertaForMultipleChoice

### class transformers.TFXLMRobertaForMultipleChoice
[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py#L1317)

`( *args, **kwargs )`

Parameters:

- **config** ([XLMRobertaConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

XLM-RoBERTa Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax), e.g. for RocStories/SWAG tasks.

This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

TensorFlow models and layers in `transformers` accept two formats as input:

- having all inputs as keyword arguments (like PyTorch models), or
- having all inputs as a list, tuple or dict in the first positional argument.

The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just pass your inputs and labels in any format that `model.fit()` supports!
If, however, you want to use the second format outside of Keras methods like <code>fit()</code> and <code>predict()</code>, such as when creating your own layers or models with the Keras <code>Functional</code> API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:</p> <ul data-svelte-h="svelte-15scerc"><li>a single Tensor with <code>input_ids</code> only and nothing else: <code>model(input_ids)</code></li> <li>a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: <code>model([input_ids, attention_mask])</code> or <code>model([input_ids, attention_mask, token_type_ids])</code></li> <li>a dictionary with one or several input Tensors associated to the input names given in the docstring: <code>model({"input_ids": input_ids, "token_type_ids": token_type_ids})</code></li></ul> <p data-svelte-h="svelte-1an3odd">Note that when creating models and layers with <a href="https://keras.io/guides/making_new_layers_and_models_via_subclassing/" rel="nofollow">subclassing</a> then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function!</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.TFXLMRobertaForMultipleChoice.call"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>call</span></h4> <a id="transformers.TFXLMRobertaForMultipleChoice.call" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.TFXLMRobertaForMultipleChoice.call"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 
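For concreteness, here is a minimal sketch of the three call styles listed above, used outside of `fit()`/`predict()`. It is not part of the original reference: the `xlm-roberta-base` checkpoint and the toy sentences are only illustrative, and the multiple-choice head of this model is randomly initialised.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFXLMRobertaForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = TFXLMRobertaForMultipleChoice.from_pretrained("xlm-roberta-base")

# Multiple-choice inputs are expected with shape (batch_size, num_choices, sequence_length):
# encode one (prompt, choice) pair per choice, then add a leading batch dimension.
encoding = tokenizer(["The weather is"] * 2, ["sunny.", "a fork."], return_tensors="tf", padding=True)
inputs = {k: tf.expand_dims(v, 0) for k, v in encoding.items()}

out_kwargs = model(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"])  # keyword arguments
out_list = model([inputs["input_ids"], inputs["attention_mask"]])  # list, in docstring order
out_dict = model(inputs)  # dict keyed by input names

# All three calls should produce logits of shape (batch_size, num_choices) = (1, 2).
```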
#### call

`( input_ids: TFModelInputType | None = None, attention_mask: np.ndarray | tf.Tensor | None = None, token_type_ids: np.ndarray | tf.Tensor | None = None, position_ids: np.ndarray | tf.Tensor | None = None, head_mask: np.ndarray | tf.Tensor | None = None, inputs_embeds: np.ndarray | tf.Tensor | None = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, labels: np.ndarray | tf.Tensor | None = None, training: Optional[bool] = False )` → [transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput) or `tuple(tf.Tensor)` [[source]](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py#L1331)

Parameters:

- **input_ids** (`NumPy array` or `tf.Tensor` of shape `(batch_size, num_choices, sequence_length)`) -- Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See `PreTrainedTokenizer.__call__()` and [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) for details.

  [What are input IDs?](../glossary#input-ids)

- **attention_mask** (`NumPy array` or `tf.Tensor` of shape `(batch_size, num_choices, sequence_length)`, *optional*) -- Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.

  [What are attention masks?](../glossary#attention-mask)

- **token_type_ids** (`NumPy array` or `tf.Tensor` of shape `(batch_size, num_choices, sequence_length)`, *optional*) -- Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:
  - 0 corresponds to a *sentence A* token,
  - 1 corresponds to a *sentence B* token.

  [What are token type IDs?](../glossary#token-type-ids)

- **position_ids** (`NumPy array` or `tf.Tensor` of shape `(batch_size, num_choices, sequence_length)`, *optional*) -- Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`.

  [What are position IDs?](../glossary#position-ids)

- **head_mask** (`NumPy array` or `tf.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) -- Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`:
  - 1 indicates the head is **not masked**,
  - 0 indicates the head is **masked**.

- **inputs_embeds** (`tf.Tensor` of shape `(batch_size, num_choices, sequence_length, hidden_size)`, *optional*) -- Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.

- **output_attentions** (`bool`, *optional*) -- Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. This argument can only be used in eager mode; in graph mode the value in the config will be used instead.

- **output_hidden_states** (`bool`, *optional*) -- Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. This argument can only be used in eager mode; in graph mode the value in the config will be used instead.

- **return_dict** (`bool`, *optional*) -- Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple. This argument can be used in eager mode; in graph mode the value will always be set to `True`.

- **training** (`bool`, *optional*, defaults to `False`) -- Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).

- **labels** (`tf.Tensor` of shape `(batch_size,)`, *optional*) -- Labels for computing the multiple choice classification loss. Indices should be in `[0, ..., num_choices - 1]`, where `num_choices` is the size of the second dimension of the input tensors. (See `input_ids` above.)

Returns: [transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput) or `tuple(tf.Tensor)`

A [transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput) or a tuple of `tf.Tensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([XLMRobertaConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig)) and inputs.

- **loss** (`tf.Tensor` of shape `(batch_size,)`, *optional*, returned when `labels` is provided) -- Classification loss.
- **logits** (`tf.Tensor` of shape `(batch_size, num_choices)`) -- Classification scores (before SoftMax). *num_choices* is the second dimension of the input tensors (see `input_ids` above).
- **hidden_states** (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) -- Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- **attentions** (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) -- Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The [TFXLMRobertaForMultipleChoice](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.TFXLMRobertaForMultipleChoice) forward method overrides the `__call__` special method.

> Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:

```python
>>> from transformers import AutoTokenizer, TFXLMRobertaForMultipleChoice
>>> import tensorflow as tf

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
>>> model = TFXLMRobertaForMultipleChoice.from_pretrained("xlm-roberta-base")

>>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
>>> choice0 = "It is eaten with a fork and a knife."
>>> choice1 = "It is eaten while held in the hand."

>>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="tf", padding=True)
>>> inputs = {k: tf.expand_dims(v, 0) for k, v in encoding.items()}
>>> outputs = model(inputs)  # batch size is 1

>>> # the linear classifier still needs to be trained
>>> logits = outputs.logits
```
## TFXLMRobertaForTokenClassification

### class transformers.TFXLMRobertaForTokenClassification

`( *args, **kwargs )` [[source]](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py#L1410)

Parameters:

- **config** ([XLMRobertaConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig)) -- Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

XLM-RoBERTa Model with a token classification head on top (a linear layer on top of the hidden-states output), e.g. for Named-Entity-Recognition (NER) tasks.

This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.).

This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

> TensorFlow models and layers in `transformers` accept two formats as input:
>
> - having all inputs as keyword arguments (like PyTorch models), or
> - having all inputs as a list, tuple or dict in the first positional argument.
>
> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you -- just pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with the Keras `Functional` API, there are three possibilities you can use to gather all the input tensors in the first positional argument:
>
> - a single tensor with `input_ids` only and nothing else: `model(input_ids)`
> - a list of varying length with one or several input tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
> - a dictionary with one or several input tensors associated with the input names given in the docstring: `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
>
> Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) you don't need to worry about any of this, as you can just pass inputs like you would to any other Python function!
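For illustration, here is a minimal usage sketch (not part of the original reference; the `xlm-roberta-base` token classification head loaded below is randomly initialised, so the predicted labels are placeholders until the model is fine-tuned). The model emits one row of logits per input token, and per-token labels follow from an argmax over the label dimension.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFXLMRobertaForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = TFXLMRobertaForTokenClassification.from_pretrained("xlm-roberta-base")

inputs = tokenizer("HuggingFace is based in New York City.", return_tensors="tf")
outputs = model(**inputs)

# logits have shape (batch_size, sequence_length, num_labels): one score per token and label
predicted_ids = tf.argmax(outputs.logits, axis=-1)
predicted_labels = [model.config.id2label[int(i)] for i in predicted_ids[0]]
```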
#### call

`( input_ids: TFModelInputType | None = None, attention_mask: np.ndarray | tf.Tensor | None = None, token_type_ids: np.ndarray | tf.Tensor | None = None, position_ids: np.ndarray | tf.Tensor | None = None, head_mask: np.ndarray | tf.Tensor | None = None, inputs_embeds: np.ndarray | tf.Tensor | None = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, labels: np.ndarray | tf.Tensor | None = None, training: Optional[bool] = False )` → [transformers.modeling_tf_outputs.TFTokenClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFTokenClassifierOutput) or `tuple(tf.Tensor)` [[source]](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py#L1428)

Parameters:

- **input_ids** (`NumPy array` or `tf.Tensor` of shape `(batch_size, sequence_length)`) -- Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See `PreTrainedTokenizer.__call__()` and [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) for details.

  [What are input IDs?](../glossary#input-ids)

- **attention_mask** (`NumPy array` or `tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*) -- Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.

  [What are attention masks?](../glossary#attention-mask)

- **token_type_ids** (`NumPy array` or `tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*) -- Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:
  - 0 corresponds to a *sentence A* token,
  - 1 corresponds to a *sentence B* token.

  [What are token type IDs?](../glossary#token-type-ids)

- **position_ids** (`NumPy array` or `tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*) -- Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`.

  [What are position IDs?](../glossary#position-ids)

- **head_mask** (`NumPy array` or `tf.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) -- Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`:
  - 1 indicates the head is **not masked**,
  - 0 indicates the head is **masked**.

- **inputs_embeds** (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) -- Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.

- **output_attentions** (`bool`, *optional*) -- Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. This argument can only be used in eager mode; in graph mode the value in the config will be used instead.

- **output_hidden_states** (`bool`, *optional*) -- Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
This argument can be used only in eager mode, in graph mode the value in the config will be used instead.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMRobertaForTokenClassification.call.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMRobertaForTokenClassification.call.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMRobertaForTokenClassification.call.training" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMRobertaForTokenClassification.call.training"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>training</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLMRobertaForTokenClassification.call.labels" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMRobertaForTokenClassification.call.labels"><span><svg class="text-smd" 
xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>labels</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Labels for computing the token classification loss. Indices should be in <code>[0, ..., config.num_labels - 1]</code>.</span></span> </li></ul> <div id="transformers.TFXLMRobertaForTokenClassification.call.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFTokenClassifierOutput">transformers.modeling_tf_outputs.TFTokenClassifierOutput</a> or <code>tuple(tf.Tensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFTokenClassifierOutput">transformers.modeling_tf_outputs.TFTokenClassifierOutput</a> or a tuple of <code>tf.Tensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig">XLMRobertaConfig</a>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<code>tf.Tensor</code> of shape <code>(n,)</code>, <em>optional</em>, where n is the number of unmasked labels, returned when <code>labels</code> is provided) — Classification loss.</p> </li> <li> <p><strong>logits</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, sequence_length, config.num_labels)</code>) — Classification scores (before SoftMax).</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>tf.Tensor</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>tf.Tensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-5jukia">The <a 
href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.TFXLMRobertaForTokenClassification">TFXLMRobertaForTokenClassification</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.TFXLMRobertaForTokenClassification.call.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMRobertaForTokenClassification.call.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, TFXLMRobertaForTokenClassification <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> tensorflow <span class="hljs-keyword">as</span> tf <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span 
class="hljs-string">"ydshieh/roberta-large-ner-english"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = TFXLMRobertaForTokenClassification.from_pretrained(<span class="hljs-string">"ydshieh/roberta-large-ner-english"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer( <span class="hljs-meta">... </span> <span class="hljs-string">"HuggingFace is a company based in Paris and New York"</span>, add_special_tokens=<span class="hljs-literal">False</span>, return_tensors=<span class="hljs-string">"tf"</span> <span class="hljs-meta">... </span>) <span class="hljs-meta">&gt;&gt;&gt; </span>logits = model(**inputs).logits <span class="hljs-meta">&gt;&gt;&gt; </span>predicted_token_class_ids = tf.math.argmax(logits, axis=-<span class="hljs-number">1</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># Note that tokens are classified rather then input words which means that</span> <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># there might be more predicted token classes than words.</span> <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># Multiple token classes might account for the same word</span> <span class="hljs-meta">&gt;&gt;&gt; </span>predicted_tokens_classes = [model.config.id2label[t] <span class="hljs-keyword">for</span> t <span class="hljs-keyword">in</span> predicted_token_class_ids[<span class="hljs-number">0</span>].numpy().tolist()] <span class="hljs-meta">&gt;&gt;&gt; </span>predicted_tokens_classes [<span class="hljs-string">'O'</span>, <span class="hljs-string">'ORG'</span>, <span class="hljs-string">'ORG'</span>, <span class="hljs-string">'O'</span>, <span class="hljs-string">'O'</span>, <span class="hljs-string">'O'</span>, <span class="hljs-string">'O'</span>, <span class="hljs-string">'O'</span>, <span class="hljs-string">'LOC'</span>, <span class="hljs-string">'O'</span>, <span class="hljs-string">'LOC'</span>, <span class="hljs-string">'LOC'</span>]</pre></div></div> <div class="relative group rounded-md"><a id="transformers.TFXLMRobertaForTokenClassification.call.example-2" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMRobertaForTokenClassification.call.example-2"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" 
width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>labels = predicted_token_class_ids <span class="hljs-meta">&gt;&gt;&gt; </span>loss = tf.math.reduce_mean(model(**inputs, labels=labels).loss) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-built_in">round</span>(<span class="hljs-built_in">float</span>(loss), <span class="hljs-number">2</span>) <span class="hljs-number">0.01</span></pre></div></div></div></div> <h2 class="relative group"><a id="transformers.TFXLMRobertaForQuestionAnswering" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMRobertaForQuestionAnswering"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-pyumdc">TFXLMRobertaForQuestionAnswering</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.TFXLMRobertaForQuestionAnswering"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 
## TFXLMRobertaForQuestionAnswering

### class transformers.TFXLMRobertaForQuestionAnswering

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py#L1494)

`( *args, **kwargs )`

Parameters

- **config** ([XLMRobertaConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

XLM-RoBERTa Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute `span start logits` and `span end logits`).

This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

TensorFlow models and layers in `transformers` accept two formats as input:

- having all inputs as keyword arguments (like PyTorch models), or
- having all inputs as a list, tuple or dict in the first positional argument.

The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first positional argument (a short sketch follows this list):

- a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
- a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
- a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`

Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) you don't need to worry about any of this, as you can just pass inputs like you would to any other Python function!
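For illustration, here is a minimal sketch of the three calling conventions side by side. The base model class and the `xlm-roberta-base` checkpoint are used only as assumptions for the example (any XLM-RoBERTa checkpoint with TensorFlow weights would do); the same conventions apply to `TFXLMRobertaForQuestionAnswering`:

```python
>>> from transformers import AutoTokenizer, TFXLMRobertaModel

>>> # Sketch only: assumes TensorFlow weights are available for this checkpoint.
>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
>>> model = TFXLMRobertaModel.from_pretrained("xlm-roberta-base")
>>> inputs = tokenizer("Hello world", return_tensors="tf")

>>> # 1. keyword arguments (like PyTorch models)
>>> outputs = model(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"])

>>> # 2. a list with the tensors in the order given in the docstring
>>> outputs = model([inputs["input_ids"], inputs["attention_mask"]])

>>> # 3. a dictionary mapping input names to tensors
>>> outputs = model({"input_ids": inputs["input_ids"], "attention_mask": inputs["attention_mask"]})
```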
#### call

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py#L1507)

`( input_ids: TFModelInputType | None = None, attention_mask: np.ndarray | tf.Tensor | None = None, token_type_ids: np.ndarray | tf.Tensor | None = None, position_ids: np.ndarray | tf.Tensor | None = None, head_mask: np.ndarray | tf.Tensor | None = None, inputs_embeds: np.ndarray | tf.Tensor | None = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, start_positions: np.ndarray | tf.Tensor | None = None, end_positions: np.ndarray | tf.Tensor | None = None, training: Optional[bool] = False )` → [transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput) or `tuple(tf.Tensor)`

Parameters

- **input_ids** (`Numpy array` or `tf.Tensor` of shape `(batch_size, sequence_length)`) —
  Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.__call__()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) and [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) for details. [What are input IDs?](../glossary#input-ids)
- **attention_mask** (`Numpy array` or `tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*) —
  Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.
  [What are attention masks?](../glossary#attention-mask)
- **token_type_ids** (`Numpy array` or `tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*) —
  Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:
  - 0 corresponds to a *sentence A* token,
  - 1 corresponds to a *sentence B* token.
  [What are token type IDs?](../glossary#token-type-ids)
- **position_ids** (`Numpy array` or `tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*) —
  Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids)
- **head_mask** (`Numpy array` or `tf.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) —
  Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`:
  - 1 indicates the head is **not masked**,
  - 0 indicates the head is **masked**.
- **inputs_embeds** (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) —
  Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **output_attentions** (`bool`, *optional*) —
  Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. This argument can only be used in eager mode; in graph mode the value in the config is used instead.
- **output_hidden_states** (`bool`, *optional*) —
  Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. This argument can only be used in eager mode; in graph mode the value in the config is used instead.
- **return_dict** (`bool`, *optional*) —
  Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple. This argument can be used in eager mode; in graph mode the value will always be set to `True`.
- **training** (`bool`, *optional*, defaults to `False`) —
  Whether or not to use the model in training mode (some modules like dropout have different behaviors between training and evaluation).
- **start_positions** (`tf.Tensor` of shape `(batch_size,)`, *optional*) —
  Labels for the position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (`sequence_length`). Positions outside of the sequence are not taken into account for computing the loss.
- **end_positions** (`tf.Tensor` of shape `(batch_size,)`, *optional*) —
  Labels for the position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (`sequence_length`). Positions outside of the sequence are not taken into account for computing the loss.
<code>config.output_hidden_states=True</code>) — Tuple of <code>tf.Tensor</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>tf.Tensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-gxfb20">The <a href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.TFXLMRobertaForQuestionAnswering">TFXLMRobertaForQuestionAnswering</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.TFXLMRobertaForQuestionAnswering.call.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLMRobertaForQuestionAnswering.call.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute 
```python
>>> from transformers import AutoTokenizer, TFXLMRobertaForQuestionAnswering
>>> import tensorflow as tf

>>> tokenizer = AutoTokenizer.from_pretrained("ydshieh/roberta-base-squad2")
>>> model = TFXLMRobertaForQuestionAnswering.from_pretrained("ydshieh/roberta-base-squad2")

>>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
>>> inputs = tokenizer(question, text, return_tensors="tf")
>>> outputs = model(**inputs)

>>> answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
>>> answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])

>>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
>>> tokenizer.decode(predict_answer_tokens)
' puppet'
```

```python
>>> # target is "nice puppet"
>>> target_start_index = tf.constant([14])
>>> target_end_index = tf.constant([15])

>>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
>>> loss = tf.math.reduce_mean(outputs.loss)
>>> round(float(loss), 2)
0.86
```
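Because the loss is returned whenever `start_positions` and `end_positions` are provided, it can be fed directly into a custom training loop. The following is a minimal sketch of a single fine-tuning step with `tf.GradientTape`, reusing `model`, `inputs` and the target indices from the examples above; the optimizer and learning rate are illustrative choices, not values recommended by the library.

```python
>>> import tensorflow as tf

>>> # illustrative optimizer and learning rate
>>> optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5)

>>> with tf.GradientTape() as tape:
...     outputs = model(
...         **inputs,
...         start_positions=target_start_index,
...         end_positions=target_end_index,
...         training=True,  # enable dropout and other training-only behaviour
...     )
...     loss = tf.math.reduce_mean(outputs.loss)

>>> # back-propagate and apply one optimizer update
>>> grads = tape.gradient(loss, model.trainable_variables)
>>> optimizer.apply_gradients(zip(grads, model.trainable_variables))
```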
height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">FlaxXLMRobertaModel</span></span></h3> <a id="transformers.FlaxXLMRobertaModel" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.FlaxXLMRobertaModel"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_flax_xlm_roberta.py#L1003" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60">: XLMRobertaConfig</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_shape<span class="opacity-60">: typing.Tuple = (1, 1)</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">seed<span class="opacity-60">: int = 0</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">dtype<span class="opacity-60">: dtype = &lt;class 'jax.numpy.float32'&gt;</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">_do_init<span class="opacity-60">: bool = True</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">gradient_checkpointing<span class="opacity-60">: bool = False</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white 
dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaModel.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaModel.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig">XLMRobertaConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-iz46qb">The bare XLM RoBERTa Model transformer outputting raw hidden-states without any specific head on top.</p> <p data-svelte-h="svelte-1co8q4b">This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel">FlaxPreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading, saving and converting weights from PyTorch models)</p> <p data-svelte-h="svelte-9ybkh">This model is also a Flax Linen <a href="https://flax.readthedocs.io/en/latest/flax.linen.html#module" rel="nofollow">flax.linen.Module</a> subclass. 
Use it as a regular Flax Linen module and refer to the Flax documentation for all matters related to general usage and behavior.

Finally, this model supports inherent JAX features such as:

- [Just-In-Time (JIT) compilation](https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit)
- [Automatic Differentiation](https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation)
- [Vectorization](https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap)
- [Parallelization](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap)
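For instance, the forward pass can be compiled with `jax.jit`. The snippet below is a minimal sketch rather than part of the library API: the wrapper function `forward` is a hypothetical helper, and passing `model.params` explicitly is just one way to keep the weights as a traced argument instead of a baked-in constant.

```python
>>> import jax
>>> from transformers import AutoTokenizer, FlaxXLMRobertaModel

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
>>> model = FlaxXLMRobertaModel.from_pretrained("xlm-roberta-base")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="np")


>>> @jax.jit
... def forward(params, input_ids, attention_mask):
...     # hypothetical helper: run the model under JIT with explicit parameters
...     return model(input_ids, attention_mask=attention_mask, params=params).last_hidden_state


>>> last_hidden_states = forward(model.params, inputs["input_ids"], inputs["attention_mask"])
```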
target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60"></span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_type_ids<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">position_ids<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_mask<span class="opacity-60"> = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_hidden_states<span class="opacity-60"> = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_attention_mask<span class="opacity-60"> = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">params<span class="opacity-60">: dict = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">dropout_rng<span class="opacity-60">: PRNGKey = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">train<span class="opacity-60">: bool = False</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">past_key_values<span class="opacity-60">: dict = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling">transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling</a> or 
<code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 6 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaModel.__call__.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaModel.__call__.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_ids</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, sequence_length)</code>) — Indices of input sequence tokens in the vocabulary.<p></p> <p>Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. 
See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p> <p><a href="../glossary#input-ids">What are input IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaModel.__call__.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaModel.__call__.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaModel.__call__.token_type_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaModel.__call__.token_type_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_type_ids</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Segment token indices to indicate first and second portions of the inputs. 
Indices are selected in <code>[0, 1]</code>:<p></p> <ul> <li>0 corresponds to a <em>sentence A</em> token,</li> <li>1 corresponds to a <em>sentence B</em> token.</li> </ul> <p><a href="../glossary#token-type-ids">What are token type IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaModel.__call__.position_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaModel.__call__.position_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>position_ids</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range <code>[0, config.max_position_embeddings - 1]</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaModel.__call__.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaModel.__call__.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>head_mask</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, sequence_length)</code>, <code>optional) -- Mask to nullify selected heads of the attention modules. 
Mask values selected in </code>[0, 1]`:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaModel.__call__.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaModel.__call__.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li></ul> <div id="transformers.FlaxXLMRobertaModel.__call__.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling">transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling</a> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling">transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig">XLMRobertaConfig</a>) and inputs.</p> <ul> <li> <p><strong>last_hidden_state</strong> (<code>jnp.ndarray</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>) — Sequence of hidden-states at the output of the last layer of the model.</p> </li> <li> <p><strong>pooler_output</strong> (<code>jnp.ndarray</code> of shape <code>(batch_size, hidden_size)</code>) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. 
The Linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(jnp.ndarray)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>jnp.ndarray</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(jnp.ndarray)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>jnp.ndarray</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-js3dq1">The <code>FlaxXLMRobertaPreTrainedModel</code> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.FlaxXLMRobertaModel.__call__.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaModel.__call__.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" 
transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, FlaxXLMRobertaModel <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"xlm-roberta-base"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = FlaxXLMRobertaModel.from_pretrained(<span class="hljs-string">"xlm-roberta-base"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(<span class="hljs-string">"Hello, my dog is cute"</span>, return_tensors=<span class="hljs-string">"jax"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model(**inputs) <span class="hljs-meta">&gt;&gt;&gt; </span>last_hidden_states = outputs.last_hidden_state</pre></div></div></div></div> <h2 class="relative group"><a id="transformers.FlaxXLMRobertaForCausalLM" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForCausalLM"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1b4omt9">FlaxXLMRobertaForCausalLM</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.FlaxXLMRobertaForCausalLM"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 
## FlaxXLMRobertaForCausalLM

### class transformers.FlaxXLMRobertaForCausalLM

`( config: XLMRobertaConfig, input_shape: typing.Tuple = (1, 1), seed: int = 0, dtype: dtype = <class 'jax.numpy.float32'>, _do_init: bool = True, gradient_checkpointing: bool = False, **kwargs )`

**Parameters**

- **config** ([XLMRobertaConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained) method to load the model weights.

XLM RoBERTa Model with a language modeling head on top (a linear layer on top of the hidden-states output), e.g. for autoregressive tasks.

This model inherits from [FlaxPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models).

This model is also a Flax Linen [flax.linen.Module](https://flax.readthedocs.io/en/latest/flax.linen.html#module) subclass.
Use it as a regular Flax Linen module and refer to the Flax documentation for all matters related to general usage and behavior.

Finally, this model supports inherent JAX features such as:

- [Just-In-Time (JIT) compilation](https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit)
- [Automatic Differentiation](https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation)
- [Vectorization](https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap)
- [Parallelization](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap)
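As a minimal sketch of what the language-modeling head returns, the forward pass below reads the logits for the next token from the last position of the sequence. Note that the `xlm-roberta-base` checkpoint was pretrained with masked language modeling rather than a causal objective, so this only illustrates the API, not a recipe for high-quality generation.

```python
>>> from transformers import AutoTokenizer, FlaxXLMRobertaForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
>>> model = FlaxXLMRobertaForCausalLM.from_pretrained("xlm-roberta-base")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="np")
>>> outputs = model(**inputs)

>>> # logits for the token following the input sequence
>>> next_token_logits = outputs.logits[:, -1]
```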
target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60"></span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_type_ids<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">position_ids<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_mask<span class="opacity-60"> = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_hidden_states<span class="opacity-60"> = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_attention_mask<span class="opacity-60"> = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">params<span class="opacity-60">: dict = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">dropout_rng<span class="opacity-60">: PRNGKey = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">train<span class="opacity-60">: bool = False</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">past_key_values<span class="opacity-60">: dict = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a 
href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions">transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions</a> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 6 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaForCausalLM.__call__.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForCausalLM.__call__.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_ids</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, sequence_length)</code>) — Indices of input sequence tokens in the vocabulary.<p></p> <p>Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. 
See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p> <p><a href="../glossary#input-ids">What are input IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaForCausalLM.__call__.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForCausalLM.__call__.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaForCausalLM.__call__.token_type_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForCausalLM.__call__.token_type_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_type_ids</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Segment token indices to indicate first and second portions of the inputs. 
Indices are selected in <code>[0, 1]</code>:<p></p> <ul> <li>0 corresponds to a <em>sentence A</em> token,</li> <li>1 corresponds to a <em>sentence B</em> token.</li> </ul> <p><a href="../glossary#token-type-ids">What are token type IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaForCausalLM.__call__.position_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForCausalLM.__call__.position_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>position_ids</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range <code>[0, config.max_position_embeddings - 1]</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaForCausalLM.__call__.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForCausalLM.__call__.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>head_mask</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, sequence_length)</code>, <code>optional) -- Mask to nullify selected heads of the attention modules. 
Mask values selected in </code>[0, 1]`:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaForCausalLM.__call__.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForCausalLM.__call__.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li></ul> <div id="transformers.FlaxXLMRobertaForCausalLM.__call__.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions">transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions</a> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions">transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig">XLMRobertaConfig</a>) and inputs.</p> <ul> <li> <p><strong>logits</strong> (<code>jnp.ndarray</code> of shape <code>(batch_size, sequence_length, config.vocab_size)</code>) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(jnp.ndarray)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>jnp.ndarray</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(jnp.ndarray)</code>, 
<em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>jnp.ndarray</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> <li> <p><strong>cross_attentions</strong> (<code>tuple(jnp.ndarray)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>jnp.ndarray</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Cross attentions weights after the attention softmax, used to compute the weighted average in the cross-attention heads.</p> </li> <li> <p><strong>past_key_values</strong> (<code>tuple(tuple(jnp.ndarray))</code>, <em>optional</em>, returned when <code>use_cache=True</code> is passed or when <code>config.use_cache=True</code>) — Tuple of <code>jnp.ndarray</code> tuples of length <code>config.n_layers</code>, with each tuple containing the cached key, value states of the self-attention and the cross-attention layers if model is used in encoder-decoder setting. Only relevant if <code>config.is_decoder = True</code>.</p> <p>Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see <code>past_key_values</code> input) to speed up sequential decoding.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-js3dq1">The <code>FlaxXLMRobertaPreTrainedModel</code> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.FlaxXLMRobertaForCausalLM.__call__.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForCausalLM.__call__.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center 
relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, FlaxXLMRobertaForCausalLM <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"xlm-roberta-base"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = FlaxXLMRobertaForCausalLM.from_pretrained(<span class="hljs-string">"xlm-roberta-base"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(<span class="hljs-string">"Hello, my dog is cute"</span>, return_tensors=<span class="hljs-string">"np"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model(**inputs) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># retrieve logts for next token</span> <span class="hljs-meta">&gt;&gt;&gt; </span>next_token_logits = outputs.logits[:, -<span class="hljs-number">1</span>]</pre></div></div></div></div> <h2 class="relative group"><a id="transformers.FlaxXLMRobertaForMaskedLM" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForMaskedLM"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-lhrtjp">FlaxXLMRobertaForMaskedLM</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.FlaxXLMRobertaForMaskedLM"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl 
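Since `xlm-roberta-base` was pretrained with masked language modelling rather than causal language modelling, the raw next-token scores above are mainly useful for illustration. A minimal, hypothetical continuation of the example picks the highest-scoring token greedily and decodes it:

```python
>>> import jax.numpy as jnp

>>> # greedy choice of the next token (illustration only; a causal fine-tune would be needed for fluent text)
>>> next_token_id = int(jnp.argmax(next_token_logits, axis=-1)[0])
>>> tokenizer.decode([next_token_id])
```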
## FlaxXLMRobertaForMaskedLM

### class transformers.FlaxXLMRobertaForMaskedLM

`( config: XLMRobertaConfig, input_shape: typing.Tuple = (1, 1), seed: int = 0, dtype: dtype = <class 'jax.numpy.float32'>, _do_init: bool = True, gradient_checkpointing: bool = False, **kwargs )`

Parameters:

- **config** ([XLMRobertaConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained) method to load the model weights.

XLM RoBERTa Model with a `language modeling` head on top.

This model inherits from [FlaxPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models).

This model is also a Flax Linen [flax.linen.Module](https://flax.readthedocs.io/en/latest/flax.linen.html#module) subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to general usage and behavior.

Finally, this model supports inherent JAX features such as:

- [Just-In-Time (JIT) compilation](https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit)
- [Automatic Differentiation](https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation)
- [Vectorization](https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap)
- [Parallelization](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap)
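As a minimal sketch of the JIT support listed above (assuming the `xlm-roberta-base` checkpoint, and padding to a fixed length so that repeated calls can reuse the same compiled function):

```python
>>> import jax
>>> from transformers import AutoTokenizer, FlaxXLMRobertaForMaskedLM

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
>>> model = FlaxXLMRobertaForMaskedLM.from_pretrained("xlm-roberta-base")

>>> @jax.jit
... def forward(input_ids, attention_mask):
...     # return only the logits so the compiled function outputs a plain array
...     return model(input_ids=input_ids, attention_mask=attention_mask).logits

>>> inputs = tokenizer("Hello <mask>!", return_tensors="np", padding="max_length", max_length=16)
>>> logits = forward(inputs["input_ids"], inputs["attention_mask"])
```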
target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60"></span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_type_ids<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">position_ids<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_mask<span class="opacity-60"> = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_hidden_states<span class="opacity-60"> = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_attention_mask<span class="opacity-60"> = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">params<span class="opacity-60">: dict = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">dropout_rng<span class="opacity-60">: PRNGKey = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">train<span class="opacity-60">: bool = False</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">past_key_values<span class="opacity-60">: dict = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling">transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling</a> or 
<code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 6 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaForMaskedLM.__call__.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForMaskedLM.__call__.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_ids</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, sequence_length)</code>) — Indices of input sequence tokens in the vocabulary.<p></p> <p>Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. 
See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p> <p><a href="../glossary#input-ids">What are input IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaForMaskedLM.__call__.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForMaskedLM.__call__.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaForMaskedLM.__call__.token_type_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForMaskedLM.__call__.token_type_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_type_ids</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Segment token indices to indicate first and second portions of the inputs. 
Indices are selected in <code>[0, 1]</code>:<p></p> <ul> <li>0 corresponds to a <em>sentence A</em> token,</li> <li>1 corresponds to a <em>sentence B</em> token.</li> </ul> <p><a href="../glossary#token-type-ids">What are token type IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaForMaskedLM.__call__.position_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForMaskedLM.__call__.position_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>position_ids</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range <code>[0, config.max_position_embeddings - 1]</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaForMaskedLM.__call__.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForMaskedLM.__call__.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>head_mask</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, sequence_length)</code>, <code>optional) -- Mask to nullify selected heads of the attention modules. 
Mask values selected in </code>[0, 1]`:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaForMaskedLM.__call__.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForMaskedLM.__call__.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li></ul> <div id="transformers.FlaxXLMRobertaForMaskedLM.__call__.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling">transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling</a> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling">transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig">XLMRobertaConfig</a>) and inputs.</p> <ul> <li> <p><strong>last_hidden_state</strong> (<code>jnp.ndarray</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>) — Sequence of hidden-states at the output of the last layer of the model.</p> </li> <li> <p><strong>pooler_output</strong> (<code>jnp.ndarray</code> of shape <code>(batch_size, hidden_size)</code>) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. 
The Linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(jnp.ndarray)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>jnp.ndarray</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(jnp.ndarray)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>jnp.ndarray</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-js3dq1">The <code>FlaxXLMRobertaPreTrainedModel</code> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.FlaxXLMRobertaForMaskedLM.__call__.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForMaskedLM.__call__.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path 
d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, FlaxXLMRobertaForMaskedLM <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"xlm-roberta-base"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = FlaxXLMRobertaForMaskedLM.from_pretrained(<span class="hljs-string">"xlm-roberta-base"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(<span class="hljs-string">"The capital of France is [MASK]."</span>, return_tensors=<span class="hljs-string">"jax"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model(**inputs) <span class="hljs-meta">&gt;&gt;&gt; </span>logits = outputs.logits</pre></div></div></div></div> <h2 class="relative group"><a id="transformers.FlaxXLMRobertaForSequenceClassification" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForSequenceClassification"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1n9ljiy">FlaxXLMRobertaForSequenceClassification</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.FlaxXLMRobertaForSequenceClassification"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 
3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">FlaxXLMRobertaForSequenceClassification</span></span></h3> <a id="transformers.FlaxXLMRobertaForSequenceClassification" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.FlaxXLMRobertaForSequenceClassification"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_flax_xlm_roberta.py#L1143" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60">: XLMRobertaConfig</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_shape<span class="opacity-60">: typing.Tuple = (1, 1)</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">seed<span class="opacity-60">: int = 0</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">dtype<span class="opacity-60">: dtype = &lt;class 'jax.numpy.float32'&gt;</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">_do_init<span class="opacity-60">: bool = True</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">gradient_checkpointing<span class="opacity-60">: bool = False</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> 
<div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaForSequenceClassification.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForSequenceClassification.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig">XLMRobertaConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1g3g1ku">XLM Roberta Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks.</p> <p data-svelte-h="svelte-1co8q4b">This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel">FlaxPreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading, saving and converting weights from PyTorch models)</p> <p data-svelte-h="svelte-9ybkh">This model is also a Flax Linen <a href="https://flax.readthedocs.io/en/latest/flax.linen.html#module" rel="nofollow">flax.linen.Module</a> subclass. 
Use it as a regular Flax linen Module and refer to the Flax documentation for all matter related to general usage and behavior.</p> <p data-svelte-h="svelte-1pplc4a">Finally, this model supports inherent JAX features such as:</p> <ul data-svelte-h="svelte-1w7z84m"><li><a href="https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit" rel="nofollow">Just-In-Time (JIT) compilation</a></li> <li><a href="https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation" rel="nofollow">Automatic Differentiation</a></li> <li><a href="https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap" rel="nofollow">Vectorization</a></li> <li><a href="https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap" rel="nofollow">Parallelization</a></li></ul> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.FlaxXLMRobertaForSequenceClassification.__call__"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>__call__</span></h4> <a id="transformers.FlaxXLMRobertaForSequenceClassification.__call__" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.FlaxXLMRobertaForSequenceClassification.__call__"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" 
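The number of output labels for the classification head is taken from the config. A minimal, hypothetical sketch (the head on top of the pretrained encoder is newly initialised and still needs fine-tuning):

```python
>>> from transformers import FlaxXLMRobertaForSequenceClassification

>>> # hypothetical 3-way classification setup on top of the pretrained encoder
>>> model = FlaxXLMRobertaForSequenceClassification.from_pretrained("xlm-roberta-base", num_labels=3)
```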
href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_flax_xlm_roberta.py#L829" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60"></span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_type_ids<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">position_ids<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_mask<span class="opacity-60"> = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_hidden_states<span class="opacity-60"> = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_attention_mask<span class="opacity-60"> = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">params<span class="opacity-60">: dict = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">dropout_rng<span class="opacity-60">: PRNGKey = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">train<span class="opacity-60">: bool = False</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">past_key_values<span class="opacity-60">: dict = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a 
href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput">transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput</a> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 6 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaForSequenceClassification.__call__.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForSequenceClassification.__call__.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_ids</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, sequence_length)</code>) — Indices of input sequence tokens in the vocabulary.<p></p> <p>Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. 
See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p> <p><a href="../glossary#input-ids">What are input IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaForSequenceClassification.__call__.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForSequenceClassification.__call__.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaForSequenceClassification.__call__.token_type_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForSequenceClassification.__call__.token_type_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_type_ids</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Segment token indices to indicate first and second portions of the inputs. 
Indices are selected in <code>[0, 1]</code>:<p></p> <ul> <li>0 corresponds to a <em>sentence A</em> token,</li> <li>1 corresponds to a <em>sentence B</em> token.</li> </ul> <p><a href="../glossary#token-type-ids">What are token type IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaForSequenceClassification.__call__.position_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForSequenceClassification.__call__.position_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>position_ids</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range <code>[0, config.max_position_embeddings - 1]</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaForSequenceClassification.__call__.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForSequenceClassification.__call__.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>head_mask</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, sequence_length)</code>, <code>optional) -- Mask to nullify selected heads of the attention modules. 
Mask values selected in </code>[0, 1]`:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaForSequenceClassification.__call__.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForSequenceClassification.__call__.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li></ul> <div id="transformers.FlaxXLMRobertaForSequenceClassification.__call__.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput">transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput</a> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput">transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig">XLMRobertaConfig</a>) and inputs.</p> <ul> <li> <p><strong>logits</strong> (<code>jnp.ndarray</code> of shape <code>(batch_size, config.num_labels)</code>) — Classification (or regression if config.num_labels==1) scores (before SoftMax).</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(jnp.ndarray)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>jnp.ndarray</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(jnp.ndarray)</code>, <em>optional</em>, returned when 
<code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>jnp.ndarray</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-js3dq1">The <code>FlaxXLMRobertaPreTrainedModel</code> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.FlaxXLMRobertaForSequenceClassification.__call__.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForSequenceClassification.__call__.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> 
transformers <span class="hljs-keyword">import</span> AutoTokenizer, FlaxXLMRobertaForSequenceClassification <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"xlm-roberta-base"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = FlaxXLMRobertaForSequenceClassification.from_pretrained(<span class="hljs-string">"xlm-roberta-base"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(<span class="hljs-string">"Hello, my dog is cute"</span>, return_tensors=<span class="hljs-string">"jax"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model(**inputs) <span class="hljs-meta">&gt;&gt;&gt; </span>logits = outputs.logits</pre></div></div></div></div> <h2 class="relative group"><a id="transformers.FlaxXLMRobertaForMultipleChoice" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForMultipleChoice"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1barvbc">FlaxXLMRobertaForMultipleChoice</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.FlaxXLMRobertaForMultipleChoice"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">FlaxXLMRobertaForMultipleChoice</span></span></h3> <a id="transformers.FlaxXLMRobertaForMultipleChoice" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.FlaxXLMRobertaForMultipleChoice"><svg class="" 
xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_flax_xlm_roberta.py#L1224" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60">: XLMRobertaConfig</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_shape<span class="opacity-60">: typing.Tuple = (1, 1)</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">seed<span class="opacity-60">: int = 0</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">dtype<span class="opacity-60">: dtype = &lt;class 'jax.numpy.float32'&gt;</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">_do_init<span class="opacity-60">: bool = True</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">gradient_checkpointing<span class="opacity-60">: bool = False</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaForMultipleChoice.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForMultipleChoice.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" 
height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig">XLMRobertaConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-xq071m">XLM Roberta Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks.</p> <p data-svelte-h="svelte-1co8q4b">This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel">FlaxPreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading, saving and converting weights from PyTorch models)</p> <p data-svelte-h="svelte-9ybkh">This model is also a Flax Linen <a href="https://flax.readthedocs.io/en/latest/flax.linen.html#module" rel="nofollow">flax.linen.Module</a> subclass. 
Use it as a regular Flax linen Module and refer to the Flax documentation for all matter related to general usage and behavior.</p> <p data-svelte-h="svelte-1pplc4a">Finally, this model supports inherent JAX features such as:</p> <ul data-svelte-h="svelte-1w7z84m"><li><a href="https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit" rel="nofollow">Just-In-Time (JIT) compilation</a></li> <li><a href="https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation" rel="nofollow">Automatic Differentiation</a></li> <li><a href="https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap" rel="nofollow">Vectorization</a></li> <li><a href="https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap" rel="nofollow">Parallelization</a></li></ul> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.FlaxXLMRobertaForMultipleChoice.__call__"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>__call__</span></h4> <a id="transformers.FlaxXLMRobertaForMultipleChoice.__call__" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.FlaxXLMRobertaForMultipleChoice.__call__"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" 
href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_flax_xlm_roberta.py#L829" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60"></span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_type_ids<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">position_ids<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_mask<span class="opacity-60"> = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_hidden_states<span class="opacity-60"> = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_attention_mask<span class="opacity-60"> = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">params<span class="opacity-60">: dict = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">dropout_rng<span class="opacity-60">: PRNGKey = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">train<span class="opacity-60">: bool = False</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">past_key_values<span class="opacity-60">: dict = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a 
href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput">transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput</a> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 6 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaForMultipleChoice.__call__.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForMultipleChoice.__call__.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_ids</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, num_choices, sequence_length)</code>) — Indices of input sequence tokens in the vocabulary.<p></p> <p>Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. 
See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p> <p><a href="../glossary#input-ids">What are input IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaForMultipleChoice.__call__.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForMultipleChoice.__call__.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, num_choices, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaForMultipleChoice.__call__.token_type_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForMultipleChoice.__call__.token_type_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_type_ids</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, num_choices, sequence_length)</code>, <em>optional</em>) — Segment token indices to indicate first and second portions of the inputs. 
Indices are selected in <code>[0, 1]</code>:<p></p> <ul> <li>0 corresponds to a <em>sentence A</em> token,</li> <li>1 corresponds to a <em>sentence B</em> token.</li> </ul> <p><a href="../glossary#token-type-ids">What are token type IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaForMultipleChoice.__call__.position_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForMultipleChoice.__call__.position_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>position_ids</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, num_choices, sequence_length)</code>, <em>optional</em>) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range <code>[0, config.max_position_embeddings - 1]</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaForMultipleChoice.__call__.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForMultipleChoice.__call__.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>head_mask</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, num_choices, sequence_length)</code>, <code>optional) -- Mask to nullify selected heads of the attention modules. 
Mask values selected in </code>[0, 1]`:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaForMultipleChoice.__call__.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForMultipleChoice.__call__.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li></ul> <div id="transformers.FlaxXLMRobertaForMultipleChoice.__call__.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput">transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput</a> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput">transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig">XLMRobertaConfig</a>) and inputs.</p> <ul> <li> <p><strong>logits</strong> (<code>jnp.ndarray</code> of shape <code>(batch_size, num_choices)</code>) — <em>num_choices</em> is the second dimension of the input tensors. 
(see <em>input_ids</em> above).</p> <p>Classification scores (before SoftMax).</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(jnp.ndarray)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>jnp.ndarray</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(jnp.ndarray)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>jnp.ndarray</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-js3dq1">The <code>FlaxXLMRobertaPreTrainedModel</code> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.FlaxXLMRobertaForMultipleChoice.__call__.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForMultipleChoice.__call__.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" 
transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, FlaxXLMRobertaForMultipleChoice <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"xlm-roberta-base"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = FlaxXLMRobertaForMultipleChoice.from_pretrained(<span class="hljs-string">"xlm-roberta-base"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>prompt = <span class="hljs-string">"In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."</span> <span class="hljs-meta">&gt;&gt;&gt; </span>choice0 = <span class="hljs-string">"It is eaten with a fork and a knife."</span> <span class="hljs-meta">&gt;&gt;&gt; </span>choice1 = <span class="hljs-string">"It is eaten while held in the hand."</span> <span class="hljs-meta">&gt;&gt;&gt; </span>encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors=<span class="hljs-string">"jax"</span>, padding=<span class="hljs-literal">True</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model(**{k: v[<span class="hljs-literal">None</span>, :] <span class="hljs-keyword">for</span> k, v <span class="hljs-keyword">in</span> encoding.items()}) <span class="hljs-meta">&gt;&gt;&gt; </span>logits = outputs.logits</pre></div></div></div></div> <h2 class="relative group"><a id="transformers.FlaxXLMRobertaForTokenClassification" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForTokenClassification"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-rh1fm2">FlaxXLMRobertaForTokenClassification</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.FlaxXLMRobertaForTokenClassification"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 
## FlaxXLMRobertaForTokenClassification

**class transformers.FlaxXLMRobertaForTokenClassification** [source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_flax_xlm_roberta.py#L1306)

`( config: XLMRobertaConfig, input_shape: typing.Tuple = (1, 1), seed: int = 0, dtype: dtype = <class 'jax.numpy.float32'>, _do_init: bool = True, gradient_checkpointing: bool = False, **kwargs )`

**Parameters**

- **config** ([XLMRobertaConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig)) -- Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained) method to load the model weights.

XLM-RoBERTa Model with a token classification head on top (a linear layer on top of the hidden-states output), e.g. for Named-Entity-Recognition (NER) tasks.

This model inherits from [FlaxPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen [flax.linen.Module](https://flax.readthedocs.io/en/latest/flax.linen.html#module) subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to general usage and behavior.

Finally, this model supports inherent JAX features (a usage sketch follows the list below) such as:

- [Just-In-Time (JIT) compilation](https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit)
- [Automatic Differentiation](https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation)
- [Vectorization](https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap)
- [Parallelization](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap)
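As a rough usage sketch of those JAX features (illustrative only, not part of the reference documentation: it assumes the Flax `from_pretrained()` accepts a `dtype` keyword controlling the computation dtype, and it wraps the forward pass in `jax.jit`):

```python
>>> import jax
>>> import jax.numpy as jnp
>>> from transformers import AutoTokenizer, FlaxXLMRobertaForTokenClassification

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
>>> # assumption: the Flax from_pretrained exposes a `dtype` argument for half-precision computation
>>> model = FlaxXLMRobertaForTokenClassification.from_pretrained("xlm-roberta-base", dtype=jnp.bfloat16)

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")

>>> # JIT-compile a small wrapper around the module call
>>> def forward(input_ids, attention_mask):
...     return model(input_ids, attention_mask=attention_mask).logits

>>> jit_forward = jax.jit(forward)
>>> logits = jit_forward(inputs["input_ids"], inputs["attention_mask"])
```

Repeated calls with the same input shapes reuse the compiled program; new shapes trigger recompilation.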
href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_flax_xlm_roberta.py#L829" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60"></span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_type_ids<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">position_ids<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_mask<span class="opacity-60"> = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_hidden_states<span class="opacity-60"> = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_attention_mask<span class="opacity-60"> = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">params<span class="opacity-60">: dict = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">dropout_rng<span class="opacity-60">: PRNGKey = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">train<span class="opacity-60">: bool = False</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">past_key_values<span class="opacity-60">: dict = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a 
href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxTokenClassifierOutput">transformers.modeling_flax_outputs.FlaxTokenClassifierOutput</a> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 6 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaForTokenClassification.__call__.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForTokenClassification.__call__.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_ids</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, sequence_length)</code>) — Indices of input sequence tokens in the vocabulary.<p></p> <p>Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. 
See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p> <p><a href="../glossary#input-ids">What are input IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaForTokenClassification.__call__.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForTokenClassification.__call__.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaForTokenClassification.__call__.token_type_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForTokenClassification.__call__.token_type_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_type_ids</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Segment token indices to indicate first and second portions of the inputs. 
Indices are selected in <code>[0, 1]</code>:<p></p> <ul> <li>0 corresponds to a <em>sentence A</em> token,</li> <li>1 corresponds to a <em>sentence B</em> token.</li> </ul> <p><a href="../glossary#token-type-ids">What are token type IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaForTokenClassification.__call__.position_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForTokenClassification.__call__.position_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>position_ids</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range <code>[0, config.max_position_embeddings - 1]</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaForTokenClassification.__call__.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForTokenClassification.__call__.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>head_mask</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, sequence_length)</code>, <code>optional) -- Mask to nullify selected heads of the attention modules. 
Mask values selected in </code>[0, 1]`:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaForTokenClassification.__call__.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForTokenClassification.__call__.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li></ul> <div id="transformers.FlaxXLMRobertaForTokenClassification.__call__.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxTokenClassifierOutput">transformers.modeling_flax_outputs.FlaxTokenClassifierOutput</a> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxTokenClassifierOutput">transformers.modeling_flax_outputs.FlaxTokenClassifierOutput</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig">XLMRobertaConfig</a>) and inputs.</p> <ul> <li> <p><strong>logits</strong> (<code>jnp.ndarray</code> of shape <code>(batch_size, sequence_length, config.num_labels)</code>) — Classification scores (before SoftMax).</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(jnp.ndarray)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>jnp.ndarray</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(jnp.ndarray)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or 
when <code>config.output_attentions=True</code>) — Tuple of <code>jnp.ndarray</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-js3dq1">The <code>FlaxXLMRobertaPreTrainedModel</code> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.FlaxXLMRobertaForTokenClassification.__call__.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForTokenClassification.__call__.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> 
```python
>>> from transformers import AutoTokenizer, FlaxXLMRobertaForTokenClassification

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
>>> model = FlaxXLMRobertaForTokenClassification.from_pretrained("xlm-roberta-base")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")

>>> outputs = model(**inputs)
>>> logits = outputs.logits
```
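A common follow-up, shown here as a hedged sketch rather than as part of the reference example, is to argmax the logits over the label dimension and map the ids to names with `config.id2label`. With the base `xlm-roberta-base` checkpoint the classification head is randomly initialised, so the labels are only meaningful for a fine-tuned token-classification checkpoint:

```python
>>> import jax.numpy as jnp

>>> # (batch_size, sequence_length): highest-scoring label id for every token
>>> predicted_ids = jnp.argmax(logits, axis=-1)
>>> predicted_labels = [model.config.id2label[int(i)] for i in predicted_ids[0]]
>>> tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
>>> token_label_pairs = list(zip(tokens, predicted_labels))
```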
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_flax_xlm_roberta.py#L1383" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60">: XLMRobertaConfig</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_shape<span class="opacity-60">: typing.Tuple = (1, 1)</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">seed<span class="opacity-60">: int = 0</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">dtype<span class="opacity-60">: dtype = &lt;class 'jax.numpy.float32'&gt;</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">_do_init<span class="opacity-60">: bool = True</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">gradient_checkpointing<span class="opacity-60">: bool = False</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaForQuestionAnswering.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForQuestionAnswering.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" 
preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig">XLMRobertaConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1xzmurh">XLM Roberta Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute <code>span start logits</code> and <code>span end logits</code>).</p> <p data-svelte-h="svelte-1co8q4b">This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel">FlaxPreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading, saving and converting weights from PyTorch models)</p> <p data-svelte-h="svelte-9ybkh">This model is also a Flax Linen <a href="https://flax.readthedocs.io/en/latest/flax.linen.html#module" rel="nofollow">flax.linen.Module</a> subclass. 
This model is also a Flax Linen [flax.linen.Module](https://flax.readthedocs.io/en/latest/flax.linen.html#module) subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to general usage and behavior.

Finally, this model supports inherent JAX features such as:

- [Just-In-Time (JIT) compilation](https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit)
- [Automatic Differentiation](https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation)
- [Vectorization](https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap)
- [Parallelization](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap)
href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta/modeling_flax_xlm_roberta.py#L829" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60"></span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_type_ids<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">position_ids<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_mask<span class="opacity-60"> = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_hidden_states<span class="opacity-60"> = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_attention_mask<span class="opacity-60"> = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">params<span class="opacity-60">: dict = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">dropout_rng<span class="opacity-60">: PRNGKey = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">train<span class="opacity-60">: bool = False</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">past_key_values<span class="opacity-60">: dict = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a 
href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxQuestionAnsweringModelOutput">transformers.modeling_flax_outputs.FlaxQuestionAnsweringModelOutput</a> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 6 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaForQuestionAnswering.__call__.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForQuestionAnswering.__call__.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_ids</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, sequence_length)</code>) — Indices of input sequence tokens in the vocabulary.<p></p> <p>Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. 
See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p> <p><a href="../glossary#input-ids">What are input IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaForQuestionAnswering.__call__.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForQuestionAnswering.__call__.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaForQuestionAnswering.__call__.token_type_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForQuestionAnswering.__call__.token_type_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_type_ids</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Segment token indices to indicate first and second portions of the inputs. 
Indices are selected in <code>[0, 1]</code>:<p></p> <ul> <li>0 corresponds to a <em>sentence A</em> token,</li> <li>1 corresponds to a <em>sentence B</em> token.</li> </ul> <p><a href="../glossary#token-type-ids">What are token type IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaForQuestionAnswering.__call__.position_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForQuestionAnswering.__call__.position_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>position_ids</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range <code>[0, config.max_position_embeddings - 1]</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaForQuestionAnswering.__call__.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForQuestionAnswering.__call__.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>head_mask</strong> (<code>numpy.ndarray</code> of shape <code>(batch_size, sequence_length)</code>, <code>optional) -- Mask to nullify selected heads of the attention modules. 
Mask values selected in </code>[0, 1]`:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxXLMRobertaForQuestionAnswering.__call__.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForQuestionAnswering.__call__.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li></ul> <div id="transformers.FlaxXLMRobertaForQuestionAnswering.__call__.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxQuestionAnsweringModelOutput">transformers.modeling_flax_outputs.FlaxQuestionAnsweringModelOutput</a> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxQuestionAnsweringModelOutput">transformers.modeling_flax_outputs.FlaxQuestionAnsweringModelOutput</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta#transformers.XLMRobertaConfig">XLMRobertaConfig</a>) and inputs.</p> <ul> <li> <p><strong>start_logits</strong> (<code>jnp.ndarray</code> of shape <code>(batch_size, sequence_length)</code>) — Span-start scores (before SoftMax).</p> </li> <li> <p><strong>end_logits</strong> (<code>jnp.ndarray</code> of shape <code>(batch_size, sequence_length)</code>) — Span-end scores (before SoftMax).</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(jnp.ndarray)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>jnp.ndarray</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the initial embedding 
outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(jnp.ndarray)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>jnp.ndarray</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-js3dq1">The <code>FlaxXLMRobertaPreTrainedModel</code> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.FlaxXLMRobertaForQuestionAnswering.__call__.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxXLMRobertaForQuestionAnswering.__call__.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> 
Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, FlaxXLMRobertaForQuestionAnswering <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"xlm-roberta-base"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = FlaxXLMRobertaForQuestionAnswering.from_pretrained(<span class="hljs-string">"xlm-roberta-base"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>question, text = <span class="hljs-string">"Who was Jim Henson?"</span>, <span class="hljs-string">"Jim Henson was a nice puppet"</span> <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(question, text, return_tensors=<span class="hljs-string">"jax"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model(**inputs) <span class="hljs-meta">&gt;&gt;&gt; </span>start_scores = outputs.start_logits <span class="hljs-meta">&gt;&gt;&gt; </span>end_scores = outputs.end_logits</pre></div></div></div></div> <p></p> <div id="svelte-announcer" aria-live="assertive" aria-atomic="true" style="position: absolute; left: 0px; top: 0px; clip: rect(0px, 0px, 0px, 0px); clip-path: inset(50%); overflow: hidden; white-space: nowrap; width: 1px; height: 1px;"></div></div> <div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>XLM-ProphetNet</a> <a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">XLM-RoBERTa-XL<span class="ml-2 translate-y-px">→</span></a></div></div></div> <div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" 
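The returned `start_logits` and `end_logits` score each token position as a candidate answer start or end. The follow-up below is a minimal sketch, not part of the original example, showing how such logits are typically turned back into an answer string; note that the plain `xlm-roberta-base` checkpoint has no fine-tuned question-answering head, so the extracted span is only meaningful after fine-tuning.

```python
>>> import jax.numpy as jnp

>>> # Pick the highest-scoring start and end positions for the first example in the batch.
>>> start_index = int(jnp.argmax(start_scores, axis=-1)[0])
>>> end_index = int(jnp.argmax(end_scores, axis=-1)[0])

>>> # Decode the predicted answer span back into text (empty if end_index < start_index,
>>> # which can happen with an untrained QA head).
>>> answer_ids = inputs["input_ids"][0, start_index : end_index + 1]
>>> answer = tokenizer.decode(answer_ids.tolist(), skip_special_tokens=True)
```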
2023-10-05T13:33:36.676Z
XLM-V
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/xlm-v
# XLM-V ## Overview XLM-V is a multilingual language model with a one million token vocabulary trained on 2.5TB of data from Common Crawl (same as XLM-R). It was introduced in the [XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models](https://arxiv.org/abs/2301.10472) paper by Davis Liang, Hila Gonen, Yuning Mao, Rui Hou, Naman Goyal, Marjan Ghazvininejad, Luke Zettlemoyer and Madian Khabsa. From the abstract of the XLM-V paper: _Large multilingual language models typically rely on a single vocabulary shared across 100+ languages. As these models have increased in parameter count and depth, vocabulary size has remained largely unchanged. This vocabulary bottleneck limits the representational capabilities of multilingual models like XLM-R. In this paper, we introduce a new approach for scaling to very large multilingual vocabularies by de-emphasizing token sharing between languages with little lexical overlap and assigning vocabulary capacity to achieve sufficient coverage for each individual language. Tokenizations using our vocabulary are typically more semantically meaningful and shorter compared to XLM-R. Leveraging this improved vocabulary, we train XLM-V, a multilingual language model with a one million token vocabulary. XLM-V outperforms XLM-R on every task we tested on ranging from natural language inference (XNLI), question answering (MLQA, XQuAD, TyDiQA), and named entity recognition (WikiAnn) to low-resource tasks (Americas NLI, MasakhaNER)._ Tips: - XLM-V is compatible with the XLM-RoBERTa model architecture; only the model weights from the [`fairseq`](https://github.com/facebookresearch/fairseq) library had to be converted. - The `XLMTokenizer` implementation is used to load the vocabulary and perform tokenization. An XLM-V (base size) model is available under the [`facebook/xlm-v-base`](https://huggingface.co/facebook/xlm-v-base) identifier. This model was contributed by [stefan-it](https://huggingface.co/stefan-it), including detailed experiments with XLM-V on downstream tasks. The experiments repository can be found [here](https://github.com/stefan-it/xlm-v-experiments).
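Since XLM-V reuses the XLM-RoBERTa architecture and was pretrained with a masked-language-modeling objective, the standard fill-mask API applies to the `facebook/xlm-v-base` checkpoint. The snippet below is a minimal usage sketch added for illustration (it is not part of the original page), assuming the tokenizer's mask token is `<mask>` as in XLM-RoBERTa.

```python
>>> from transformers import pipeline

>>> # Load the base XLM-V checkpoint into the fill-mask pipeline and predict the masked token.
>>> unmasker = pipeline("fill-mask", model="facebook/xlm-v-base")
>>> unmasker("Paris is the <mask> of France.")
```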
Japanese&quot;,&quot;id&quot;:&quot;model_doc/gptsan-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese&quot;},{&quot;title&quot;:&quot;GPTSw3&quot;,&quot;id&quot;:&quot;model_doc/gpt-sw3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt-sw3&quot;},{&quot;title&quot;:&quot;HerBERT&quot;,&quot;id&quot;:&quot;model_doc/herbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/herbert&quot;},{&quot;title&quot;:&quot;I-BERT&quot;,&quot;id&quot;:&quot;model_doc/ibert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ibert&quot;},{&quot;title&quot;:&quot;Jukebox&quot;,&quot;id&quot;:&quot;model_doc/jukebox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/jukebox&quot;},{&quot;title&quot;:&quot;LED&quot;,&quot;id&quot;:&quot;model_doc/led&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/led&quot;},{&quot;title&quot;:&quot;LLaMA&quot;,&quot;id&quot;:&quot;model_doc/llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama&quot;},{&quot;title&quot;:&quot;Llama2&quot;,&quot;id&quot;:&quot;model_doc/llama2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama2&quot;},{&quot;title&quot;:&quot;Longformer&quot;,&quot;id&quot;:&quot;model_doc/longformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longformer&quot;},{&quot;title&quot;:&quot;LongT5&quot;,&quot;id&quot;:&quot;model_doc/longt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longt5&quot;},{&quot;title&quot;:&quot;LUKE&quot;,&quot;id&quot;:&quot;model_doc/luke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/luke&quot;},{&quot;title&quot;:&quot;M2M100&quot;,&quot;id&quot;:&quot;model_doc/m2m_100&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/m2m_100&quot;},{&quot;title&quot;:&quot;MarianMT&quot;,&quot;id&quot;:&quot;model_doc/marian&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/marian&quot;},{&quot;title&quot;:&quot;MarkupLM&quot;,&quot;id&quot;:&quot;model_doc/markuplm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/markuplm&quot;},{&quot;title&quot;:&quot;MBart and 
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot;PLBart&quot;,&quot;id&quot;
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/
model_doc/wavlm&quot;},{&quot;title&quot;:&quot;Whisper&quot;,&quot;id&quot;:&quot;model_doc/whisper&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/whisper&quot;},{&quot;title&quot;:&quot;XLS-R&quot;,&quot;id&quot;:&quot;model_doc/xls_r&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xls_r&quot;},{&quot;title&quot;:&quot;XLSR-Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/xlsr_wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2&quot;}]},{&quot;title&quot;:&quot;Multimodal models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALIGN&quot;,&quot;id&quot;:&quot;model_doc/align&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/align&quot;},{&quot;title&quot;:&quot;AltCLIP&quot;,&quot;id&quot;:&quot;model_doc/altclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/altclip&quot;},{&quot;title&quot;:&quot;BLIP&quot;,&quot;id&quot;:&quot;model_doc/blip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip&quot;},{&quot;title&quot;:&quot;BLIP-2&quot;,&quot;id&quot;:&quot;model_doc/blip-2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip-2&quot;},{&quot;title&quot;:&quot;BridgeTower&quot;,&quot;id&quot;:&quot;model_doc/bridgetower&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bridgetower&quot;},{&quot;title&quot;:&quot;BROS&quot;,&quot;id&quot;:&quot;model_doc/bros&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bros&quot;},{&quot;title&quot;:&quot;Chinese-CLIP&quot;,&quot;id&quot;:&quot;model_doc/chinese_clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/chinese_clip&quot;},{&quot;title&quot;:&quot;CLIP&quot;,&quot;id&quot;:&quot;model_doc/clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clip&quot;},{&quot;title&quot;:&quot;CLIPSeg&quot;,&quot;id&quot;:&quot;model_doc/clipseg&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clipseg&quot;},{&quot;title&quot;:&quot;Data2Vec&quot;,&quot;id&quot;:&quot;model_doc/data2vec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/data2vec&quot;},{&quot;title&quot;:&quot;DePlot&quot;,&quot;id&quot;:&quot;model_doc/deplot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deplot&quot;},{&quot;title&quot;:&quot;Donut&quot;,&quot;id&quot;:&quot;model_doc/donut&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/donut&quot;},{&quot;title&quot;:&quot;FLAVA&quot;,&quot;id&quot;:&quot;model_doc/flava&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flava&quot;},{&quot;title&quot;:&quot;GIT&quot;,&quot;id&quot;:&quot;model_doc/git&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/git&quot;},{&quot;title&quot;:&quot;GroupViT&quot;,&quot;id&quot;:&quot;model_doc/groupvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/groupvit&quot;},{&quot;title&quot;:&quot;IDEFICS&quot;,&quot;id&quot;:&quot;model_doc/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/idefics&quot;},{&quot;title&quot;:&quot;InstructBLIP&quot;,&quot;id&quot;:&quot;model_doc/instructblip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/instructblip&quot;},{&quot;title&quot;:&quot;LayoutLM&quot;,&quot;id&quot;:&quot;model_doc/layoutlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlm&quot;},{&quot;title&quot;:&quot;LayoutLMV2&quot;,&quot;i
d&quot;:&quot;model_doc/layoutlmv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv2&quot;},{&quot;title&quot;:&quot;LayoutLMV3&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv3&quot;},{&quot;title&quot;:&quot;LayoutXLM&quot;,&quot;id&quot;:&quot;model_doc/layoutxlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutxlm&quot;},{&quot;title&quot;:&quot;LiLT&quot;,&quot;id&quot;:&quot;model_doc/lilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lilt&quot;},{&quot;title&quot;:&quot;LXMERT&quot;,&quot;id&quot;:&quot;model_doc/lxmert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lxmert&quot;},{&quot;title&quot;:&quot;MatCha&quot;,&quot;id&quot;:&quot;model_doc/matcha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/matcha&quot;},{&quot;title&quot;:&quot;MGP-STR&quot;,&quot;id&quot;:&quot;model_doc/mgp-str&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mgp-str&quot;},{&quot;title&quot;:&quot;Nougat&quot;,&quot;id&quot;:&quot;model_doc/nougat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nougat&quot;},{&quot;title&quot;:&quot;OneFormer&quot;,&quot;id&quot;:&quot;model_doc/oneformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/oneformer&quot;},{&quot;title&quot;:&quot;OWL-ViT&quot;,&quot;id&quot;:&quot;model_doc/owlvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/owlvit&quot;},{&quot;title&quot;:&quot;Perceiver&quot;,&quot;id&quot;:&quot;model_doc/perceiver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/perceiver&quot;},{&quot;title&quot;:&quot;Pix2Struct&quot;,&quot;id&quot;:&quot;model_doc/pix2struct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pix2struct&quot;},{&quot;title&quot;:&quot;Segment Anything&quot;,&quot;id&quot;:&quot;model_doc/sam&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sam&quot;},{&quot;title&quot;:&quot;Speech Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/speech-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder&quot;},{&quot;title&quot;:&quot;TAPAS&quot;,&quot;id&quot;:&quot;model_doc/tapas&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapas&quot;},{&quot;title&quot;:&quot;TrOCR&quot;,&quot;id&quot;:&quot;model_doc/trocr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trocr&quot;},{&quot;title&quot;:&quot;TVLT&quot;,&quot;id&quot;:&quot;model_doc/tvlt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tvlt&quot;},{&quot;title&quot;:&quot;ViLT&quot;,&quot;id&quot;:&quot;model_doc/vilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vilt&quot;},{&quot;title&quot;:&quot;Vision Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/vision-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder&quot;},{&quot;title&quot;:&quot;Vision Text Dual 
Encoder&quot;,&quot;id&quot;:&quot;model_doc/vision-text-dual-encoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder&quot;},{&quot;title&quot;:&quot;VisualBERT&quot;,&quot;id&quot;:&quot;model_doc/visual_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/visual_bert&quot;},{&quot;title&quot;:&quot;X-CLIP&quot;,&quot;id&quot;:&quot;model_doc/xclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xclip&quot;}]},{&quot;title&quot;:&quot;Reinforcement learning models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Decision Transformer&quot;,&quot;id&quot;:&quot;model_doc/decision_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/decision_transformer&quot;},{&quot;title&quot;:&quot;Trajectory Transformer&quot;,&quot;id&quot;:&quot;model_doc/trajectory_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer&quot;}]},{&quot;title&quot;:&quot;Time series models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Autoformer&quot;,&quot;id&quot;:&quot;model_doc/autoformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/autoformer&quot;},{&quot;title&quot;:&quot;Informer&quot;,&quot;id&quot;:&quot;model_doc/informer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/informer&quot;},{&quot;title&quot;:&quot;Time Series Transformer&quot;,&quot;id&quot;:&quot;model_doc/time_series_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/time_series_transformer&quot;}]},{&quot;title&quot;:&quot;Graph models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Graphormer&quot;,&quot;id&quot;:&quot;model_doc/graphormer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/graphormer&quot;}]}]},{&quot;title&quot;:&quot;Internal Helpers&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Custom Layers and Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/modeling_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/modeling_utils&quot;},{&quot;title&quot;:&quot;Utilities for pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/pipelines_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/pipelines_utils&quot;},{&quot;title&quot;:&quot;Utilities for Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/tokenization_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/tokenization_utils&quot;},{&quot;title&quot;:&quot;Utilities for Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/trainer_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/trainer_utils&quot;},{&quot;title&quot;:&quot;Utilities for Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/generation_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/generation_utils&quot;},{&quot;title&quot;:&quot;Utilities for Image Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/image_processing_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/image_processing_utils&quot;},{&quot;title&quot;:&quot;Utilities for Audio 
processing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/audio_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/audio_utils&quot;},{&quot;title&quot;:&quot;General Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/file_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/file_utils&quot;},{&quot;title&quot;:&quot;Utilities for Time Series&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/time_series_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/time_series_utils&quot;}]}]}],&quot;chapterId&quot;:&quot;model_doc/xlm-v&quot;,&quot;docType&quot;:&quot;docs&quot;,&quot;isLoggedIn&quot;:false,&quot;lang&quot;:&quot;en&quot;,&quot;langs&quot;:[&quot;de&quot;,&quot;en&quot;,&quot;es&quot;,&quot;fr&quot;,&quot;it&quot;,&quot;ko&quot;,&quot;pt&quot;,&quot;zh&quot;],&quot;library&quot;:&quot;transformers&quot;,&quot;theme&quot;:&quot;light&quot;,&quot;version&quot;:&quot;v4.34.0&quot;,&quot;versions&quot;:[{&quot;version&quot;:&quot;main&quot;},{&quot;version&quot;:&quot;v4.34.0&quot;},{&quot;version&quot;:&quot;v4.33.3&quot;},{&quot;version&quot;:&quot;v4.33.2&quot;},{&quot;version&quot;:&quot;v4.33.0&quot;},{&quot;version&quot;:&quot;v4.32.1&quot;},{&quot;version&quot;:&quot;v4.32.0&quot;},{&quot;version&quot;:&quot;v4.31.0&quot;},{&quot;version&quot;:&quot;v4.30.0&quot;},{&quot;version&quot;:&quot;v4.29.1&quot;},{&quot;version&quot;:&quot;v4.29.0&quot;},{&quot;version&quot;:&quot;v4.28.1&quot;},{&quot;version&quot;:&quot;v4.28.0&quot;},{&quot;version&quot;:&quot;v4.27.2&quot;},{&quot;version&quot;:&quot;v4.27.1&quot;},{&quot;version&quot;:&quot;v4.27.0&quot;},{&quot;version&quot;:&quot;v4.26.1&quot;},{&quot;version&quot;:&quot;v4.26.0&quot;},{&quot;version&quot;:&quot;v4.25.1&quot;},{&quot;version&quot;:&quot;v4.24.0&quot;},{&quot;version&quot;:&quot;v4.23.1&quot;},{&quot;version&quot;:&quot;v4.23.0&quot;},{&quot;version&quot;:&quot;v4.22.2&quot;},{&quot;version&quot;:&quot;v4.22.1&quot;},{&quot;version&quot;:&quot;v4.22.0&quot;},{&quot;version&quot;:&quot;v4.21.3&quot;},{&quot;version&quot;:&quot;v4.21.2&quot;},{&quot;version&quot;:&quot;v4.21.1&quot;},{&quot;version&quot;:&quot;v4.21.0&quot;},{&quot;version&quot;:&quot;v4.20.1&quot;},{&quot;version&quot;:&quot;v4.20.0&quot;},{&quot;version&quot;:&quot;v4.19.4&quot;},{&quot;version&quot;:&quot;v4.19.3&quot;},{&quot;version&quot;:&quot;v4.19.2&quot;},{&quot;version&quot;:&quot;v4.19.0&quot;},{&quot;version&quot;:&quot;v4.18.0&quot;},{&quot;version&quot;:&quot;v4.17.0&quot;},{&quot;version&quot;:&quot;v4.16.2&quot;},{&quot;version&quot;:&quot;v4.16.1&quot;},{&quot;version&quot;:&quot;v4.16.0&quot;},{&quot;version&quot;:&quot;v4.15.0&quot;},{&quot;version&quot;:&quot;v4.14.1&quot;},{&quot;version&quot;:&quot;v4.13.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.5&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.4&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.10.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1
0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.10.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.0.0&quot;},{&quot;version&quot;:&quot;doc-b
uilder-html&quot;}],&quot;title&quot;:&quot;XLM-V&quot;}" data-target="SideMenu"> <div class="z-2 w-full flex-none lg:block lg:h-screen lg:w-[270px] 2xl:w-[300px] false"><div class="shadow-alternate flex h-16 w-full items-center rounded-b-xl border-b bg-white text-lg leading-tight lg:hidden"> <div class="flex flex-1 cursor-pointer flex-col justify-center self-stretch pl-6"><p class="text-sm text-gray-400 first-letter:capitalize">Transformers documentation </p> <div class="flex items-center"><p class="font-semibold">XLM-V</p> <svg class="text-xl false" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></div></div> <button class="hover:shadow-alternate group ml-auto mr-6 inline-flex flex-none cursor-pointer rounded-xl border p-2"><svg class="text-gray-500 group-hover:text-gray-700" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg></button></div> <div class="hidden h-32 flex-col justify-between border-r border-b bg-white bg-gradient-to-r p-4 lg:flex from-orange-50 to-white dark:from-gray-900 dark:to-gray-950"><div class="relative "> <button class=" " type="button"> <h1 class="flex items-center text-lg font-bold leading-tight first-letter:capitalize"><div class="mr-1.5 h-1.5 w-1.5 rounded-full bg-orange-500 flex-none"></div> Transformers <span><svg class="opacity-70 " xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></span></h1> </button> </div> <button class="shadow-alternate flex w-full items-center rounded-full border bg-white px-2 py-1 text-left text-sm text-gray-400 ring-indigo-200 hover:bg-indigo-50 hover:ring-2 dark:border-gray-700 dark:ring-yellow-600 dark:hover:bg-gray-900 dark:hover:text-yellow-500"><svg class="flex-none mr-1.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg> <div>Search documentation</div> </button> <div class="flex items-center"> <select class="form-input mr-1 !mt-0 !w-20 rounded !border border-gray-200 p-1 text-xs uppercase dark:!text-gray-400"><option value="0">main</option><option value="1" selected="">v4.34.0</option><option value="2">v4.33.3</option><option value="3">v4.32.1</option><option value="4">v4.31.0</option><option value="5">v4.30.0</option><option value="6">v4.29.1</option><option value="7">v4.28.1</option><option value="8">v4.27.2</option><option value="9">v4.26.1</option><option value="10">v4.25.1</option><option value="11">v4.24.0</option><option value="12">v4.23.1</option><option value="13">v4.22.2</option><option value="14">v4.21.3</option><option value="15">v4.20.1</option><option 
value="16">v4.19.4</option><option value="17">v4.18.0</option><option value="18">v4.17.0</option><option value="19">v4.16.2</option><option value="20">v4.15.0</option><option value="21">v4.14.1</option><option value="22">v4.13.0</option><option value="23">v4.12.5</option><option value="24">v4.11.3</option><option value="25">v4.10.1</option><option value="26">v4.9.2</option><option value="27">v4.8.2</option><option value="28">v4.7.0</option><option value="29">v4.6.0</option><option value="30">v4.5.1</option><option value="31">v4.4.2</option><option value="32">v4.3.3</option><option value="33">v4.2.2</option><option value="34">v4.1.1</option><option value="35">v4.0.1</option><option value="36">v3.5.1</option><option value="37">v3.4.0</option><option value="38">v3.3.1</option><option value="39">v3.2.0</option><option value="40">v3.1.0</option><option value="41">v3.0.2</option><option value="42">v2.11.0</option><option value="43">v2.10.0</option><option value="44">v2.9.1</option><option value="45">v2.8.0</option><option value="46">v2.7.0</option><option value="47">v2.6.0</option><option value="48">v2.5.1</option><option value="49">v2.4.1</option><option value="50">v2.3.0</option><option value="51">v2.2.2</option><option value="52">v2.1.1</option><option value="53">v2.0.0</option><option value="54">v1.2.0</option><option value="55">v1.1.0</option><option value="56">v1.0.0</option><option value="57">doc-builder-html</option></select> <select class="form-input mr-1 rounded border-gray-200 p-1 text-xs dark:!text-gray-400 !w-12 !mt-0 !border"><option value="de">DE</option><option value="en" selected="">EN</option><option value="es">ES</option><option value="fr">FR</option><option value="it">IT</option><option value="ko">KO</option><option value="pt">PT</option><option value="zh">ZH</option></select> <div class="relative inline-block"> <button class="rounded-full border border-gray-100 py-1 pl-2 pr-0.5 flex items-center text-sm text-gray-500 bg-white hover:bg-yellow-50 hover:border-yellow-200 dark:hover:bg-gray-800 dark:hover:border-gray-950 " type="button"> <svg class="mr-1.5 text-yellow-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24" fill="currentColor"><path d="M6.05 4.14l-.39-.39a.993.993 0 0 0-1.4 0l-.01.01a.984.984 0 0 0 0 1.4l.39.39c.39.39 1.01.39 1.4 0l.01-.01a.984.984 0 0 0 0-1.4zM3.01 10.5H1.99c-.55 0-.99.44-.99.99v.01c0 .55.44.99.99.99H3c.56.01 1-.43 1-.98v-.01c0-.56-.44-1-.99-1zm9-9.95H12c-.56 0-1 .44-1 .99v.96c0 .55.44.99.99.99H12c.56.01 1-.43 1-.98v-.97c0-.55-.44-.99-.99-.99zm7.74 3.21c-.39-.39-1.02-.39-1.41-.01l-.39.39a.984.984 0 0 0 0 1.4l.01.01c.39.39 1.02.39 1.4 0l.39-.39a.984.984 0 0 0 0-1.4zm-1.81 15.1l.39.39a.996.996 0 1 0 1.41-1.41l-.39-.39a.993.993 0 0 0-1.4 0c-.4.4-.4 1.02-.01 1.41zM20 11.49v.01c0 .55.44.99.99.99H22c.55 0 .99-.44.99-.99v-.01c0-.55-.44-.99-.99-.99h-1.01c-.55 0-.99.44-.99.99zM12 5.5c-3.31 0-6 2.69-6 6s2.69 6 6 6s6-2.69 6-6s-2.69-6-6-6zm-.01 16.95H12c.55 0 .99-.44.99-.99v-.96c0-.55-.44-.99-.99-.99h-.01c-.55 0-.99.44-.99.99v.96c0 .55.44.99.99.99zm-7.74-3.21c.39.39 1.02.39 1.41 0l.39-.39a.993.993 0 0 0 0-1.4l-.01-.01a.996.996 0 0 0-1.41 0l-.39.39c-.38.4-.38 1.02.01 1.41z"></path></svg> </button> </div> <a href="https://github.com/huggingface/transformers" class="group ml-auto text-xs text-gray-500 hover:text-gray-700 hover:underline dark:hover:text-gray-300"><svg class="inline-block text-gray-500 
first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/generation_utils"><!-- HTML_TAG_START -->Utilities for Generation<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/image_processing_utils"><!-- HTML_TAG_START -->Utilities for Image Processors<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/audio_utils"><!-- HTML_TAG_START -->Utilities for Audio processing<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/file_utils"><!-- HTML_TAG_START -->General Utilities<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/time_series_utils"><!-- HTML_TAG_START -->Utilities for Time Series<!-- HTML_TAG_END --> </a> </div> </div></nav></div></div></div> <div class="z-1 min-w-0 flex-1"> <div class="px-6 pt-6 md:px-12 md:pt-16 md:pb-16"><div class="max-w-4xl mx-auto mb-10"><div class="relative overflow-hidden rounded-xl bg-gradient-to-br from-orange-300/10 py-5 px-4 ring-1 ring-orange-100/70 md:px-6 md:py-8"><img alt="Hugging Face's logo" class="absolute -right-6 -bottom-6 w-28 -rotate-45 md:hidden" src="/front/assets/huggingface_logo-noborder.svg"> <div class="mb-2 text-2xl font-bold dark:text-gray-200 md:mb-0">Join the Hugging Face community</div> <p class="mb-4 text-lg text-gray-400 dark:text-gray-300 md:mb-8">and get access to the augmented documentation experience </p> <div class="mb-8 hidden space-y-4 md:block xl:flex xl:space-y-0 xl:space-x-6"><div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-indigo-100 to-indigo-100/20 dark:to-indigo-100"><svg class="text-indigo-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Collaborate on models, datasets and Spaces </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-orange-100 to-orange-100/20 dark:to-orange-50"><svg xmlns="http://www.w3.org/2000/svg" 
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" class="text-xl text-yellow-400" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M11 15H6l7-14v8h5l-7 14v-8z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Faster examples with accelerated inference </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-gray-500/10 to-gray-500/5"><svg class="text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M14.9804 3C14.9217 3.0002 14.8631 3.00555 14.8054 3.016C11.622 3.58252 8.76073 5.30669 6.77248 7.85653C4.78422 10.4064 3.80955 13.6016 4.03612 16.8271C4.26268 20.0525 5.67447 23.0801 7.99967 25.327C10.3249 27.5738 13.3991 28.8811 16.6304 28.997C16.7944 29.003 16.9584 28.997 17.1204 28.997C19.2193 28.9984 21.2877 28.4943 23.1507 27.5274C25.0137 26.5605 26.6164 25.1592 27.8234 23.442C27.9212 23.294 27.9783 23.1229 27.9889 22.9458C27.9995 22.7687 27.9633 22.592 27.884 22.4333C27.8046 22.2747 27.6848 22.1397 27.5367 22.0421C27.3887 21.9444 27.2175 21.8875 27.0404 21.877C25.0426 21.7017 23.112 21.0693 21.3976 20.0288C19.6832 18.9884 18.231 17.5676 17.1533 15.8764C16.0756 14.1852 15.4011 12.2688 15.1822 10.2754C14.9632 8.28193 15.2055 6.26484 15.8904 4.38C15.9486 4.22913 15.97 4.06652 15.9527 3.90572C15.9354 3.74492 15.8799 3.59059 15.7909 3.45557C15.7019 3.32055 15.5819 3.20877 15.4409 3.12952C15.2999 3.05028 15.142 3.00587 14.9804 3Z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Switch between documentation themes </div></div></div> <div class="flex items-center space-x-2.5"><a href="/join"><button class="rounded-lg bg-white bg-gradient-to-br from-gray-100/20 to-gray-200/60 py-1.5 px-5 font-semibold text-gray-700 shadow-sm ring-1 ring-gray-300/60 hover:to-gray-100/70 hover:ring-gray-300/30 active:shadow-inner">Sign Up</button></a> <p class="text-gray-500 dark:text-gray-300">to get started</p></div></div></div> <div class="prose-doc prose relative mx-auto max-w-4xl break-words"><!-- HTML_TAG_START --> <link href="/docs/transformers/v4.34.0/en/_app/immutable/assets/0.e3b0c442.css" rel="modulepreload"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/entry/start.c2db227a.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/scheduler.9bc65507.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/singletons.e3057404.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/index.3b203c72.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/paths.e7de6301.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/entry/app.879d9b87.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/index.78c82d43.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/nodes/0.242aaaff.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/each.e59479a4.js"> <link rel="modulepreload" 
href="/docs/transformers/v4.34.0/en/_app/immutable/nodes/281.4ac28ac8.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/IconCopyLink.bedaa44d.js"><!-- HEAD_svelte-1phssyn_START --><meta name="hf:doc:metadata" content="{&quot;local&quot;:&quot;xlmv&quot;,&quot;sections&quot;:[{&quot;local&quot;:&quot;overview&quot;,&quot;title&quot;:&quot;Overview&quot;}],&quot;title&quot;:&quot;XLM-V&quot;}"><!-- HEAD_svelte-1phssyn_END --> <p></p> <h1 class="relative group"><a id="xlmv" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#xlmv"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-w2deu3">XLM-V</span></h1> <h2 class="relative group"><a id="overview" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#overview"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1jsw1pg">Overview</span></h2> <p data-svelte-h="svelte-hcobiv">XLM-V is multilingual language model with a one million token vocabulary trained on 2.5TB of data from Common Crawl (same as XLM-R). It was introduced in the <a href="https://arxiv.org/abs/2301.10472" rel="nofollow">XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models</a> paper by Davis Liang, Hila Gonen, Yuning Mao, Rui Hou, Naman Goyal, Marjan Ghazvininejad, Luke Zettlemoyer and Madian Khabsa.</p> <p data-svelte-h="svelte-rlu8sx">From the abstract of the XLM-V paper:</p> <p data-svelte-h="svelte-pvh528"><em>Large multilingual language models typically rely on a single vocabulary shared across 100+ languages. As these models have increased in parameter count and depth, vocabulary size has remained largely unchanged. This vocabulary bottleneck limits the representational capabilities of multilingual models like XLM-R. 
In this paper, we introduce a new approach for scaling to very large multilingual vocabularies by de-emphasizing token sharing between languages with little lexical overlap and assigning vocabulary capacity to achieve sufficient coverage for each individual language. Tokenizations using our vocabulary are typically more semantically meaningful and shorter compared to XLM-R. Leveraging this improved vocabulary, we train XLM-V, a multilingual language model with a one million token vocabulary. XLM-V outperforms XLM-R on every task we tested on ranging from natural language inference (XNLI), question answering (MLQA, XQuAD, TyDiQA), and named entity recognition (WikiAnn) to low-resource tasks (Americas NLI, MasakhaNER).</em></p> <p data-svelte-h="svelte-axv494">Tips:</p> <ul data-svelte-h="svelte-h4j5w0"><li>XLM-V is compatible with the XLM-RoBERTa model architecture, only model weights from <a href="https://github.com/facebookresearch/fairseq" rel="nofollow"><code>fairseq</code></a> library had to be converted.</li> <li>The <code>XLMTokenizer</code> implementation is used to load the vocab and performs tokenization.</li></ul> <p data-svelte-h="svelte-1tzrs8">A XLM-V (base size) model is available under the <a href="https://huggingface.co/facebook/xlm-v-base" rel="nofollow"><code>facebook/xlm-v-base</code></a> identifier.</p> <p data-svelte-h="svelte-1g4cthl">This model was contributed by <a href="https://huggingface.co/stefan-it" rel="nofollow">stefan-it</a>, including detailed experiments with XLM-V on downstream tasks. The experiments repository can be found <a href="https://github.com/stefan-it/xlm-v-experiments" rel="nofollow">here</a>.</p> <p></p> <script> { __sveltekit_1yybmhh = { assets: "/docs/transformers/v4.34.0/en", base: "/docs/transformers/v4.34.0/en", env: {} }; const element = document.currentScript.parentElement; const data = [null,null]; Promise.all([ import("/docs/transformers/v4.34.0/en/_app/immutable/entry/start.c2db227a.js"), import("/docs/transformers/v4.34.0/en/_app/immutable/entry/app.879d9b87.js") ]).then(([kit, app]) => { kit.start(app, element, { node_ids: [0, 281], data, form: null, error: null }); }); } </script> <!-- HTML_TAG_END --></div> <div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>XLM-RoBERTa-XL</a> <a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/xlnet" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">XLNet<span class="ml-2 translate-y-px">→</span></a></div></div></div> <div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" data-props="{&quot;chapter&quot;:{&quot;title&quot;:&quot;XLM-V&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;xlmv&quot;,&quot;url&quot;:&quot;#xlmv&quot;,&quot;sections&quot;:[{&quot;title&quot;:&quot;Overview&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;overview&quot;,&quot;url&quot;:&quot;#overview&quot;}]}}" data-target="SubSideMenu"> <nav class="hidden h-screen w-[270px] flex-none flex-col space-y-3 overflow-y-auto break-words border-l pt-24 pl-6 pr-10 pb-16 text-sm lg:flex 2xl:w-[305px]"><a href="#xlmv" class=" text-gray-400 transform 
2023-10-05T13:33:37.491Z
XLS-R
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/xls_r
# XLS-R

## Overview

The XLS-R model was proposed in [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli.

The abstract from the paper is the following:

_This paper presents XLS-R, a large-scale model for cross-lingual speech representation learning based on wav2vec 2.0. We train models with up to 2B parameters on nearly half a million hours of publicly available speech audio in 128 languages, an order of magnitude more public data than the largest known prior work. Our evaluation covers a wide range of tasks, domains, data regimes and languages, both high and low-resource. On the CoVoST-2 speech translation benchmark, we improve the previous state of the art by an average of 7.4 BLEU over 21 translation directions into English. For speech recognition, XLS-R improves over the best known prior work on BABEL, MLS, CommonVoice as well as VoxPopuli, lowering error rates by 14-34% relative on average. XLS-R also sets a new state of the art on VoxLingua107 language identification. Moreover, we show that with sufficient model size, cross-lingual pretraining can outperform English-only pretraining when translating English speech into other languages, a setting which favors monolingual pretraining. We hope XLS-R can help to improve speech processing tasks for many more languages of the world._

Tips:

- XLS-R is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
- The XLS-R model was trained using connectionist temporal classification (CTC), so the model output has to be decoded using [Wav2Vec2CTCTokenizer](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2CTCTokenizer); see the decoding sketch below.

Relevant checkpoints can be found under [https://huggingface.co/models?other=xls_r](https://huggingface.co/models?other=xls_r).

XLS-R’s architecture is based on the Wav2Vec2 model, so one can refer to [Wav2Vec2’s documentation page](wav2vec2).

The original code can be found [here](https://github.com/pytorch/fairseq/tree/master/fairseq/models/wav2vec).
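Since the tips only describe the CTC decoding requirement in prose, here is a minimal inference sketch. The checkpoint identifier is a placeholder (substitute any XLS-R model fine-tuned with a CTC head from the checkpoints linked above), and the dummy LibriSpeech dataset is used only to obtain a 16 kHz waveform; both are assumptions for illustration, not part of the original documentation.

```python
import torch
from datasets import load_dataset
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Placeholder checkpoint: substitute any XLS-R model fine-tuned with a CTC head
# (see https://huggingface.co/models?other=xls_r for candidates).
checkpoint = "<your-xls-r-ctc-checkpoint>"

processor = Wav2Vec2Processor.from_pretrained(checkpoint)
model = Wav2Vec2ForCTC.from_pretrained(checkpoint)

# XLS-R expects a float array containing the raw waveform, sampled at 16 kHz.
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
waveform = dataset[0]["audio"]["array"]

inputs = processor(waveform, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding: pick the most likely token per frame, then let the
# tokenizer collapse repeated tokens and blanks into the final transcription.
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
print(transcription)
```

For batched inputs, pass `padding=True` to the processor so that shorter waveforms are zero-padded before being stacked into a single tensor.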
Models&quot;,&quot;id&quot;:&quot;model_doc/encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encoder-decoder&quot;},{&quot;title&quot;:&quot;ERNIE&quot;,&quot;id&quot;:&quot;model_doc/ernie&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie&quot;},{&quot;title&quot;:&quot;ErnieM&quot;,&quot;id&quot;:&quot;model_doc/ernie_m&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie_m&quot;},{&quot;title&quot;:&quot;ESM&quot;,&quot;id&quot;:&quot;model_doc/esm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/esm&quot;},{&quot;title&quot;:&quot;Falcon&quot;,&quot;id&quot;:&quot;model_doc/falcon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/falcon&quot;},{&quot;title&quot;:&quot;FLAN-T5&quot;,&quot;id&quot;:&quot;model_doc/flan-t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-t5&quot;},{&quot;title&quot;:&quot;FLAN-UL2&quot;,&quot;id&quot;:&quot;model_doc/flan-ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-ul2&quot;},{&quot;title&quot;:&quot;FlauBERT&quot;,&quot;id&quot;:&quot;model_doc/flaubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flaubert&quot;},{&quot;title&quot;:&quot;FNet&quot;,&quot;id&quot;:&quot;model_doc/fnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fnet&quot;},{&quot;title&quot;:&quot;FSMT&quot;,&quot;id&quot;:&quot;model_doc/fsmt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fsmt&quot;},{&quot;title&quot;:&quot;Funnel Transformer&quot;,&quot;id&quot;:&quot;model_doc/funnel&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/funnel&quot;},{&quot;title&quot;:&quot;GPT&quot;,&quot;id&quot;:&quot;model_doc/openai-gpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/openai-gpt&quot;},{&quot;title&quot;:&quot;GPT Neo&quot;,&quot;id&quot;:&quot;model_doc/gpt_neo&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neo&quot;},{&quot;title&quot;:&quot;GPT NeoX&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox&quot;},{&quot;title&quot;:&quot;GPT NeoX Japanese&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox_japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese&quot;},{&quot;title&quot;:&quot;GPT-J&quot;,&quot;id&quot;:&quot;model_doc/gptj&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptj&quot;},{&quot;title&quot;:&quot;GPT2&quot;,&quot;id&quot;:&quot;model_doc/gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt2&quot;},{&quot;title&quot;:&quot;GPTBigCode&quot;,&quot;id&quot;:&quot;model_doc/gpt_bigcode&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode&quot;},{&quot;title&quot;:&quot;GPTSAN 
Japanese&quot;,&quot;id&quot;:&quot;model_doc/gptsan-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese&quot;},{&quot;title&quot;:&quot;GPTSw3&quot;,&quot;id&quot;:&quot;model_doc/gpt-sw3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt-sw3&quot;},{&quot;title&quot;:&quot;HerBERT&quot;,&quot;id&quot;:&quot;model_doc/herbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/herbert&quot;},{&quot;title&quot;:&quot;I-BERT&quot;,&quot;id&quot;:&quot;model_doc/ibert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ibert&quot;},{&quot;title&quot;:&quot;Jukebox&quot;,&quot;id&quot;:&quot;model_doc/jukebox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/jukebox&quot;},{&quot;title&quot;:&quot;LED&quot;,&quot;id&quot;:&quot;model_doc/led&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/led&quot;},{&quot;title&quot;:&quot;LLaMA&quot;,&quot;id&quot;:&quot;model_doc/llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama&quot;},{&quot;title&quot;:&quot;Llama2&quot;,&quot;id&quot;:&quot;model_doc/llama2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama2&quot;},{&quot;title&quot;:&quot;Longformer&quot;,&quot;id&quot;:&quot;model_doc/longformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longformer&quot;},{&quot;title&quot;:&quot;LongT5&quot;,&quot;id&quot;:&quot;model_doc/longt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longt5&quot;},{&quot;title&quot;:&quot;LUKE&quot;,&quot;id&quot;:&quot;model_doc/luke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/luke&quot;},{&quot;title&quot;:&quot;M2M100&quot;,&quot;id&quot;:&quot;model_doc/m2m_100&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/m2m_100&quot;},{&quot;title&quot;:&quot;MarianMT&quot;,&quot;id&quot;:&quot;model_doc/marian&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/marian&quot;},{&quot;title&quot;:&quot;MarkupLM&quot;,&quot;id&quot;:&quot;model_doc/markuplm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/markuplm&quot;},{&quot;title&quot;:&quot;MBart and 
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot;PLBart&quot;,&quot;id&quot;
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/m
odel_doc/wavlm&quot;},{&quot;title&quot;:&quot;Whisper&quot;,&quot;id&quot;:&quot;model_doc/whisper&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/whisper&quot;},{&quot;title&quot;:&quot;XLS-R&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/xls_r&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xls_r&quot;},{&quot;title&quot;:&quot;XLSR-Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/xlsr_wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2&quot;}]},{&quot;title&quot;:&quot;Multimodal models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALIGN&quot;,&quot;id&quot;:&quot;model_doc/align&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/align&quot;},{&quot;title&quot;:&quot;AltCLIP&quot;,&quot;id&quot;:&quot;model_doc/altclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/altclip&quot;},{&quot;title&quot;:&quot;BLIP&quot;,&quot;id&quot;:&quot;model_doc/blip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip&quot;},{&quot;title&quot;:&quot;BLIP-2&quot;,&quot;id&quot;:&quot;model_doc/blip-2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip-2&quot;},{&quot;title&quot;:&quot;BridgeTower&quot;,&quot;id&quot;:&quot;model_doc/bridgetower&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bridgetower&quot;},{&quot;title&quot;:&quot;BROS&quot;,&quot;id&quot;:&quot;model_doc/bros&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bros&quot;},{&quot;title&quot;:&quot;Chinese-CLIP&quot;,&quot;id&quot;:&quot;model_doc/chinese_clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/chinese_clip&quot;},{&quot;title&quot;:&quot;CLIP&quot;,&quot;id&quot;:&quot;model_doc/clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clip&quot;},{&quot;title&quot;:&quot;CLIPSeg&quot;,&quot;id&quot;:&quot;model_doc/clipseg&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clipseg&quot;},{&quot;title&quot;:&quot;Data2Vec&quot;,&quot;id&quot;:&quot;model_doc/data2vec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/data2vec&quot;},{&quot;title&quot;:&quot;DePlot&quot;,&quot;id&quot;:&quot;model_doc/deplot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deplot&quot;},{&quot;title&quot;:&quot;Donut&quot;,&quot;id&quot;:&quot;model_doc/donut&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/donut&quot;},{&quot;title&quot;:&quot;FLAVA&quot;,&quot;id&quot;:&quot;model_doc/flava&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flava&quot;},{&quot;title&quot;:&quot;GIT&quot;,&quot;id&quot;:&quot;model_doc/git&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/git&quot;},{&quot;title&quot;:&quot;GroupViT&quot;,&quot;id&quot;:&quot;model_doc/groupvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/groupvit&quot;},{&quot;title&quot;:&quot;IDEFICS&quot;,&quot;id&quot;:&quot;model_doc/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/idefics&quot;},{&quot;title&quot;:&quot;InstructBLIP&quot;,&quot;id&quot;:&quot;model_doc/instructblip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/instructblip&quot;},{&quot;title&quot;:&quot;LayoutLM&quot;,&quot;id&quot;:&quot;model_doc/layoutlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlm&quot;},{&quot;title&quot;:&qu
ot;LayoutLMV2&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv2&quot;},{&quot;title&quot;:&quot;LayoutLMV3&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv3&quot;},{&quot;title&quot;:&quot;LayoutXLM&quot;,&quot;id&quot;:&quot;model_doc/layoutxlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutxlm&quot;},{&quot;title&quot;:&quot;LiLT&quot;,&quot;id&quot;:&quot;model_doc/lilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lilt&quot;},{&quot;title&quot;:&quot;LXMERT&quot;,&quot;id&quot;:&quot;model_doc/lxmert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lxmert&quot;},{&quot;title&quot;:&quot;MatCha&quot;,&quot;id&quot;:&quot;model_doc/matcha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/matcha&quot;},{&quot;title&quot;:&quot;MGP-STR&quot;,&quot;id&quot;:&quot;model_doc/mgp-str&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mgp-str&quot;},{&quot;title&quot;:&quot;Nougat&quot;,&quot;id&quot;:&quot;model_doc/nougat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nougat&quot;},{&quot;title&quot;:&quot;OneFormer&quot;,&quot;id&quot;:&quot;model_doc/oneformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/oneformer&quot;},{&quot;title&quot;:&quot;OWL-ViT&quot;,&quot;id&quot;:&quot;model_doc/owlvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/owlvit&quot;},{&quot;title&quot;:&quot;Perceiver&quot;,&quot;id&quot;:&quot;model_doc/perceiver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/perceiver&quot;},{&quot;title&quot;:&quot;Pix2Struct&quot;,&quot;id&quot;:&quot;model_doc/pix2struct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pix2struct&quot;},{&quot;title&quot;:&quot;Segment Anything&quot;,&quot;id&quot;:&quot;model_doc/sam&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sam&quot;},{&quot;title&quot;:&quot;Speech Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/speech-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder&quot;},{&quot;title&quot;:&quot;TAPAS&quot;,&quot;id&quot;:&quot;model_doc/tapas&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapas&quot;},{&quot;title&quot;:&quot;TrOCR&quot;,&quot;id&quot;:&quot;model_doc/trocr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trocr&quot;},{&quot;title&quot;:&quot;TVLT&quot;,&quot;id&quot;:&quot;model_doc/tvlt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tvlt&quot;},{&quot;title&quot;:&quot;ViLT&quot;,&quot;id&quot;:&quot;model_doc/vilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vilt&quot;},{&quot;title&quot;:&quot;Vision Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/vision-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder&quot;},{&quot;title&quot;:&quot;Vision Text Dual 
Encoder&quot;,&quot;id&quot;:&quot;model_doc/vision-text-dual-encoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder&quot;},{&quot;title&quot;:&quot;VisualBERT&quot;,&quot;id&quot;:&quot;model_doc/visual_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/visual_bert&quot;},{&quot;title&quot;:&quot;X-CLIP&quot;,&quot;id&quot;:&quot;model_doc/xclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xclip&quot;}]},{&quot;title&quot;:&quot;Reinforcement learning models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Decision Transformer&quot;,&quot;id&quot;:&quot;model_doc/decision_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/decision_transformer&quot;},{&quot;title&quot;:&quot;Trajectory Transformer&quot;,&quot;id&quot;:&quot;model_doc/trajectory_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer&quot;}]},{&quot;title&quot;:&quot;Time series models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Autoformer&quot;,&quot;id&quot;:&quot;model_doc/autoformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/autoformer&quot;},{&quot;title&quot;:&quot;Informer&quot;,&quot;id&quot;:&quot;model_doc/informer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/informer&quot;},{&quot;title&quot;:&quot;Time Series Transformer&quot;,&quot;id&quot;:&quot;model_doc/time_series_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/time_series_transformer&quot;}]},{&quot;title&quot;:&quot;Graph models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Graphormer&quot;,&quot;id&quot;:&quot;model_doc/graphormer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/graphormer&quot;}]}]},{&quot;title&quot;:&quot;Internal Helpers&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Custom Layers and Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/modeling_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/modeling_utils&quot;},{&quot;title&quot;:&quot;Utilities for pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/pipelines_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/pipelines_utils&quot;},{&quot;title&quot;:&quot;Utilities for Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/tokenization_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/tokenization_utils&quot;},{&quot;title&quot;:&quot;Utilities for Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/trainer_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/trainer_utils&quot;},{&quot;title&quot;:&quot;Utilities for Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/generation_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/generation_utils&quot;},{&quot;title&quot;:&quot;Utilities for Image Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/image_processing_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/image_processing_utils&quot;},{&quot;title&quot;:&quot;Utilities for Audio 
processing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/audio_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/audio_utils&quot;},{&quot;title&quot;:&quot;General Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/file_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/file_utils&quot;},{&quot;title&quot;:&quot;Utilities for Time Series&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/time_series_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/time_series_utils&quot;}]}]}],&quot;chapterId&quot;:&quot;model_doc/xls_r&quot;,&quot;docType&quot;:&quot;docs&quot;,&quot;isLoggedIn&quot;:false,&quot;lang&quot;:&quot;en&quot;,&quot;langs&quot;:[&quot;de&quot;,&quot;en&quot;,&quot;es&quot;,&quot;fr&quot;,&quot;it&quot;,&quot;ko&quot;,&quot;pt&quot;,&quot;zh&quot;],&quot;library&quot;:&quot;transformers&quot;,&quot;theme&quot;:&quot;light&quot;,&quot;version&quot;:&quot;v4.34.0&quot;,&quot;versions&quot;:[{&quot;version&quot;:&quot;main&quot;},{&quot;version&quot;:&quot;v4.34.0&quot;},{&quot;version&quot;:&quot;v4.33.3&quot;},{&quot;version&quot;:&quot;v4.33.2&quot;},{&quot;version&quot;:&quot;v4.33.0&quot;},{&quot;version&quot;:&quot;v4.32.1&quot;},{&quot;version&quot;:&quot;v4.32.0&quot;},{&quot;version&quot;:&quot;v4.31.0&quot;},{&quot;version&quot;:&quot;v4.30.0&quot;},{&quot;version&quot;:&quot;v4.29.1&quot;},{&quot;version&quot;:&quot;v4.29.0&quot;},{&quot;version&quot;:&quot;v4.28.1&quot;},{&quot;version&quot;:&quot;v4.28.0&quot;},{&quot;version&quot;:&quot;v4.27.2&quot;},{&quot;version&quot;:&quot;v4.27.1&quot;},{&quot;version&quot;:&quot;v4.27.0&quot;},{&quot;version&quot;:&quot;v4.26.1&quot;},{&quot;version&quot;:&quot;v4.26.0&quot;},{&quot;version&quot;:&quot;v4.25.1&quot;},{&quot;version&quot;:&quot;v4.24.0&quot;},{&quot;version&quot;:&quot;v4.23.1&quot;},{&quot;version&quot;:&quot;v4.23.0&quot;},{&quot;version&quot;:&quot;v4.22.2&quot;},{&quot;version&quot;:&quot;v4.22.1&quot;},{&quot;version&quot;:&quot;v4.22.0&quot;},{&quot;version&quot;:&quot;v4.21.3&quot;},{&quot;version&quot;:&quot;v4.21.2&quot;},{&quot;version&quot;:&quot;v4.21.1&quot;},{&quot;version&quot;:&quot;v4.21.0&quot;},{&quot;version&quot;:&quot;v4.20.1&quot;},{&quot;version&quot;:&quot;v4.20.0&quot;},{&quot;version&quot;:&quot;v4.19.4&quot;},{&quot;version&quot;:&quot;v4.19.3&quot;},{&quot;version&quot;:&quot;v4.19.2&quot;},{&quot;version&quot;:&quot;v4.19.0&quot;},{&quot;version&quot;:&quot;v4.18.0&quot;},{&quot;version&quot;:&quot;v4.17.0&quot;},{&quot;version&quot;:&quot;v4.16.2&quot;},{&quot;version&quot;:&quot;v4.16.1&quot;},{&quot;version&quot;:&quot;v4.16.0&quot;},{&quot;version&quot;:&quot;v4.15.0&quot;},{&quot;version&quot;:&quot;v4.14.1&quot;},{&quot;version&quot;:&quot;v4.13.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.5&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.4&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.10.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1
0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.10.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.0.0&quot;},{&quot;version&quot;:&quot;doc-b
uilder-html&quot;}],&quot;title&quot;:&quot;XLS-R&quot;}" data-target="SideMenu"> <div class="z-2 w-full flex-none lg:block lg:h-screen lg:w-[270px] 2xl:w-[300px] false"><div class="shadow-alternate flex h-16 w-full items-center rounded-b-xl border-b bg-white text-lg leading-tight lg:hidden"> <div class="flex flex-1 cursor-pointer flex-col justify-center self-stretch pl-6"><p class="text-sm text-gray-400 first-letter:capitalize">Transformers documentation </p> <div class="flex items-center"><p class="font-semibold">XLS-R</p> <svg class="text-xl false" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></div></div> <button class="hover:shadow-alternate group ml-auto mr-6 inline-flex flex-none cursor-pointer rounded-xl border p-2"><svg class="text-gray-500 group-hover:text-gray-700" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg></button></div> <div class="hidden h-32 flex-col justify-between border-r border-b bg-white bg-gradient-to-r p-4 lg:flex from-orange-50 to-white dark:from-gray-900 dark:to-gray-950"><div class="relative "> <button class=" " type="button"> <h1 class="flex items-center text-lg font-bold leading-tight first-letter:capitalize"><div class="mr-1.5 h-1.5 w-1.5 rounded-full bg-orange-500 flex-none"></div> Transformers <span><svg class="opacity-70 " xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></span></h1> </button> </div> <button class="shadow-alternate flex w-full items-center rounded-full border bg-white px-2 py-1 text-left text-sm text-gray-400 ring-indigo-200 hover:bg-indigo-50 hover:ring-2 dark:border-gray-700 dark:ring-yellow-600 dark:hover:bg-gray-900 dark:hover:text-yellow-500"><svg class="flex-none mr-1.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg> <div>Search documentation</div> </button> <div class="flex items-center"> <select class="form-input mr-1 !mt-0 !w-20 rounded !border border-gray-200 p-1 text-xs uppercase dark:!text-gray-400"><option value="0">main</option><option value="1" selected="">v4.34.0</option><option value="2">v4.33.3</option><option value="3">v4.32.1</option><option value="4">v4.31.0</option><option value="5">v4.30.0</option><option value="6">v4.29.1</option><option value="7">v4.28.1</option><option value="8">v4.27.2</option><option value="9">v4.26.1</option><option value="10">v4.25.1</option><option value="11">v4.24.0</option><option value="12">v4.23.1</option><option value="13">v4.22.2</option><option value="14">v4.21.3</option><option value="15">v4.20.1</option><option 
value="16">v4.19.4</option><option value="17">v4.18.0</option><option value="18">v4.17.0</option><option value="19">v4.16.2</option><option value="20">v4.15.0</option><option value="21">v4.14.1</option><option value="22">v4.13.0</option><option value="23">v4.12.5</option><option value="24">v4.11.3</option><option value="25">v4.10.1</option><option value="26">v4.9.2</option><option value="27">v4.8.2</option><option value="28">v4.7.0</option><option value="29">v4.6.0</option><option value="30">v4.5.1</option><option value="31">v4.4.2</option><option value="32">v4.3.3</option><option value="33">v4.2.2</option><option value="34">v4.1.1</option><option value="35">v4.0.1</option><option value="36">v3.5.1</option><option value="37">v3.4.0</option><option value="38">v3.3.1</option><option value="39">v3.2.0</option><option value="40">v3.1.0</option><option value="41">v3.0.2</option><option value="42">v2.11.0</option><option value="43">v2.10.0</option><option value="44">v2.9.1</option><option value="45">v2.8.0</option><option value="46">v2.7.0</option><option value="47">v2.6.0</option><option value="48">v2.5.1</option><option value="49">v2.4.1</option><option value="50">v2.3.0</option><option value="51">v2.2.2</option><option value="52">v2.1.1</option><option value="53">v2.0.0</option><option value="54">v1.2.0</option><option value="55">v1.1.0</option><option value="56">v1.0.0</option><option value="57">doc-builder-html</option></select> <select class="form-input mr-1 rounded border-gray-200 p-1 text-xs dark:!text-gray-400 !w-12 !mt-0 !border"><option value="de">DE</option><option value="en" selected="">EN</option><option value="es">ES</option><option value="fr">FR</option><option value="it">IT</option><option value="ko">KO</option><option value="pt">PT</option><option value="zh">ZH</option></select> <div class="relative inline-block"> <button class="rounded-full border border-gray-100 py-1 pl-2 pr-0.5 flex items-center text-sm text-gray-500 bg-white hover:bg-yellow-50 hover:border-yellow-200 dark:hover:bg-gray-800 dark:hover:border-gray-950 " type="button"> <svg class="mr-1.5 text-yellow-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24" fill="currentColor"><path d="M6.05 4.14l-.39-.39a.993.993 0 0 0-1.4 0l-.01.01a.984.984 0 0 0 0 1.4l.39.39c.39.39 1.01.39 1.4 0l.01-.01a.984.984 0 0 0 0-1.4zM3.01 10.5H1.99c-.55 0-.99.44-.99.99v.01c0 .55.44.99.99.99H3c.56.01 1-.43 1-.98v-.01c0-.56-.44-1-.99-1zm9-9.95H12c-.56 0-1 .44-1 .99v.96c0 .55.44.99.99.99H12c.56.01 1-.43 1-.98v-.97c0-.55-.44-.99-.99-.99zm7.74 3.21c-.39-.39-1.02-.39-1.41-.01l-.39.39a.984.984 0 0 0 0 1.4l.01.01c.39.39 1.02.39 1.4 0l.39-.39a.984.984 0 0 0 0-1.4zm-1.81 15.1l.39.39a.996.996 0 1 0 1.41-1.41l-.39-.39a.993.993 0 0 0-1.4 0c-.4.4-.4 1.02-.01 1.41zM20 11.49v.01c0 .55.44.99.99.99H22c.55 0 .99-.44.99-.99v-.01c0-.55-.44-.99-.99-.99h-1.01c-.55 0-.99.44-.99.99zM12 5.5c-3.31 0-6 2.69-6 6s2.69 6 6 6s6-2.69 6-6s-2.69-6-6-6zm-.01 16.95H12c.55 0 .99-.44.99-.99v-.96c0-.55-.44-.99-.99-.99h-.01c-.55 0-.99.44-.99.99v.96c0 .55.44.99.99.99zm-7.74-3.21c.39.39 1.02.39 1.41 0l.39-.39a.993.993 0 0 0 0-1.4l-.01-.01a.996.996 0 0 0-1.41 0l-.39.39c-.38.4-.38 1.02.01 1.41z"></path></svg> </button> </div> <a href="https://github.com/huggingface/transformers" class="group ml-auto text-xs text-gray-500 hover:text-gray-700 hover:underline dark:hover:text-gray-300"><svg class="inline-block text-gray-500 
group-hover:text-gray-700 dark:group-hover:text-gray-300 mr-1.5 -mt-1 w-4 h-4" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1.03em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 250"><path d="M128.001 0C57.317 0 0 57.307 0 128.001c0 56.554 36.676 104.535 87.535 121.46c6.397 1.185 8.746-2.777 8.746-6.158c0-3.052-.12-13.135-.174-23.83c-35.61 7.742-43.124-15.103-43.124-15.103c-5.823-14.795-14.213-18.73-14.213-18.73c-11.613-7.944.876-7.78.876-7.78c12.853.902 19.621 13.19 19.621 13.19c11.417 19.568 29.945 13.911 37.249 10.64c1.149-8.272 4.466-13.92 8.127-17.116c-28.431-3.236-58.318-14.212-58.318-63.258c0-13.975 5-25.394 13.188-34.358c-1.329-3.224-5.71-16.242 1.24-33.874c0 0 10.749-3.44 35.21 13.121c10.21-2.836 21.16-4.258 32.038-4.307c10.878.049 21.837 1.47 32.066 4.307c24.431-16.56 35.165-13.12 35.165-13.12c6.967 17.63 2.584 30.65 1.255 33.873c8.207 8.964 13.173 20.383 13.173 34.358c0 49.163-29.944 59.988-58.447 63.157c4.591 3.972 8.682 11.762 8.682 23.704c0 17.126-.148 30.91-.148 35.126c0 3.407 2.304 7.398 8.792 6.14C219.37 232.5 256 184.537 256 128.002C256 57.307 198.691 0 128.001 0zm-80.06 182.34c-.282.636-1.283.827-2.194.39c-.929-.417-1.45-1.284-1.15-1.922c.276-.655 1.279-.838 2.205-.399c.93.418 1.46 1.293 1.139 1.931zm6.296 5.618c-.61.566-1.804.303-2.614-.591c-.837-.892-.994-2.086-.375-2.66c.63-.566 1.787-.301 2.626.591c.838.903 1 2.088.363 2.66zm4.32 7.188c-.785.545-2.067.034-2.86-1.104c-.784-1.138-.784-2.503.017-3.05c.795-.547 2.058-.055 2.861 1.075c.782 1.157.782 2.522-.019 3.08zm7.304 8.325c-.701.774-2.196.566-3.29-.49c-1.119-1.032-1.43-2.496-.726-3.27c.71-.776 2.213-.558 3.315.49c1.11 1.03 1.45 2.505.701 3.27zm9.442 2.81c-.31 1.003-1.75 1.459-3.199 1.033c-1.448-.439-2.395-1.613-2.103-2.626c.301-1.01 1.747-1.484 3.207-1.028c1.446.436 2.396 1.602 2.095 2.622zm10.744 1.193c.036 1.055-1.193 1.93-2.715 1.95c-1.53.034-2.769-.82-2.786-1.86c0-1.065 1.202-1.932 2.733-1.958c1.522-.03 2.768.818 2.768 1.868zm10.555-.405c.182 1.03-.875 2.088-2.387 2.37c-1.485.271-2.861-.365-3.05-1.386c-.184-1.056.893-2.114 2.376-2.387c1.514-.263 2.868.356 3.061 1.403z" fill="currentColor"></path></svg> </a></div></div> <nav class="top-32 hidden lg:flex absolute bottom-0 left-0 w-full flex-col overflow-y-auto border-r px-4 pt-3 pb-16 text-[0.95rem] lg:w-[270px] 2xl:w-[300px]"> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Get started<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/index"><!-- HTML_TAG_START -->🤗 Transformers<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/quicktour"><!-- HTML_TAG_START -->Quick tour<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black 
dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/installation"><!-- HTML_TAG_START -->Installation<!-- HTML_TAG_END --> </a> </div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Tutorials<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pipeline_tutorial"><!-- HTML_TAG_START -->Run inference with pipelines<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/autoclass_tutorial"><!-- HTML_TAG_START -->Write portable code with AutoClass<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/preprocessing"><!-- HTML_TAG_START -->Preprocess data<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/training"><!-- HTML_TAG_START -->Fine-tune a pretrained model<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/run_scripts"><!-- HTML_TAG_START -->Train with a script<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/accelerate"><!-- HTML_TAG_START -->Set up distributed training with 🤗 Accelerate<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/peft"><!-- HTML_TAG_START -->Load and train adapters with 🤗 PEFT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/model_sharing"><!-- HTML_TAG_START -->Share your model<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/transformers_agents"><!-- HTML_TAG_START -->Agents<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/llm_tutorial"><!-- HTML_TAG_START -->Generation with LLMs<!-- HTML_TAG_END --> </a> </div> <div class="group flex cursor-pointer items-center 
pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Task Guides<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="flex flex-col"> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Natural Language Processing<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Audio<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Computer Vision<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Multimodal<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Generation<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Prompting<!-- HTML_TAG_END --></span> </span></span> </div></div> </div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Developer guides<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" 
href="/docs/transformers/v4.34.0/en/fast_tokenizers"><!-- HTML_TAG_START -->Use fast tokenizers from 🤗 Tokenizers<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/multilingual"><!-- HTML_TAG_START -->Run inference with multilingual models<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/create_a_model"><!-- HTML_TAG_START -->Use model-specific APIs<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/custom_models"><!-- HTML_TAG_START -->Share a custom model<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/chat_templating"><!-- HTML_TAG_START -->Templates for chat models<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/sagemaker"><!-- HTML_TAG_START -->Run training on Amazon SageMaker<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/serialization"><!-- HTML_TAG_START -->Export to ONNX<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tflite"><!-- HTML_TAG_START -->Export to TFLite<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/torchscript"><!-- HTML_TAG_START -->Export to TorchScript<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/benchmarks"><!-- HTML_TAG_START -->Benchmarks<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/notebooks"><!-- HTML_TAG_START -->Notebooks with examples<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/community"><!-- HTML_TAG_START -->Community resources<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/custom_tools"><!-- HTML_TAG_START -->Custom Tools and Prompts<!-- HTML_TAG_END --> </a><a 
data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/troubleshooting"><!-- HTML_TAG_START -->Troubleshoot<!-- HTML_TAG_END --> </a> </div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Performance and scalability<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/performance"><!-- HTML_TAG_START -->Overview<!-- HTML_TAG_END --> </a> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Efficient training techniques<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_gpu_one"><!-- HTML_TAG_START -->Methods and tools for efficient training on a single GPU<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_gpu_many"><!-- HTML_TAG_START -->Multiple GPUs and parallelism<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_cpu"><!-- HTML_TAG_START -->Efficient training on CPU<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_cpu_many"><!-- HTML_TAG_START -->Distributed CPU training<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_tpu"><!-- HTML_TAG_START -->Training on TPUs<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_tpu_tf"><!-- HTML_TAG_START -->Training on TPU with TensorFlow<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_special"><!-- HTML_TAG_START -->Training 
on Specialized Hardware<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_hardware"><!-- HTML_TAG_START -->Custom hardware for training<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/hpo_train"><!-- HTML_TAG_START -->Hyperparameter Search using Trainer API<!-- HTML_TAG_END --> </a> </div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Optimizing inference<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_cpu"><!-- HTML_TAG_START -->Inference on CPU<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_gpu_one"><!-- HTML_TAG_START -->Inference on one GPU<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_gpu_many"><!-- HTML_TAG_START -->Inference on many GPUs<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_special"><!-- HTML_TAG_START -->Inference on Specialized Hardware<!-- HTML_TAG_END --> </a> </div><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/big_models"><!-- HTML_TAG_START -->Instantiating a big model<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/debugging"><!-- HTML_TAG_START -->Troubleshooting<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tf_xla"><!-- HTML_TAG_START -->XLA Integration for TensorFlow Models<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/perf_torch_compile"><!-- HTML_TAG_START -->Optimize inference using `torch.compile()`<!-- HTML_TAG_END --> </a> </div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold 
uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Contribute<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/contributing"><!-- HTML_TAG_START -->How to contribute to transformers?<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/add_new_model"><!-- HTML_TAG_START -->How to add a model to 🤗 Transformers?<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/add_tensorflow_model"><!-- HTML_TAG_START -->How to convert a 🤗 Transformers model to TensorFlow?<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/add_new_pipeline"><!-- HTML_TAG_START -->How to add a pipeline to 🤗 Transformers?<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/testing"><!-- HTML_TAG_START -->Testing<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pr_checks"><!-- HTML_TAG_START -->Checks on a Pull Request<!-- HTML_TAG_END --> </a> </div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Conceptual guides<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/philosophy"><!-- HTML_TAG_START -->Philosophy<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/glossary"><!-- HTML_TAG_START -->Glossary<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/task_summary"><!-- HTML_TAG_START -->What 🤗 Transformers can do<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 
hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tasks_explained"><!-- HTML_TAG_START -->How 🤗 Transformers solve tasks<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/model_summary"><!-- HTML_TAG_START -->The Transformer model family<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tokenizer_summary"><!-- HTML_TAG_START -->Summary of the tokenizers<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/attention"><!-- HTML_TAG_START -->Attention mechanisms<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pad_truncation"><!-- HTML_TAG_START -->Padding and truncation<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/bertology"><!-- HTML_TAG_START -->BERTology<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/perplexity"><!-- HTML_TAG_START -->Perplexity of fixed-length models<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pipeline_webserver"><!-- HTML_TAG_START -->Pipelines for webserver inference<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/model_memory_anatomy"><!-- HTML_TAG_START -->Model training anatomy<!-- HTML_TAG_END --> </a> </div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->API<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="flex flex-col"> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Main Classes<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px 
hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/agent"><!-- HTML_TAG_START -->Agents and Tools<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/model_doc/auto"><!-- HTML_TAG_START -->Auto Classes<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/callback"><!-- HTML_TAG_START -->Callbacks<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/configuration"><!-- HTML_TAG_START -->Configuration<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/data_collator"><!-- HTML_TAG_START -->Data Collator<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/keras_callbacks"><!-- HTML_TAG_START -->Keras callbacks<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/logging"><!-- HTML_TAG_START -->Logging<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/model"><!-- HTML_TAG_START -->Models<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/text_generation"><!-- HTML_TAG_START -->Text Generation<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/onnx"><!-- HTML_TAG_START -->ONNX<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/optimizer_schedules"><!-- HTML_TAG_START -->Optimization<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/output"><!-- HTML_TAG_START -->Model outputs<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/pipelines"><!-- HTML_TAG_START 
-->Pipelines<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/processors"><!-- HTML_TAG_START -->Processors<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/quantization"><!-- HTML_TAG_START -->Quantization<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/tokenizer"><!-- HTML_TAG_START -->Tokenizer<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/trainer"><!-- HTML_TAG_START -->Trainer<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/deepspeed"><!-- HTML_TAG_START -->DeepSpeed Integration<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/feature_extractor"><!-- HTML_TAG_START -->Feature Extractor<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/image_processor"><!-- HTML_TAG_START -->Image Processor<!-- HTML_TAG_END --> </a> </div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="flex flex-col"> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Text models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Vision models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 
after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Audio models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer"><!-- HTML_TAG_START -->Audio Spectrogram Transformer<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bark"><!-- HTML_TAG_START -->Bark<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/clap"><!-- HTML_TAG_START -->CLAP<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/encodec"><!-- HTML_TAG_START -->EnCodec<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/hubert"><!-- HTML_TAG_START -->Hubert<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mctct"><!-- HTML_TAG_START -->MCTCT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mms"><!-- HTML_TAG_START -->MMS<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/musicgen"><!-- HTML_TAG_START -->MusicGen<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/pop2piano"><!-- HTML_TAG_START -->Pop2Piano<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/sew"><!-- HTML_TAG_START -->SEW<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/sew-d"><!-- HTML_TAG_START -->SEW-D<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/speech_to_text"><!-- HTML_TAG_START 
-->Speech2Text<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2"><!-- HTML_TAG_START -->Speech2Text2<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/speecht5"><!-- HTML_TAG_START -->SpeechT5<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/unispeech"><!-- HTML_TAG_START -->UniSpeech<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/unispeech-sat"><!-- HTML_TAG_START -->UniSpeech-SAT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/vits"><!-- HTML_TAG_START -->VITS<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2"><!-- HTML_TAG_START -->Wav2Vec2<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer"><!-- HTML_TAG_START -->Wav2Vec2-Conformer<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme"><!-- HTML_TAG_START -->Wav2Vec2Phoneme<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/wavlm"><!-- HTML_TAG_START -->WavLM<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/whisper"><!-- HTML_TAG_START -->Whisper<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="rounded-xl bg-gradient-to-br from-black to-gray-900 py-1 pr-2 pl-2 text-white first:mt-1 last:mb-4 dark:from-gray-800 dark:to-gray-900 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xls_r"><!-- HTML_TAG_START -->XLS-R<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2"><!-- HTML_TAG_START -->XLSR-Wav2Vec2<!-- HTML_TAG_END --> </a> </div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 
dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Multimodal models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Reinforcement learning models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Time series models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Graph models<!-- HTML_TAG_END --></span> </span></span> </div></div> </div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Internal Helpers<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/modeling_utils"><!-- HTML_TAG_START -->Custom Layers and Utilities<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/pipelines_utils"><!-- HTML_TAG_START -->Utilities for pipelines<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/tokenization_utils"><!-- HTML_TAG_START -->Utilities for Tokenizers<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/trainer_utils"><!-- HTML_TAG_START -->Utilities for Trainer<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/generation_utils"><!-- HTML_TAG_START -->Utilities for Generation<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" 
class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/image_processing_utils"><!-- HTML_TAG_START -->Utilities for Image Processors<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/audio_utils"><!-- HTML_TAG_START -->Utilities for Audio processing<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/file_utils"><!-- HTML_TAG_START -->General Utilities<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/time_series_utils"><!-- HTML_TAG_START -->Utilities for Time Series<!-- HTML_TAG_END --> </a> </div> </div></nav></div></div></div> <div class="z-1 min-w-0 flex-1"> <div class="px-6 pt-6 md:px-12 md:pt-16 md:pb-16"><div class="max-w-4xl mx-auto mb-10"><div class="relative overflow-hidden rounded-xl bg-gradient-to-br from-orange-300/10 py-5 px-4 ring-1 ring-orange-100/70 md:px-6 md:py-8"><img alt="Hugging Face's logo" class="absolute -right-6 -bottom-6 w-28 -rotate-45 md:hidden" src="/front/assets/huggingface_logo-noborder.svg"> <div class="mb-2 text-2xl font-bold dark:text-gray-200 md:mb-0">Join the Hugging Face community</div> <p class="mb-4 text-lg text-gray-400 dark:text-gray-300 md:mb-8">and get access to the augmented documentation experience </p> <div class="mb-8 hidden space-y-4 md:block xl:flex xl:space-y-0 xl:space-x-6"><div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-indigo-100 to-indigo-100/20 dark:to-indigo-100"><svg class="text-indigo-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Collaborate on models, datasets and Spaces </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-orange-100 to-orange-100/20 dark:to-orange-50"><svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" class="text-xl text-yellow-400" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M11 15H6l7-14v8h5l-7 14v-8z" 
fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Faster examples with accelerated inference </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-gray-500/10 to-gray-500/5"><svg class="text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M14.9804 3C14.9217 3.0002 14.8631 3.00555 14.8054 3.016C11.622 3.58252 8.76073 5.30669 6.77248 7.85653C4.78422 10.4064 3.80955 13.6016 4.03612 16.8271C4.26268 20.0525 5.67447 23.0801 7.99967 25.327C10.3249 27.5738 13.3991 28.8811 16.6304 28.997C16.7944 29.003 16.9584 28.997 17.1204 28.997C19.2193 28.9984 21.2877 28.4943 23.1507 27.5274C25.0137 26.5605 26.6164 25.1592 27.8234 23.442C27.9212 23.294 27.9783 23.1229 27.9889 22.9458C27.9995 22.7687 27.9633 22.592 27.884 22.4333C27.8046 22.2747 27.6848 22.1397 27.5367 22.0421C27.3887 21.9444 27.2175 21.8875 27.0404 21.877C25.0426 21.7017 23.112 21.0693 21.3976 20.0288C19.6832 18.9884 18.231 17.5676 17.1533 15.8764C16.0756 14.1852 15.4011 12.2688 15.1822 10.2754C14.9632 8.28193 15.2055 6.26484 15.8904 4.38C15.9486 4.22913 15.97 4.06652 15.9527 3.90572C15.9354 3.74492 15.8799 3.59059 15.7909 3.45557C15.7019 3.32055 15.5819 3.20877 15.4409 3.12952C15.2999 3.05028 15.142 3.00587 14.9804 3Z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Switch between documentation themes </div></div></div> <div class="flex items-center space-x-2.5"><a href="/join"><button class="rounded-lg bg-white bg-gradient-to-br from-gray-100/20 to-gray-200/60 py-1.5 px-5 font-semibold text-gray-700 shadow-sm ring-1 ring-gray-300/60 hover:to-gray-100/70 hover:ring-gray-300/30 active:shadow-inner">Sign Up</button></a> <p class="text-gray-500 dark:text-gray-300">to get started</p></div></div></div> <div class="prose-doc prose relative mx-auto max-w-4xl break-words"><!-- HTML_TAG_START --> <link href="/docs/transformers/v4.34.0/en/_app/immutable/assets/0.e3b0c442.css" rel="modulepreload"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/entry/start.c2db227a.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/scheduler.9bc65507.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/singletons.e3057404.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/index.3b203c72.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/paths.e7de6301.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/entry/app.879d9b87.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/index.78c82d43.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/nodes/0.242aaaff.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/each.e59479a4.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/nodes/283.6a32fc71.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/IconCopyLink.bedaa44d.js"><!-- HEAD_svelte-1phssyn_START --><meta name="hf:doc:metadata" 
content="{&quot;local&quot;:&quot;xlsr&quot;,&quot;sections&quot;:[{&quot;local&quot;:&quot;overview&quot;,&quot;title&quot;:&quot;Overview&quot;}],&quot;title&quot;:&quot;XLS-R&quot;}"><!-- HEAD_svelte-1phssyn_END --> <p></p> <h1 class="relative group"><a id="xlsr" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#xlsr"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-xlu39p">XLS-R</span></h1> <h2 class="relative group"><a id="overview" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#overview"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1jsw1pg">Overview</span></h2> <p data-svelte-h="svelte-1nyy59j">The XLS-R model was proposed in <a href="https://arxiv.org/abs/2111.09296" rel="nofollow">XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale</a> by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli.</p> <p data-svelte-h="svelte-vfdo9a">The abstract from the paper is the following:</p> <p data-svelte-h="svelte-ul075o"><em>This paper presents XLS-R, a large-scale model for cross-lingual speech representation learning based on wav2vec 2.0. We train models with up to 2B parameters on nearly half a million hours of publicly available speech audio in 128 languages, an order of magnitude more public data than the largest known prior work. Our evaluation covers a wide range of tasks, domains, data regimes and languages, both high and low-resource. On the CoVoST-2 speech translation benchmark, we improve the previous state of the art by an average of 7.4 BLEU over 21 translation directions into English. 
For speech recognition, XLS-R improves over the best known prior work on BABEL, MLS, CommonVoice as well as VoxPopuli, lowering error rates by 14-34% relative on average. XLS-R also sets a new state of the art on VoxLingua107 language identification. Moreover, we show that with sufficient model size, cross-lingual pretraining can outperform English-only pretraining when translating English speech into other languages, a setting which favors monolingual pretraining. We hope XLS-R can help to improve speech processing tasks for many more languages of the world.</em></p> <p data-svelte-h="svelte-axv494">Tips:</p> <ul data-svelte-h="svelte-62ifcc"><li>XLS-R is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.</li> <li>XLS-R model was trained using connectionist temporal classification (CTC) so the model output has to be decoded using <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2CTCTokenizer">Wav2Vec2CTCTokenizer</a>.</li></ul> <p data-svelte-h="svelte-p0mu8e">Relevant checkpoints can be found under <a href="https://huggingface.co/models?other=xls_r" rel="nofollow">https://huggingface.co/models?other=xls_r</a>.</p> <p data-svelte-h="svelte-fxuwrw">XLS-R’s architecture is based on the Wav2Vec2 model, so one can refer to <a href="wav2vec2">Wav2Vec2’s documentation page</a>.</p> <p data-svelte-h="svelte-12gzw10">The original code can be found <a href="https://github.com/pytorch/fairseq/tree/master/fairseq/models/wav2vec" rel="nofollow">here</a>.</p> <p></p> <script> { __sveltekit_1yybmhh = { assets: "/docs/transformers/v4.34.0/en", base: "/docs/transformers/v4.34.0/en", env: {} }; const element = document.currentScript.parentElement; const data = [null,null]; Promise.all([ import("/docs/transformers/v4.34.0/en/_app/immutable/entry/start.c2db227a.js"), import("/docs/transformers/v4.34.0/en/_app/immutable/entry/app.879d9b87.js") ]).then(([kit, app]) => { kit.start(app, element, { node_ids: [0, 283], data, form: null, error: null }); }); } </script> <!-- HTML_TAG_END --></div> <div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/whisper" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>Whisper</a> <a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">XLSR-Wav2Vec2<span class="ml-2 translate-y-px">→</span></a></div></div></div> <div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" data-props="{&quot;chapter&quot;:{&quot;title&quot;:&quot;XLS-R&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;xlsr&quot;,&quot;url&quot;:&quot;#xlsr&quot;,&quot;sections&quot;:[{&quot;title&quot;:&quot;Overview&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;overview&quot;,&quot;url&quot;:&quot;#overview&quot;}]}}" data-target="SubSideMenu"> <nav class="hidden h-screen w-[270px] flex-none flex-col space-y-3 overflow-y-auto break-words border-l pt-24 pl-6 pr-10 pb-16 text-sm lg:flex 2xl:w-[305px]"><a href="#xlsr" class=" text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-xlsr"><!-- HTML_TAG_START -->XL<wbr>S-R<!-- HTML_TAG_END 
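Building on the CTC tip above, the following is a minimal sketch of greedy CTC decoding with the standard Wav2Vec2 classes. The checkpoint identifier is a placeholder (substitute any XLS-R model that has actually been fine-tuned with a CTC head), and the one-second silent waveform only stands in for real 16 kHz audio:

```python
import numpy as np
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Placeholder identifier: replace with an XLS-R checkpoint fine-tuned for ASR with a CTC head.
checkpoint = "your-org/xls-r-finetuned-ctc"

processor = Wav2Vec2Processor.from_pretrained(checkpoint)
model = Wav2Vec2ForCTC.from_pretrained(checkpoint)

# XLS-R consumes the raw waveform as a float array (here: one second of silence at 16 kHz).
waveform = np.zeros(16_000, dtype=np.float32)
inputs = processor(waveform, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# CTC outputs are decoded greedily with the tokenizer bundled in the processor.
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
print(transcription)
```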
--></a> <a href="#overview" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-overview"><!-- HTML_TAG_START --><wbr>Overview<!-- HTML_TAG_END --></a> </nav></div></div></div> <div id="doc-footer"></div></main> </div> <script> import("/front/build/kube-b0520c1/index.js"); window.moonSha = "kube-b0520c1/"; window.hubConfig = JSON.parse(`{"features":{"signupDisabled":false},"sshGitUrl":"git@hf.co","moonHttpUrl":"https://huggingface.co","captchaApiKey":"bd5f2066-93dc-4bdd-a64b-a24646ca3859","stripePublicKey":"pk_live_x2tdjFXBCvXo2FFmMybezpeM00J6gPCAAc","environment":"production","userAgent":"HuggingFace (production)"}`); </script> <!-- Stripe --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://js.stripe.com/v3/"; script.async = true; document.head.appendChild(script); } </script> <!-- Google analytics v4 --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL"; script.async = true; document.head.appendChild(script); window.dataLayer = window.dataLayer || []; function gtag() { if (window.dataLayer !== undefined) { window.dataLayer.push(arguments); } } gtag("js", new Date()); gtag("config", "G-8Q63TH4CSL", { page_path: "/docs/transformers/v4.34.0/en/model_doc/xls_r" }); /// ^ See https://developers.google.com/analytics/devguides/collection/gtagjs/pages gtag("consent", "default", { ad_storage: "denied", analytics_storage: "denied" }); /// ^ See https://developers.google.com/tag-platform/gtagjs/reference#consent /// TODO: ask the user for their consent and update this with gtag('consent', 'update') } </script> <!-- Google Analytics v3 (deprecated) --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { (function (i, s, o, g, r, a, m) { i["GoogleAnalyticsObject"] = r; (i[r] = i[r] || function () { (i[r].q = i[r].q || []).push(arguments); }), (i[r].l = 1 * new Date()); (a = s.createElement(o)), (m = s.getElementsByTagName(o)[0]); a.async = 1; a.src = g; m.parentNode.insertBefore(a, m); })(window, document, "script", "https://www.google-analytics.com/analytics.js", "ganalytics"); ganalytics("create", "UA-83738774-2", "auto"); ganalytics("send", "pageview", "/docs/transformers/v4.34.0/en/model_doc/xls_r"); } </script> <iframe name="__privateStripeMetricsController7020" frameborder="0" allowtransparency="true" scrolling="no" role="presentation" allow="payment *" src="https://js.stripe.com/v3/m-outer-27c67c0d52761104439bb051c7856ab1.html#url=https%3A%2F%2Fhuggingface.co%2Fdocs%2Ftransformers%2Fv4.34.0%2Fen%2Fmodel_doc%2Fxls_r&amp;title=XLS-R&amp;referrer=&amp;muid=NA&amp;sid=NA&amp;version=6&amp;preview=false" aria-hidden="true" tabindex="-1" style="border: none !important; margin: 0px !important; padding: 0px !important; width: 1px !important; min-width: 100% !important; overflow: hidden !important; display: block !important; visibility: hidden !important; position: fixed !important; height: 1px !important; pointer-events: none !important; user-select: none !important;"></iframe></body></html>
2023-10-05T13:33:37.560Z
XLM-RoBERTa-XL
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl
# XLM-RoBERTa-XL ## Overview The XLM-RoBERTa-XL model was proposed in [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau. The abstract from the paper is the following: _Recent work has demonstrated the effectiveness of cross-lingual language model pretraining for cross-lingual understanding. In this study, we present the results of two larger multilingual masked language models, with 3.5B and 10.7B parameters. Our two new models dubbed XLM-R XL and XLM-R XXL outperform XLM-R by 1.8% and 2.4% average accuracy on XNLI. Our model also outperforms the RoBERTa-Large model on several English tasks of the GLUE benchmark by 0.3% on average while handling 99 more languages. This suggests pretrained models with larger capacity may obtain both strong performance on high-resource languages while greatly improving low-resource languages. We make our code and models publicly available._ Tips: - XLM-RoBERTa-XL is a multilingual model trained on 100 different languages. Unlike some XLM multilingual models, it does not require `lang` tensors to understand which language is used, and should be able to determine the correct language from the input ids. This model was contributed by [Soonhwan-Kwon](https://github.com/Soonhwan-Kwon) and [stefan-it](https://huggingface.co/stefan-it). The original code can be found [here](https://github.com/pytorch/fairseq/tree/master/examples/xlmr). ## Documentation resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Causal language modeling task guide](../tasks/language_modeling) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice) ## XLMRobertaXLConfig ### class transformers.XLMRobertaXLConfig [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta_xl/configuration_xlm_roberta_xl.py#L34) ( vocab\_size = 250880, hidden\_size = 2560, num\_hidden\_layers = 36, num\_attention\_heads = 32, intermediate\_size = 10240, hidden\_act = 'gelu', hidden\_dropout\_prob = 0.1, attention\_probs\_dropout\_prob = 0.1, max\_position\_embeddings = 514, type\_vocab\_size = 1, initializer\_range = 0.02, layer\_norm\_eps = 1e-05, pad\_token\_id = 1, bos\_token\_id = 0, eos\_token\_id = 2, position\_embedding\_type = 'absolute', use\_cache = True, classifier\_dropout = None, \*\*kwargs ) This is the configuration class to store the configuration of a [XLMRobertaXLModel](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl#transformers.XLMRobertaXLModel) or a `TFXLMRobertaXLModel`. It is used to instantiate a XLM\_ROBERTA\_XL model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the XLM\_ROBERTA\_XL [facebook/xlm-roberta-xl](https://huggingface.co/facebook/xlm-roberta-xl) architecture. Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information.
Examples:

```
>>> from transformers import XLMRobertaXLConfig, XLMRobertaXLModel

>>> # Initializing a XLM_ROBERTA_XL style configuration
>>> configuration = XLMRobertaXLConfig()

>>> # Initializing a model (with random weights) from the configuration
>>> model = XLMRobertaXLModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```

## XLMRobertaXLModel ### class transformers.XLMRobertaXLModel [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py#L664) ( config, add\_pooling\_layer = True ) Parameters - **config** ([XLMRobertaXLConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl#transformers.XLMRobertaXLConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. The bare XLM-RoBERTa-xlarge Model transformer outputting raw hidden-states without any specific head on top. This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of cross-attention is added between the self-attention layers, following the architecture described in [_Attention is all you need_](https://arxiv.org/abs/1706.03762) by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. To behave as a decoder the model needs to be initialized with the `is_decoder` argument of the configuration set to `True`. To be used in a Seq2Seq model, the model needs to be initialized with both `is_decoder` argument and `add_cross_attention` set to `True`; an `encoder_hidden_states` is then expected as an input to the forward pass.
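Because decoder behaviour is purely a matter of configuration flags, a minimal sketch of that setup may help. This block is not part of the original page; it reuses the checkpoint name from the examples on this page and stands in zero tensors for real encoder states:

```
>>> from transformers import AutoTokenizer, XLMRobertaXLConfig, XLMRobertaXLModel
>>> import torch

>>> # enable decoder behaviour with cross-attention layers
>>> config = XLMRobertaXLConfig.from_pretrained("xlm-roberta-xlarge", is_decoder=True, add_cross_attention=True)
>>> decoder = XLMRobertaXLModel.from_pretrained("xlm-roberta-xlarge", config=config)

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-xlarge")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

>>> # placeholder encoder states; in a real Seq2Seq setup these come from a separate encoder
>>> encoder_hidden_states = torch.zeros(1, inputs.input_ids.shape[1], config.hidden_size)
>>> outputs = decoder(**inputs, encoder_hidden_states=encoder_hidden_states)
>>> last_hidden_state = outputs.last_hidden_state
```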
#### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py#L702) ( input\_ids: typing.Optional\[torch.Tensor\] = None, attention\_mask: typing.Optional\[torch.Tensor\] = None, token\_type\_ids: typing.Optional\[torch.Tensor\] = None, position\_ids: typing.Optional\[torch.Tensor\] = None, head\_mask: typing.Optional\[torch.Tensor\] = None, inputs\_embeds: typing.Optional\[torch.Tensor\] = None, encoder\_hidden\_states: typing.Optional\[torch.Tensor\] = None, encoder\_attention\_mask: typing.Optional\[torch.Tensor\] = None, past\_key\_values: typing.Optional\[typing.List\[torch.FloatTensor\]\] = None, use\_cache: typing.Optional\[bool\] = None, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.BaseModelOutputWithPoolingAndCrossAttentions](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions) or `tuple(torch.FloatTensor)` The [XLMRobertaXLModel](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl#transformers.XLMRobertaXLModel) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example:

```
>>> from transformers import AutoTokenizer, XLMRobertaXLModel
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-xlarge")
>>> model = XLMRobertaXLModel.from_pretrained("xlm-roberta-xlarge")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)

>>> last_hidden_states = outputs.last_hidden_state
```

## XLMRobertaXLForCausalLM ### class transformers.XLMRobertaXLForCausalLM [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py#L844) ( config ) Parameters - **config** ([XLMRobertaXLConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl#transformers.XLMRobertaXLConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. XLM-RoBERTa-xlarge Model with a `language modeling` head on top for CLM fine-tuning. This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
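The forward example below uses the RoBERTa classes; a hedged sketch of the equivalent call with `XLMRobertaXLForCausalLM` (not part of the original page, and again reusing the checkpoint name from the other examples here) would be:

```
>>> from transformers import AutoTokenizer, XLMRobertaXLForCausalLM
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-xlarge")
>>> # is_decoder=True enables the causal (left-to-right) attention mask needed for CLM
>>> model = XLMRobertaXLForCausalLM.from_pretrained("xlm-roberta-xlarge", is_decoder=True)

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)
>>> prediction_logits = outputs.logits
```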
#### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py#L864) ( input\_ids: typing.Optional\[torch.LongTensor\] = None, attention\_mask: typing.Optional\[torch.FloatTensor\] = None, token\_type\_ids: typing.Optional\[torch.LongTensor\] = None, position\_ids: typing.Optional\[torch.LongTensor\] = None, head\_mask: typing.Optional\[torch.FloatTensor\] = None, inputs\_embeds: typing.Optional\[torch.FloatTensor\] = None, encoder\_hidden\_states: typing.Optional\[torch.FloatTensor\] = None, encoder\_attention\_mask: typing.Optional\[torch.FloatTensor\] = None, labels: typing.Optional\[torch.LongTensor\] = None, past\_key\_values: typing.Optional\[typing.Tuple\[typing.Tuple\[torch.FloatTensor\]\]\] = None, use\_cache: typing.Optional\[bool\] = None, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.CausalLMOutputWithCrossAttentions](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.CausalLMOutputWithCrossAttentions) or `tuple(torch.FloatTensor)` The [XLMRobertaXLForCausalLM](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl#transformers.XLMRobertaXLForCausalLM) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example:

```
>>> from transformers import AutoTokenizer, RobertaForCausalLM, RobertaConfig
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("roberta-base")
>>> config = RobertaConfig.from_pretrained("roberta-base")
>>> config.is_decoder = True
>>> model = RobertaForCausalLM.from_pretrained("roberta-base", config=config)

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)

>>> prediction_logits = outputs.logits
```

## XLMRobertaXLForMaskedLM ### class transformers.XLMRobertaXLForMaskedLM [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py#L991) ( config ) Parameters - **config** ([XLMRobertaXLConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl#transformers.XLMRobertaXLConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. XLM-RoBERTa-xlarge Model with a `language modeling` head on top. This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
#### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py#L1014) ( input\_ids: typing.Optional\[torch.LongTensor\] = None, attention\_mask: typing.Optional\[torch.FloatTensor\] = None, token\_type\_ids: typing.Optional\[torch.LongTensor\] = None, position\_ids: typing.Optional\[torch.LongTensor\] = None, head\_mask: typing.Optional\[torch.FloatTensor\] = None, inputs\_embeds: typing.Optional\[torch.FloatTensor\] = None, encoder\_hidden\_states: typing.Optional\[torch.Tensor\] = None, encoder\_attention\_mask: typing.Optional\[torch.FloatTensor\] = None, labels: typing.Optional\[torch.LongTensor\] = None, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.MaskedLMOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.MaskedLMOutput) or `tuple(torch.FloatTensor)` The [XLMRobertaXLForMaskedLM](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl#transformers.XLMRobertaXLForMaskedLM) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example:

```
>>> from transformers import AutoTokenizer, XLMRobertaXLForMaskedLM
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-xlarge")
>>> model = XLMRobertaXLForMaskedLM.from_pretrained("xlm-roberta-xlarge")

>>> inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> # retrieve index of <mask>
>>> mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]

>>> predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)

>>> labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
>>> # mask labels of non-<mask> tokens so the loss is only computed on the masked token
>>> labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)

>>> outputs = model(**inputs, labels=labels)
```

## XLMRobertaXLForSequenceClassification ### class transformers.XLMRobertaXLForSequenceClassification [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py#L1113) ( config ) Parameters - **config** ([XLMRobertaXLConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl#transformers.XLMRobertaXLConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. XLM-RoBERTa-xlarge Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py#L1124) ( input\_ids: typing.Optional\[torch.LongTensor\] = None, attention\_mask: typing.Optional\[torch.FloatTensor\] = None, token\_type\_ids: typing.Optional\[torch.LongTensor\] = None, position\_ids: typing.Optional\[torch.LongTensor\] = None, head\_mask: typing.Optional\[torch.FloatTensor\] = None, inputs\_embeds: typing.Optional\[torch.FloatTensor\] = None, labels: typing.Optional\[torch.LongTensor\] = None, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.SequenceClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput) or `tuple(torch.FloatTensor)` The [XLMRobertaXLForSequenceClassification](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl#transformers.XLMRobertaXLForSequenceClassification) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example of single-label classification:

```
>>> import torch
>>> from transformers import AutoTokenizer, XLMRobertaXLForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-xlarge")
>>> model = XLMRobertaXLForSequenceClassification.from_pretrained("xlm-roberta-xlarge")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_class_id = logits.argmax().item()

>>> # To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
>>> num_labels = len(model.config.id2label)
>>> model = XLMRobertaXLForSequenceClassification.from_pretrained("xlm-roberta-xlarge", num_labels=num_labels)

>>> labels = torch.tensor([1])
>>> loss = model(**inputs, labels=labels).loss
```

Example of multi-label classification:

```
>>> import torch
>>> from transformers import AutoTokenizer, XLMRobertaXLForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-xlarge")
>>> model = XLMRobertaXLForSequenceClassification.from_pretrained("xlm-roberta-xlarge", problem_type="multi_label_classification")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]

>>> # To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
>>> num_labels = len(model.config.id2label)
>>> model = XLMRobertaXLForSequenceClassification.from_pretrained(
...     "xlm-roberta-xlarge", num_labels=num_labels, problem_type="multi_label_classification"
... )

>>> labels = torch.sum(
...     torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)
>>> loss = model(**inputs, labels=labels).loss
```

## XLMRobertaXLForMultipleChoice ### class transformers.XLMRobertaXLForMultipleChoice [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py#L1207) ( config ) Parameters - **config** ([XLMRobertaXLConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl#transformers.XLMRobertaXLConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. XLM-Roberta-xlarge Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py#L1217) ( input\_ids: typing.Optional\[torch.LongTensor\] = None, token\_type\_ids: typing.Optional\[torch.LongTensor\] = None, attention\_mask: typing.Optional\[torch.FloatTensor\] = None, labels: typing.Optional\[torch.LongTensor\] = None, position\_ids: typing.Optional\[torch.LongTensor\] = None, head\_mask: typing.Optional\[torch.FloatTensor\] = None, inputs\_embeds: typing.Optional\[torch.FloatTensor\] = None, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.MultipleChoiceModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.MultipleChoiceModelOutput) or `tuple(torch.FloatTensor)` The [XLMRobertaXLForMultipleChoice](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl#transformers.XLMRobertaXLForMultipleChoice) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example:

```
>>> from transformers import AutoTokenizer, XLMRobertaXLForMultipleChoice
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-xlarge")
>>> model = XLMRobertaXLForMultipleChoice.from_pretrained("xlm-roberta-xlarge")

>>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
>>> choice0 = "It is eaten with a fork and a knife."
>>> choice1 = "It is eaten while held in the hand."
>>> # choice0 is correct, batch size 1
>>> labels = torch.tensor(0).unsqueeze(0)
>>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
>>> outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels)

>>> # the linear classifier still needs to be trained
>>> loss = outputs.loss
>>> logits = outputs.logits
```

## XLMRobertaXLForTokenClassification ### class transformers.XLMRobertaXLForTokenClassification [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py#L1298) ( config ) Parameters - **config** ([XLMRobertaXLConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl#transformers.XLMRobertaXLConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. XLM-Roberta-xlarge Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py#L1312) ( input\_ids: typing.Optional\[torch.LongTensor\] = None, attention\_mask: typing.Optional\[torch.FloatTensor\] = None, token\_type\_ids: typing.Optional\[torch.LongTensor\] = None, position\_ids: typing.Optional\[torch.LongTensor\] = None, head\_mask: typing.Optional\[torch.FloatTensor\] = None, inputs\_embeds: typing.Optional\[torch.FloatTensor\] = None, labels: typing.Optional\[torch.LongTensor\] = None, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.TokenClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.TokenClassifierOutput) or `tuple(torch.FloatTensor)` The [XLMRobertaXLForTokenClassification](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl#transformers.XLMRobertaXLForTokenClassification) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example:

```
>>> from transformers import AutoTokenizer, XLMRobertaXLForTokenClassification
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-xlarge")
>>> model = XLMRobertaXLForTokenClassification.from_pretrained("xlm-roberta-xlarge")

>>> inputs = tokenizer(
...     "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
... )

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_token_class_ids = logits.argmax(-1)

>>> # Note that tokens are classified rather than input words, which means that
>>> # there might be more predicted token classes than words. Multiple token
>>> # classes might account for the same word
>>> predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]

>>> labels = predicted_token_class_ids
>>> loss = model(**inputs, labels=labels).loss
```

## XLMRobertaXLForQuestionAnswering ### class transformers.XLMRobertaXLForQuestionAnswering [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py#L1409) ( config ) Parameters - **config** ([XLMRobertaXLConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl#transformers.XLMRobertaXLConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. XLM-Roberta-xlarge Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers on top of the hidden-states output to compute `span start logits` and `span end logits`). This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py#L1419) ( input\_ids: typing.Optional\[torch.LongTensor\] = None, attention\_mask: typing.Optional\[torch.FloatTensor\] = None, token\_type\_ids: typing.Optional\[torch.LongTensor\] = None, position\_ids: typing.Optional\[torch.LongTensor\] = None, head\_mask: typing.Optional\[torch.FloatTensor\] = None, inputs\_embeds: typing.Optional\[torch.FloatTensor\] = None, start\_positions: typing.Optional\[torch.LongTensor\] = None, end\_positions: typing.Optional\[torch.LongTensor\] = None, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.QuestionAnsweringModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.QuestionAnsweringModelOutput) or `tuple(torch.FloatTensor)` The [XLMRobertaXLForQuestionAnswering](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl#transformers.XLMRobertaXLForQuestionAnswering) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example:

```
>>> from transformers import AutoTokenizer, XLMRobertaXLForQuestionAnswering
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-xlarge")
>>> model = XLMRobertaXLForQuestionAnswering.from_pretrained("xlm-roberta-xlarge")

>>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"

>>> inputs = tokenizer(question, text, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> answer_start_index = outputs.start_logits.argmax()
>>> answer_end_index = outputs.end_logits.argmax()

>>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]

>>> # target is "nice puppet"
>>> target_start_index = torch.tensor([14])
>>> target_end_index = torch.tensor([15])

>>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
>>> loss = outputs.loss
```
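As a closing illustration of the tip from the overview that no `lang` tensors are required, the following sketch (not part of the original page; it reuses the checkpoint name from the examples above) runs the same masked-LM call on an English and a French sentence without passing any language identifier:

```
>>> from transformers import AutoTokenizer, XLMRobertaXLForMaskedLM
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-xlarge")
>>> model = XLMRobertaXLForMaskedLM.from_pretrained("xlm-roberta-xlarge")

>>> # the same checkpoint handles both sentences; the language is inferred from the input ids alone
>>> for text in ["The capital of France is <mask>.", "La capitale de la France est <mask>."]:
...     inputs = tokenizer(text, return_tensors="pt")
...     with torch.no_grad():
...         logits = model(**inputs).logits
...     mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
...     print(tokenizer.decode(logits[0, mask_token_index].argmax(axis=-1)))
```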
Integration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/deepspeed&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/deepspeed&quot;},{&quot;title&quot;:&quot;Feature Extractor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/feature_extractor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/feature_extractor&quot;},{&quot;title&quot;:&quot;Image Processor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/image_processor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/image_processor&quot;}]},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Text models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALBERT&quot;,&quot;id&quot;:&quot;model_doc/albert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/albert&quot;},{&quot;title&quot;:&quot;BART&quot;,&quot;id&quot;:&quot;model_doc/bart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bart&quot;},{&quot;title&quot;:&quot;BARThez&quot;,&quot;id&quot;:&quot;model_doc/barthez&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/barthez&quot;},{&quot;title&quot;:&quot;BARTpho&quot;,&quot;id&quot;:&quot;model_doc/bartpho&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bartpho&quot;},{&quot;title&quot;:&quot;BERT&quot;,&quot;id&quot;:&quot;model_doc/bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert&quot;},{&quot;title&quot;:&quot;BertGeneration&quot;,&quot;id&quot;:&quot;model_doc/bert-generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-generation&quot;},{&quot;title&quot;:&quot;BertJapanese&quot;,&quot;id&quot;:&quot;model_doc/bert-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-japanese&quot;},{&quot;title&quot;:&quot;Bertweet&quot;,&quot;id&quot;:&quot;model_doc/bertweet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bertweet&quot;},{&quot;title&quot;:&quot;BigBird&quot;,&quot;id&quot;:&quot;model_doc/big_bird&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/big_bird&quot;},{&quot;title&quot;:&quot;BigBirdPegasus&quot;,&quot;id&quot;:&quot;model_doc/bigbird_pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bigbird_pegasus&quot;},{&quot;title&quot;:&quot;BioGpt&quot;,&quot;id&quot;:&quot;model_doc/biogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/biogpt&quot;},{&quot;title&quot;:&quot;Blenderbot&quot;,&quot;id&quot;:&quot;model_doc/blenderbot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot&quot;},{&quot;title&quot;:&quot;Blenderbot 
Small&quot;,&quot;id&quot;:&quot;model_doc/blenderbot-small&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot-small&quot;},{&quot;title&quot;:&quot;BLOOM&quot;,&quot;id&quot;:&quot;model_doc/bloom&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bloom&quot;},{&quot;title&quot;:&quot;BORT&quot;,&quot;id&quot;:&quot;model_doc/bort&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bort&quot;},{&quot;title&quot;:&quot;ByT5&quot;,&quot;id&quot;:&quot;model_doc/byt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/byt5&quot;},{&quot;title&quot;:&quot;CamemBERT&quot;,&quot;id&quot;:&quot;model_doc/camembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/camembert&quot;},{&quot;title&quot;:&quot;CANINE&quot;,&quot;id&quot;:&quot;model_doc/canine&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/canine&quot;},{&quot;title&quot;:&quot;CodeGen&quot;,&quot;id&quot;:&quot;model_doc/codegen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/codegen&quot;},{&quot;title&quot;:&quot;CodeLlama&quot;,&quot;id&quot;:&quot;model_doc/code_llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/code_llama&quot;},{&quot;title&quot;:&quot;ConvBERT&quot;,&quot;id&quot;:&quot;model_doc/convbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convbert&quot;},{&quot;title&quot;:&quot;CPM&quot;,&quot;id&quot;:&quot;model_doc/cpm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpm&quot;},{&quot;title&quot;:&quot;CPMANT&quot;,&quot;id&quot;:&quot;model_doc/cpmant&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpmant&quot;},{&quot;title&quot;:&quot;CTRL&quot;,&quot;id&quot;:&quot;model_doc/ctrl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ctrl&quot;},{&quot;title&quot;:&quot;DeBERTa&quot;,&quot;id&quot;:&quot;model_doc/deberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta&quot;},{&quot;title&quot;:&quot;DeBERTa-v2&quot;,&quot;id&quot;:&quot;model_doc/deberta-v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta-v2&quot;},{&quot;title&quot;:&quot;DialoGPT&quot;,&quot;id&quot;:&quot;model_doc/dialogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dialogpt&quot;},{&quot;title&quot;:&quot;DistilBERT&quot;,&quot;id&quot;:&quot;model_doc/distilbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/distilbert&quot;},{&quot;title&quot;:&quot;DPR&quot;,&quot;id&quot;:&quot;model_doc/dpr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpr&quot;},{&quot;title&quot;:&quot;ELECTRA&quot;,&quot;id&quot;:&quot;model_doc/electra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/electra&quot;},{&quot;title&quot;:&quot;Encoder Decoder 
Models&quot;,&quot;id&quot;:&quot;model_doc/encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encoder-decoder&quot;},{&quot;title&quot;:&quot;ERNIE&quot;,&quot;id&quot;:&quot;model_doc/ernie&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie&quot;},{&quot;title&quot;:&quot;ErnieM&quot;,&quot;id&quot;:&quot;model_doc/ernie_m&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie_m&quot;},{&quot;title&quot;:&quot;ESM&quot;,&quot;id&quot;:&quot;model_doc/esm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/esm&quot;},{&quot;title&quot;:&quot;Falcon&quot;,&quot;id&quot;:&quot;model_doc/falcon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/falcon&quot;},{&quot;title&quot;:&quot;FLAN-T5&quot;,&quot;id&quot;:&quot;model_doc/flan-t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-t5&quot;},{&quot;title&quot;:&quot;FLAN-UL2&quot;,&quot;id&quot;:&quot;model_doc/flan-ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-ul2&quot;},{&quot;title&quot;:&quot;FlauBERT&quot;,&quot;id&quot;:&quot;model_doc/flaubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flaubert&quot;},{&quot;title&quot;:&quot;FNet&quot;,&quot;id&quot;:&quot;model_doc/fnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fnet&quot;},{&quot;title&quot;:&quot;FSMT&quot;,&quot;id&quot;:&quot;model_doc/fsmt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fsmt&quot;},{&quot;title&quot;:&quot;Funnel Transformer&quot;,&quot;id&quot;:&quot;model_doc/funnel&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/funnel&quot;},{&quot;title&quot;:&quot;GPT&quot;,&quot;id&quot;:&quot;model_doc/openai-gpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/openai-gpt&quot;},{&quot;title&quot;:&quot;GPT Neo&quot;,&quot;id&quot;:&quot;model_doc/gpt_neo&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neo&quot;},{&quot;title&quot;:&quot;GPT NeoX&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox&quot;},{&quot;title&quot;:&quot;GPT NeoX Japanese&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox_japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese&quot;},{&quot;title&quot;:&quot;GPT-J&quot;,&quot;id&quot;:&quot;model_doc/gptj&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptj&quot;},{&quot;title&quot;:&quot;GPT2&quot;,&quot;id&quot;:&quot;model_doc/gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt2&quot;},{&quot;title&quot;:&quot;GPTBigCode&quot;,&quot;id&quot;:&quot;model_doc/gpt_bigcode&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode&quot;},{&quot;title&quot;:&quot;GPTSAN 
Japanese&quot;,&quot;id&quot;:&quot;model_doc/gptsan-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese&quot;},{&quot;title&quot;:&quot;GPTSw3&quot;,&quot;id&quot;:&quot;model_doc/gpt-sw3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt-sw3&quot;},{&quot;title&quot;:&quot;HerBERT&quot;,&quot;id&quot;:&quot;model_doc/herbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/herbert&quot;},{&quot;title&quot;:&quot;I-BERT&quot;,&quot;id&quot;:&quot;model_doc/ibert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ibert&quot;},{&quot;title&quot;:&quot;Jukebox&quot;,&quot;id&quot;:&quot;model_doc/jukebox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/jukebox&quot;},{&quot;title&quot;:&quot;LED&quot;,&quot;id&quot;:&quot;model_doc/led&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/led&quot;},{&quot;title&quot;:&quot;LLaMA&quot;,&quot;id&quot;:&quot;model_doc/llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama&quot;},{&quot;title&quot;:&quot;Llama2&quot;,&quot;id&quot;:&quot;model_doc/llama2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama2&quot;},{&quot;title&quot;:&quot;Longformer&quot;,&quot;id&quot;:&quot;model_doc/longformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longformer&quot;},{&quot;title&quot;:&quot;LongT5&quot;,&quot;id&quot;:&quot;model_doc/longt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longt5&quot;},{&quot;title&quot;:&quot;LUKE&quot;,&quot;id&quot;:&quot;model_doc/luke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/luke&quot;},{&quot;title&quot;:&quot;M2M100&quot;,&quot;id&quot;:&quot;model_doc/m2m_100&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/m2m_100&quot;},{&quot;title&quot;:&quot;MarianMT&quot;,&quot;id&quot;:&quot;model_doc/marian&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/marian&quot;},{&quot;title&quot;:&quot;MarkupLM&quot;,&quot;id&quot;:&quot;model_doc/markuplm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/markuplm&quot;},{&quot;title&quot;:&quot;MBart and 
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot;PLBart&quot;,&quot;id&quot;
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/
model_doc/wavlm&quot;},{&quot;title&quot;:&quot;Whisper&quot;,&quot;id&quot;:&quot;model_doc/whisper&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/whisper&quot;},{&quot;title&quot;:&quot;XLS-R&quot;,&quot;id&quot;:&quot;model_doc/xls_r&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xls_r&quot;},{&quot;title&quot;:&quot;XLSR-Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/xlsr_wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2&quot;}]},{&quot;title&quot;:&quot;Multimodal models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALIGN&quot;,&quot;id&quot;:&quot;model_doc/align&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/align&quot;},{&quot;title&quot;:&quot;AltCLIP&quot;,&quot;id&quot;:&quot;model_doc/altclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/altclip&quot;},{&quot;title&quot;:&quot;BLIP&quot;,&quot;id&quot;:&quot;model_doc/blip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip&quot;},{&quot;title&quot;:&quot;BLIP-2&quot;,&quot;id&quot;:&quot;model_doc/blip-2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip-2&quot;},{&quot;title&quot;:&quot;BridgeTower&quot;,&quot;id&quot;:&quot;model_doc/bridgetower&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bridgetower&quot;},{&quot;title&quot;:&quot;BROS&quot;,&quot;id&quot;:&quot;model_doc/bros&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bros&quot;},{&quot;title&quot;:&quot;Chinese-CLIP&quot;,&quot;id&quot;:&quot;model_doc/chinese_clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/chinese_clip&quot;},{&quot;title&quot;:&quot;CLIP&quot;,&quot;id&quot;:&quot;model_doc/clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clip&quot;},{&quot;title&quot;:&quot;CLIPSeg&quot;,&quot;id&quot;:&quot;model_doc/clipseg&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clipseg&quot;},{&quot;title&quot;:&quot;Data2Vec&quot;,&quot;id&quot;:&quot;model_doc/data2vec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/data2vec&quot;},{&quot;title&quot;:&quot;DePlot&quot;,&quot;id&quot;:&quot;model_doc/deplot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deplot&quot;},{&quot;title&quot;:&quot;Donut&quot;,&quot;id&quot;:&quot;model_doc/donut&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/donut&quot;},{&quot;title&quot;:&quot;FLAVA&quot;,&quot;id&quot;:&quot;model_doc/flava&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flava&quot;},{&quot;title&quot;:&quot;GIT&quot;,&quot;id&quot;:&quot;model_doc/git&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/git&quot;},{&quot;title&quot;:&quot;GroupViT&quot;,&quot;id&quot;:&quot;model_doc/groupvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/groupvit&quot;},{&quot;title&quot;:&quot;IDEFICS&quot;,&quot;id&quot;:&quot;model_doc/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/idefics&quot;},{&quot;title&quot;:&quot;InstructBLIP&quot;,&quot;id&quot;:&quot;model_doc/instructblip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/instructblip&quot;},{&quot;title&quot;:&quot;LayoutLM&quot;,&quot;id&quot;:&quot;model_doc/layoutlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlm&quot;},{&quot;title&quot;:&quot;LayoutLMV2&quot;,&quot;i
d&quot;:&quot;model_doc/layoutlmv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv2&quot;},{&quot;title&quot;:&quot;LayoutLMV3&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv3&quot;},{&quot;title&quot;:&quot;LayoutXLM&quot;,&quot;id&quot;:&quot;model_doc/layoutxlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutxlm&quot;},{&quot;title&quot;:&quot;LiLT&quot;,&quot;id&quot;:&quot;model_doc/lilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lilt&quot;},{&quot;title&quot;:&quot;LXMERT&quot;,&quot;id&quot;:&quot;model_doc/lxmert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lxmert&quot;},{&quot;title&quot;:&quot;MatCha&quot;,&quot;id&quot;:&quot;model_doc/matcha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/matcha&quot;},{&quot;title&quot;:&quot;MGP-STR&quot;,&quot;id&quot;:&quot;model_doc/mgp-str&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mgp-str&quot;},{&quot;title&quot;:&quot;Nougat&quot;,&quot;id&quot;:&quot;model_doc/nougat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nougat&quot;},{&quot;title&quot;:&quot;OneFormer&quot;,&quot;id&quot;:&quot;model_doc/oneformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/oneformer&quot;},{&quot;title&quot;:&quot;OWL-ViT&quot;,&quot;id&quot;:&quot;model_doc/owlvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/owlvit&quot;},{&quot;title&quot;:&quot;Perceiver&quot;,&quot;id&quot;:&quot;model_doc/perceiver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/perceiver&quot;},{&quot;title&quot;:&quot;Pix2Struct&quot;,&quot;id&quot;:&quot;model_doc/pix2struct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pix2struct&quot;},{&quot;title&quot;:&quot;Segment Anything&quot;,&quot;id&quot;:&quot;model_doc/sam&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sam&quot;},{&quot;title&quot;:&quot;Speech Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/speech-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder&quot;},{&quot;title&quot;:&quot;TAPAS&quot;,&quot;id&quot;:&quot;model_doc/tapas&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapas&quot;},{&quot;title&quot;:&quot;TrOCR&quot;,&quot;id&quot;:&quot;model_doc/trocr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trocr&quot;},{&quot;title&quot;:&quot;TVLT&quot;,&quot;id&quot;:&quot;model_doc/tvlt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tvlt&quot;},{&quot;title&quot;:&quot;ViLT&quot;,&quot;id&quot;:&quot;model_doc/vilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vilt&quot;},{&quot;title&quot;:&quot;Vision Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/vision-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder&quot;},{&quot;title&quot;:&quot;Vision Text Dual 
Encoder&quot;,&quot;id&quot;:&quot;model_doc/vision-text-dual-encoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder&quot;},{&quot;title&quot;:&quot;VisualBERT&quot;,&quot;id&quot;:&quot;model_doc/visual_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/visual_bert&quot;},{&quot;title&quot;:&quot;X-CLIP&quot;,&quot;id&quot;:&quot;model_doc/xclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xclip&quot;}]},{&quot;title&quot;:&quot;Reinforcement learning models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Decision Transformer&quot;,&quot;id&quot;:&quot;model_doc/decision_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/decision_transformer&quot;},{&quot;title&quot;:&quot;Trajectory Transformer&quot;,&quot;id&quot;:&quot;model_doc/trajectory_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer&quot;}]},{&quot;title&quot;:&quot;Time series models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Autoformer&quot;,&quot;id&quot;:&quot;model_doc/autoformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/autoformer&quot;},{&quot;title&quot;:&quot;Informer&quot;,&quot;id&quot;:&quot;model_doc/informer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/informer&quot;},{&quot;title&quot;:&quot;Time Series Transformer&quot;,&quot;id&quot;:&quot;model_doc/time_series_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/time_series_transformer&quot;}]},{&quot;title&quot;:&quot;Graph models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Graphormer&quot;,&quot;id&quot;:&quot;model_doc/graphormer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/graphormer&quot;}]}]},{&quot;title&quot;:&quot;Internal Helpers&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Custom Layers and Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/modeling_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/modeling_utils&quot;},{&quot;title&quot;:&quot;Utilities for pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/pipelines_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/pipelines_utils&quot;},{&quot;title&quot;:&quot;Utilities for Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/tokenization_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/tokenization_utils&quot;},{&quot;title&quot;:&quot;Utilities for Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/trainer_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/trainer_utils&quot;},{&quot;title&quot;:&quot;Utilities for Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/generation_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/generation_utils&quot;},{&quot;title&quot;:&quot;Utilities for Image Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/image_processing_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/image_processing_utils&quot;},{&quot;title&quot;:&quot;Utilities for Audio 
processing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/audio_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/audio_utils&quot;},{&quot;title&quot;:&quot;General Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/file_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/file_utils&quot;},{&quot;title&quot;:&quot;Utilities for Time Series&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/time_series_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/time_series_utils&quot;}]}]}],&quot;chapterId&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;docType&quot;:&quot;docs&quot;,&quot;isLoggedIn&quot;:false,&quot;lang&quot;:&quot;en&quot;,&quot;langs&quot;:[&quot;de&quot;,&quot;en&quot;,&quot;es&quot;,&quot;fr&quot;,&quot;it&quot;,&quot;ko&quot;,&quot;pt&quot;,&quot;zh&quot;],&quot;library&quot;:&quot;transformers&quot;,&quot;theme&quot;:&quot;light&quot;,&quot;version&quot;:&quot;v4.34.0&quot;,&quot;versions&quot;:[{&quot;version&quot;:&quot;main&quot;},{&quot;version&quot;:&quot;v4.34.0&quot;},{&quot;version&quot;:&quot;v4.33.3&quot;},{&quot;version&quot;:&quot;v4.33.2&quot;},{&quot;version&quot;:&quot;v4.33.0&quot;},{&quot;version&quot;:&quot;v4.32.1&quot;},{&quot;version&quot;:&quot;v4.32.0&quot;},{&quot;version&quot;:&quot;v4.31.0&quot;},{&quot;version&quot;:&quot;v4.30.0&quot;},{&quot;version&quot;:&quot;v4.29.1&quot;},{&quot;version&quot;:&quot;v4.29.0&quot;},{&quot;version&quot;:&quot;v4.28.1&quot;},{&quot;version&quot;:&quot;v4.28.0&quot;},{&quot;version&quot;:&quot;v4.27.2&quot;},{&quot;version&quot;:&quot;v4.27.1&quot;},{&quot;version&quot;:&quot;v4.27.0&quot;},{&quot;version&quot;:&quot;v4.26.1&quot;},{&quot;version&quot;:&quot;v4.26.0&quot;},{&quot;version&quot;:&quot;v4.25.1&quot;},{&quot;version&quot;:&quot;v4.24.0&quot;},{&quot;version&quot;:&quot;v4.23.1&quot;},{&quot;version&quot;:&quot;v4.23.0&quot;},{&quot;version&quot;:&quot;v4.22.2&quot;},{&quot;version&quot;:&quot;v4.22.1&quot;},{&quot;version&quot;:&quot;v4.22.0&quot;},{&quot;version&quot;:&quot;v4.21.3&quot;},{&quot;version&quot;:&quot;v4.21.2&quot;},{&quot;version&quot;:&quot;v4.21.1&quot;},{&quot;version&quot;:&quot;v4.21.0&quot;},{&quot;version&quot;:&quot;v4.20.1&quot;},{&quot;version&quot;:&quot;v4.20.0&quot;},{&quot;version&quot;:&quot;v4.19.4&quot;},{&quot;version&quot;:&quot;v4.19.3&quot;},{&quot;version&quot;:&quot;v4.19.2&quot;},{&quot;version&quot;:&quot;v4.19.0&quot;},{&quot;version&quot;:&quot;v4.18.0&quot;},{&quot;version&quot;:&quot;v4.17.0&quot;},{&quot;version&quot;:&quot;v4.16.2&quot;},{&quot;version&quot;:&quot;v4.16.1&quot;},{&quot;version&quot;:&quot;v4.16.0&quot;},{&quot;version&quot;:&quot;v4.15.0&quot;},{&quot;version&quot;:&quot;v4.14.1&quot;},{&quot;version&quot;:&quot;v4.13.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.5&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.4&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.10.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&
href="/docs/transformers/v4.34.0/en/model_doc/ul2"><!-- HTML_TAG_START -->UL2<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/umt5"><!-- HTML_TAG_START -->UMT5<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xmod"><!-- HTML_TAG_START -->X-MOD<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xglm"><!-- HTML_TAG_START -->XGLM<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm"><!-- HTML_TAG_START -->XLM<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet"><!-- HTML_TAG_START -->XLM-ProphetNet<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta"><!-- HTML_TAG_START -->XLM-RoBERTa<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="rounded-xl bg-gradient-to-br from-black to-gray-900 py-1 pr-2 pl-2 text-white first:mt-1 last:mb-4 dark:from-gray-800 dark:to-gray-900 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl"><!-- HTML_TAG_START -->XLM-RoBERTa-XL<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm-v"><!-- HTML_TAG_START -->XLM-V<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlnet"><!-- HTML_TAG_START -->XLNet<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/yoso"><!-- HTML_TAG_START -->YOSO<!-- HTML_TAG_END --> </a> </div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Vision models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span 
class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Audio models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Multimodal models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Reinforcement learning models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Time series models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Graph models<!-- HTML_TAG_END --></span> </span></span> </div></div> </div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Internal Helpers<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/modeling_utils"><!-- HTML_TAG_START -->Custom Layers and Utilities<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/pipelines_utils"><!-- HTML_TAG_START -->Utilities for pipelines<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/tokenization_utils"><!-- HTML_TAG_START -->Utilities for Tokenizers<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/trainer_utils"><!-- HTML_TAG_START -->Utilities for Trainer<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 
first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/generation_utils"><!-- HTML_TAG_START -->Utilities for Generation<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/image_processing_utils"><!-- HTML_TAG_START -->Utilities for Image Processors<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/audio_utils"><!-- HTML_TAG_START -->Utilities for Audio processing<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/file_utils"><!-- HTML_TAG_START -->General Utilities<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/time_series_utils"><!-- HTML_TAG_START -->Utilities for Time Series<!-- HTML_TAG_END --> </a> </div> </div></nav></div></div></div> <div class="z-1 min-w-0 flex-1"> <div class="px-6 pt-6 md:px-12 md:pt-16 md:pb-16"><div class="max-w-4xl mx-auto mb-10"><div class="relative overflow-hidden rounded-xl bg-gradient-to-br from-orange-300/10 py-5 px-4 ring-1 ring-orange-100/70 md:px-6 md:py-8"><img alt="Hugging Face's logo" class="absolute -right-6 -bottom-6 w-28 -rotate-45 md:hidden" src="/front/assets/huggingface_logo-noborder.svg"> <div class="mb-2 text-2xl font-bold dark:text-gray-200 md:mb-0">Join the Hugging Face community</div> <p class="mb-4 text-lg text-gray-400 dark:text-gray-300 md:mb-8">and get access to the augmented documentation experience </p> <div class="mb-8 hidden space-y-4 md:block xl:flex xl:space-y-0 xl:space-x-6"><div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-indigo-100 to-indigo-100/20 dark:to-indigo-100"><svg class="text-indigo-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Collaborate on models, datasets and Spaces </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-orange-100 to-orange-100/20 dark:to-orange-50"><svg xmlns="http://www.w3.org/2000/svg" 
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" class="text-xl text-yellow-400" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M11 15H6l7-14v8h5l-7 14v-8z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Faster examples with accelerated inference </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-gray-500/10 to-gray-500/5"><svg class="text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M14.9804 3C14.9217 3.0002 14.8631 3.00555 14.8054 3.016C11.622 3.58252 8.76073 5.30669 6.77248 7.85653C4.78422 10.4064 3.80955 13.6016 4.03612 16.8271C4.26268 20.0525 5.67447 23.0801 7.99967 25.327C10.3249 27.5738 13.3991 28.8811 16.6304 28.997C16.7944 29.003 16.9584 28.997 17.1204 28.997C19.2193 28.9984 21.2877 28.4943 23.1507 27.5274C25.0137 26.5605 26.6164 25.1592 27.8234 23.442C27.9212 23.294 27.9783 23.1229 27.9889 22.9458C27.9995 22.7687 27.9633 22.592 27.884 22.4333C27.8046 22.2747 27.6848 22.1397 27.5367 22.0421C27.3887 21.9444 27.2175 21.8875 27.0404 21.877C25.0426 21.7017 23.112 21.0693 21.3976 20.0288C19.6832 18.9884 18.231 17.5676 17.1533 15.8764C16.0756 14.1852 15.4011 12.2688 15.1822 10.2754C14.9632 8.28193 15.2055 6.26484 15.8904 4.38C15.9486 4.22913 15.97 4.06652 15.9527 3.90572C15.9354 3.74492 15.8799 3.59059 15.7909 3.45557C15.7019 3.32055 15.5819 3.20877 15.4409 3.12952C15.2999 3.05028 15.142 3.00587 14.9804 3Z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Switch between documentation themes </div></div></div> <div class="flex items-center space-x-2.5"><a href="/join"><button class="rounded-lg bg-white bg-gradient-to-br from-gray-100/20 to-gray-200/60 py-1.5 px-5 font-semibold text-gray-700 shadow-sm ring-1 ring-gray-300/60 hover:to-gray-100/70 hover:ring-gray-300/30 active:shadow-inner">Sign Up</button></a> <p class="text-gray-500 dark:text-gray-300">to get started</p></div></div></div> <div class="prose-doc prose relative mx-auto max-w-4xl break-words"> <p></p> <h1 class="relative group"><a id="xlmrobertaxl" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#xlmrobertaxl"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-5zyzix">XLM-RoBERTa-XL</span></h1> <h2 class="relative group"><a 
id="overview" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#overview"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1jsw1pg">Overview</span></h2> <p data-svelte-h="svelte-6isz6h">The XLM-RoBERTa-XL model was proposed in <a href="https://arxiv.org/abs/2105.00572" rel="nofollow">Larger-Scale Transformers for Multilingual Masked Language Modeling</a> by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau.</p> <p data-svelte-h="svelte-vfdo9a">The abstract from the paper is the following:</p> <p data-svelte-h="svelte-crr6px"><em>Recent work has demonstrated the effectiveness of cross-lingual language model pretraining for cross-lingual understanding. In this study, we present the results of two larger multilingual masked language models, with 3.5B and 10.7B parameters. Our two new models dubbed XLM-R XL and XLM-R XXL outperform XLM-R by 1.8% and 2.4% average accuracy on XNLI. Our model also outperforms the RoBERTa-Large model on several English tasks of the GLUE benchmark by 0.3% on average while handling 99 more languages. This suggests pretrained models with larger capacity may obtain both strong performance on high-resource languages while greatly improving low-resource languages. We make our code and models publicly available.</em></p> <p data-svelte-h="svelte-axv494">Tips:</p> <ul data-svelte-h="svelte-1yjc316"><li>XLM-RoBERTa-XL is a multilingual model trained on 100 different languages. Unlike some XLM multilingual models, it does not require <code>lang</code> tensors to understand which language is used, and should be able to determine the correct language from the input ids.</li></ul> <p data-svelte-h="svelte-1vkcn6f">This model was contributed by <a href="https://github.com/Soonhwan-Kwon" rel="nofollow">Soonhwan-Kwon</a> and <a href="https://huggingface.co/stefan-it" rel="nofollow">stefan-it</a>. 
## Documentation resources

- [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)
- [Causal language modeling task guide](../tasks/language_modeling)
- [Masked language modeling task guide](../tasks/masked_language_modeling)
- [Multiple choice task guide](../tasks/multiple_choice)

## XLMRobertaXLConfig
height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">XLMRobertaXLConfig</span></span></h3> <a id="transformers.XLMRobertaXLConfig" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XLMRobertaXLConfig"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta_xl/configuration_xlm_roberta_xl.py#L34" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">vocab_size<span class="opacity-60"> = 250880</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">hidden_size<span class="opacity-60"> = 2560</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">num_hidden_layers<span class="opacity-60"> = 36</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">num_attention_heads<span class="opacity-60"> = 32</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">intermediate_size<span class="opacity-60"> = 10240</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">hidden_act<span class="opacity-60"> = 'gelu'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">hidden_dropout_prob<span 
class="opacity-60"> = 0.1</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_probs_dropout_prob<span class="opacity-60"> = 0.1</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">max_position_embeddings<span class="opacity-60"> = 514</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">type_vocab_size<span class="opacity-60"> = 1</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">initializer_range<span class="opacity-60"> = 0.02</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">layer_norm_eps<span class="opacity-60"> = 1e-05</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">pad_token_id<span class="opacity-60"> = 1</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">bos_token_id<span class="opacity-60"> = 0</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">eos_token_id<span class="opacity-60"> = 2</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">position_embedding_type<span class="opacity-60"> = 'absolute'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">use_cache<span class="opacity-60"> = True</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">classifier_dropout<span class="opacity-60"> = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 15 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLConfig.vocab_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLConfig.vocab_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" 
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>vocab_size</strong> (<code>int</code>, <em>optional</em>, defaults to 250880) — Vocabulary size of the XLM_ROBERTA_XL model. Defines the number of different tokens that can be represented by the <code>inputs_ids</code> passed when calling <a href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl#transformers.XLMRobertaXLModel">XLMRobertaXLModel</a>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLConfig.hidden_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLConfig.hidden_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>hidden_size</strong> (<code>int</code>, <em>optional</em>, defaults to 2560) — Dimensionality of the encoder layers and the pooler layer.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLConfig.num_hidden_layers" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLConfig.num_hidden_layers"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" 
fill="currentColor"></path></svg></span></a> <span><strong>num_hidden_layers</strong> (<code>int</code>, <em>optional</em>, defaults to 36) — Number of hidden layers in the Transformer encoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLConfig.num_attention_heads" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLConfig.num_attention_heads"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>num_attention_heads</strong> (<code>int</code>, <em>optional</em>, defaults to 32) — Number of attention heads for each attention layer in the Transformer encoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLConfig.intermediate_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLConfig.intermediate_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>intermediate_size</strong> (<code>int</code>, <em>optional</em>, defaults to 10240) — Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLConfig.hidden_act" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLConfig.hidden_act"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 
88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>hidden_act</strong> (<code>str</code> or <code>Callable</code>, <em>optional</em>, defaults to <code>"gelu"</code>) — The non-linear activation function (function or string) in the encoder and pooler. If string, <code>"gelu"</code>, <code>"relu"</code>, <code>"silu"</code> and <code>"gelu_new"</code> are supported.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLConfig.hidden_dropout_prob" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLConfig.hidden_dropout_prob"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>hidden_dropout_prob</strong> (<code>float</code>, <em>optional</em>, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLConfig.attention_probs_dropout_prob" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLConfig.attention_probs_dropout_prob"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_probs_dropout_prob</strong> (<code>float</code>, <em>optional</em>, defaults to 0.1) — 
The dropout ratio for the attention probabilities.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLConfig.max_position_embeddings" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLConfig.max_position_embeddings"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>max_position_embeddings</strong> (<code>int</code>, <em>optional</em>, defaults to 514) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLConfig.type_vocab_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLConfig.type_vocab_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>type_vocab_size</strong> (<code>int</code>, <em>optional</em>, defaults to 1) — The vocabulary size of the <code>token_type_ids</code> passed when calling <a href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl#transformers.XLMRobertaXLModel">XLMRobertaXLModel</a> or <code>TFXLMRobertaXLModel</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLConfig.initializer_range" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLConfig.initializer_range"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" 
preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>initializer_range</strong> (<code>float</code>, <em>optional</em>, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLConfig.layer_norm_eps" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLConfig.layer_norm_eps"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>layer_norm_eps</strong> (<code>float</code>, <em>optional</em>, defaults to 1e-5) — The epsilon used by the layer normalization layers.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLConfig.position_embedding_type" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLConfig.position_embedding_type"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>position_embedding_type</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"absolute"</code>) — Type of position embedding. 
Choose one of <code>"absolute"</code>, <code>"relative_key"</code>, <code>"relative_key_query"</code>. For positional embeddings use <code>"absolute"</code>. For more information on <code>"relative_key"</code>, please refer to <a href="https://arxiv.org/abs/1803.02155" rel="nofollow">Self-Attention with Relative Position Representations (Shaw et al.)</a>. For more information on <code>"relative_key_query"</code>, please refer to <em>Method 4</em> in <a href="https://arxiv.org/abs/2009.13658" rel="nofollow">Improve Transformer Models with Better Relative Position Embeddings (Huang et al.)</a>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLConfig.use_cache" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLConfig.use_cache"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>use_cache</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether or not the model should return the last key/values attentions (not used by all models). 
Only relevant if <code>config.is_decoder=True</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLConfig.classifier_dropout" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLConfig.classifier_dropout"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>classifier_dropout</strong> (<code>float</code>, <em>optional</em>) — The dropout ratio for the classification head.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1eflfdl">This is the configuration class to store the configuration of a <a href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl#transformers.XLMRobertaXLModel">XLMRobertaXLModel</a> or a <code>TFXLMRobertaXLModel</code>. It is used to instantiate a XLM_ROBERTA_XL model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the XLM_ROBERTA_XL <a href="https://huggingface.co/facebook/xlm-roberta-xl" rel="nofollow">facebook/xlm-roberta-xl</a> architecture.</p> <p data-svelte-h="svelte-10kqkkl">Configuration objects inherit from <a href="/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig">PretrainedConfig</a> and can be used to control the model outputs. 
Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information.

Examples:

```python
>>> from transformers import XLMRobertaXLConfig, XLMRobertaXLModel

>>> # Initializing a XLM_ROBERTA_XL facebook/xlm-roberta-xl style configuration
>>> configuration = XLMRobertaXLConfig()

>>> # Initializing a model (with random weights) from the facebook/xlm-roberta-xl style configuration
>>> model = XLMRobertaXLModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
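The defaults above reproduce the full-size facebook/xlm-roberta-xl architecture. For quick experiments, any of the parameters listed above can be overridden at construction time; the scaled-down values in this sketch are purely illustrative and do not correspond to an official checkpoint:

```python
>>> from transformers import XLMRobertaXLConfig, XLMRobertaXLModel

>>> # Hypothetical, scaled-down values chosen only for illustration
>>> small_config = XLMRobertaXLConfig(
...     hidden_size=1024,
...     num_hidden_layers=6,
...     num_attention_heads=16,
...     intermediate_size=4096,
... )
>>> small_model = XLMRobertaXLModel(small_config)
>>> small_config.num_hidden_layers
6
```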
## XLMRobertaXLModel

### class transformers.XLMRobertaXLModel

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py#L664)
( config, add_pooling_layer = True )

Parameters

- **config** ([XLMRobertaXLConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl#transformers.XLMRobertaXLConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

The bare XLM-RoBERTa-xlarge Model transformer outputting raw hidden-states without any specific head on top.

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.). This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of cross-attention is added between the self-attention layers, following the architecture described in [*Attention is all you need*](https://arxiv.org/abs/1706.03762) by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. To behave as a decoder, the model needs to be initialized with the `is_decoder` argument of the configuration set to `True`. To be used in a Seq2Seq model, the model needs to be initialized with both the `is_decoder` argument and `add_cross_attention` set to `True`; an `encoder_hidden_states` is then expected as an input to the forward pass.
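The decoder setup described above can be sketched directly from the configuration class. This is a minimal, randomly initialized example; the sizes below are illustrative and much smaller than the real XLM-RoBERTa-XL dimensions:

```python
>>> from transformers import XLMRobertaXLConfig, XLMRobertaXLModel
>>> import torch

>>> # Sketch: a tiny decoder-style model with cross-attention enabled, as described
>>> # above (random weights; sizes are illustrative, not the real defaults).
>>> config = XLMRobertaXLConfig(
...     hidden_size=64, num_hidden_layers=2, num_attention_heads=4, intermediate_size=128,
...     is_decoder=True, add_cross_attention=True,
... )
>>> model = XLMRobertaXLModel(config)

>>> input_ids = torch.tensor([[0, 35, 42, 2]])  # decoder input tokens
>>> encoder_hidden_states = torch.randn(1, 5, config.hidden_size)  # from some encoder
>>> outputs = model(input_ids=input_ids, encoder_hidden_states=encoder_hidden_states)
>>> list(outputs.last_hidden_state.shape)
[1, 4, 64]
```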
#### forward

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py#L702)

( input_ids: typing.Optional[torch.Tensor] = None, attention_mask: typing.Optional[torch.Tensor] = None, token_type_ids: typing.Optional[torch.Tensor] = None, position_ids: typing.Optional[torch.Tensor] = None, head_mask: typing.Optional[torch.Tensor] = None, inputs_embeds: typing.Optional[torch.Tensor] = None, encoder_hidden_states: typing.Optional[torch.Tensor] = None, encoder_attention_mask: typing.Optional[torch.Tensor] = None, past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None, use_cache: typing.Optional[bool] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → [transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions) or `tuple(torch.FloatTensor)`

Parameters

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and `PreTrainedTokenizer.__call__()` for details. [What are input IDs?](../glossary#input-ids)
- **attention_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.
  [What are attention masks?](../glossary#attention-mask)
- **token_type_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:
  - 0 corresponds to a *sentence A* token,
  - 1 corresponds to a *sentence B* token.
  [What are token type IDs?](../glossary#token-type-ids)
- **position_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids)
- **head_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`:
  - 1 indicates the head is **not masked**,
  - 0 indicates the head is **masked**.
- **inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model’s internal embedding lookup matrix.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.
- **encoder_hidden_states** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder.
- **encoder_attention_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.
- **past_key_values** (`tuple(tuple(torch.FloatTensor))` of length `config.n_layers` with each tuple having 4 tensors of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that don’t have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)`.
- **use_cache** (`bool`, *optional*) — If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`).
Returns

[transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions) or `tuple(torch.FloatTensor)`

A [transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([XLMRobertaXLConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl#transformers.XLMRobertaXLConfig)) and inputs.

- **last_hidden_state** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`) — Sequence of hidden-states at the output of the last layer of the model.
- **pooler_output** (`torch.FloatTensor` of shape `(batch_size, hidden_size)`) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for the BERT family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
- **cross_attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` and `config.add_cross_attention=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
- **past_key_values** (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`) — Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)` and optionally, if `config.is_encoder_decoder=True`, 2 additional tensors of shape `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`. Contains pre-computed hidden-states (keys and values in the self-attention blocks and optionally, if `config.is_encoder_decoder=True`, in the cross-attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.

The [XLMRobertaXLModel](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl#transformers.XLMRobertaXLModel) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example:
```python
>>> from transformers import AutoTokenizer, XLMRobertaXLModel
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/xlm-roberta-xl")
>>> model = XLMRobertaXLModel.from_pretrained("facebook/xlm-roberta-xl")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)

>>> last_hidden_states = outputs.last_hidden_state
```
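As a small follow-up sketch (reusing `model` and `inputs` from the example above), the `output_hidden_states` and `output_attentions` arguments documented above populate the corresponding fields of the returned output:

```python
>>> outputs = model(**inputs, output_hidden_states=True, output_attentions=True)

>>> # One hidden-state tensor for the embeddings plus one per layer,
>>> # each of shape (batch_size, sequence_length, hidden_size).
>>> len(outputs.hidden_states) == model.config.num_hidden_layers + 1
True

>>> # One attention tensor per layer, each of shape
>>> # (batch_size, num_heads, sequence_length, sequence_length).
>>> len(outputs.attentions) == model.config.num_hidden_layers
True
```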
## XLMRobertaXLForCausalLM

### class transformers.XLMRobertaXLForCausalLM

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py#L844)

( config )

Parameters

- **config** ([XLMRobertaXLConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl#transformers.XLMRobertaXLConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

XLM-RoBERTa-xlarge Model with a `language modeling` head on top for CLM fine-tuning.

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.). This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
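A minimal sketch of the causal-LM loss computation with a tiny, randomly initialized model; the sizes are illustrative, and `is_decoder=True` follows the decoder usage described for [XLMRobertaXLModel](#transformers.XLMRobertaXLModel) above:

```python
>>> from transformers import XLMRobertaXLConfig, XLMRobertaXLForCausalLM
>>> import torch

>>> # Sketch: tiny random model; sizes are illustrative, not the real XL dimensions.
>>> config = XLMRobertaXLConfig(
...     hidden_size=64, num_hidden_layers=2, num_attention_heads=4,
...     intermediate_size=128, is_decoder=True,
... )
>>> model = XLMRobertaXLForCausalLM(config)

>>> input_ids = torch.tensor([[0, 31414, 232, 2]])
>>> # Passing labels computes the (shifted) causal language modeling loss.
>>> outputs = model(input_ids=input_ids, labels=input_ids)
>>> loss, logits = outputs.loss, outputs.logits
```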
#### forward

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py#L864)

( input_ids: typing.Optional[torch.LongTensor] = None, attention_mask: typing.Optional[torch.FloatTensor] = None, token_type_ids: typing.Optional[torch.LongTensor] = None, position_ids: typing.Optional[torch.LongTensor] = None, head_mask: typing.Optional[torch.FloatTensor] = None, inputs_embeds: typing.Optional[torch.FloatTensor] = None, encoder_hidden_states: typing.Optional[torch.FloatTensor] = None, encoder_attention_mask: typing.Optional[torch.FloatTensor] = None, labels: typing.Optional[torch.LongTensor] = None, past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None, use_cache: typing.Optional[bool] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → [transformers.modeling_outputs.CausalLMOutputWithCrossAttentions](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.CausalLMOutputWithCrossAttentions) or `tuple(torch.FloatTensor)`

Parameters

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and `PreTrainedTokenizer.__call__()` for details.
<a href="../glossary#input-ids">What are input IDs?</a></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForCausalLM.forward.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForCausalLM.forward.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>. <a href="../glossary#attention-mask">What are attention masks?</a></li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForCausalLM.forward.token_type_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForCausalLM.forward.token_type_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_type_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in <code>[0, 1]</code>:<p></p> <ul> <li>0 corresponds to a <em>sentence A</em> token,</li> <li>1 corresponds to a <em>sentence B</em> token. 
<a href="../glossary#token-type-ids">What are token type IDs?</a></li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForCausalLM.forward.position_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForCausalLM.forward.position_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>position_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range <code>[0, config.max_position_embeddings - 1]</code>. <a href="../glossary#position-ids">What are position IDs?</a></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForCausalLM.forward.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForCausalLM.forward.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>head_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(num_heads,)</code> or <code>(num_layers, num_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the self-attention modules. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForCausalLM.forward.inputs_embeds" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForCausalLM.forward.inputs_embeds"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>inputs_embeds</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>, <em>optional</em>) — Optionally, instead of passing <code>input_ids</code> you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert <code>input_ids</code> indices into associated vectors than the model’s internal embedding lookup matrix.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForCausalLM.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForCausalLM.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. 
See <code>attentions</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForCausalLM.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForCausalLM.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. See <code>hidden_states</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForCausalLM.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForCausalLM.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForCausalLM.forward.encoder_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForCausalLM.forward.encoder_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 
256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>encoder_hidden_states</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>, <em>optional</em>) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForCausalLM.forward.encoder_attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForCausalLM.forward.encoder_attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>encoder_attention_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForCausalLM.forward.labels" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForCausalLM.forward.labels"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>labels</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in <code>[-100, 0, ..., config.vocab_size]</code> (see <code>input_ids</code> docstring) Tokens with indices set to <code>-100</code> are ignored (masked), the loss is only computed for the tokens with labels in <code>[0, ..., config.vocab_size]</code></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForCausalLM.forward.past_key_values" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForCausalLM.forward.past_key_values"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>past_key_values</strong> (<code>tuple(tuple(torch.FloatTensor))</code> of length <code>config.n_layers</code> with each tuple having 4 tensors of shape <code>(batch_size, num_heads, sequence_length - 1, embed_size_per_head)</code>) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. 
If <code>past_key_values</code> are used, the user can optionally input only the last <code>decoder_input_ids</code> (those that don’t have their past key value states given to this model) of shape <code>(batch_size, 1)</code> instead of all <code>decoder_input_ids</code> of shape <code>(batch_size, sequence_length)</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForCausalLM.forward.use_cache" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForCausalLM.forward.use_cache"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>use_cache</strong> (<code>bool</code>, <em>optional</em>) — If set to <code>True</code>, <code>past_key_values</code> key value states are returned and can be used to speed up decoding (see <code>past_key_values</code>).</span></span> </li></ul> <div id="transformers.XLMRobertaXLForCausalLM.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.CausalLMOutputWithCrossAttentions">transformers.modeling_outputs.CausalLMOutputWithCrossAttentions</a> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.CausalLMOutputWithCrossAttentions">transformers.modeling_outputs.CausalLMOutputWithCrossAttentions</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl#transformers.XLMRobertaXLConfig">XLMRobertaXLConfig</a>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<code>torch.FloatTensor</code> of shape <code>(1,)</code>, <em>optional</em>, returned when <code>labels</code> is provided) — Language modeling loss (for next-token prediction).</p> </li> <li> <p><strong>logits</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, config.vocab_size)</code>) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of 
<code>torch.FloatTensor</code> (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> <li> <p><strong>cross_attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Cross attentions weights after the attention softmax, used to compute the weighted average in the cross-attention heads.</p> </li> <li> <p><strong>past_key_values</strong> (<code>tuple(tuple(torch.FloatTensor))</code>, <em>optional</em>, returned when <code>use_cache=True</code> is passed or when <code>config.use_cache=True</code>) — Tuple of <code>torch.FloatTensor</code> tuples of length <code>config.n_layers</code>, with each tuple containing the cached key, value states of the self-attention and the cross-attention layers if model is used in encoder-decoder setting. Only relevant if <code>config.is_decoder = True</code>.</p> <p>Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see <code>past_key_values</code> input) to speed up sequential decoding.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-1h1ral1">The <a href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl#transformers.XLMRobertaXLForCausalLM">XLMRobertaXLForCausalLM</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.XLMRobertaXLForCausalLM.forward.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForCausalLM.forward.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 
Example:

```python
>>> from transformers import AutoTokenizer, RobertaForCausalLM, RobertaConfig
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("roberta-base")
>>> config = RobertaConfig.from_pretrained("roberta-base")
>>> config.is_decoder = True
>>> model = RobertaForCausalLM.from_pretrained("roberta-base", config=config)

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)
>>> prediction_logits = outputs.logits
```
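The `past_key_values` and `use_cache` arguments documented above let the model reuse previously computed attention key/value states instead of recomputing them for the full prefix at every step. The following is a minimal sketch that continues the example above; the greedy next-token step is only an illustration of the caching mechanics, not part of the documented API:

```python
>>> # Run the prompt once and keep the cached key/value states
>>> outputs = model(**inputs, use_cache=True)
>>> past_key_values = outputs.past_key_values

>>> # Pick the next token greedily and feed only that token together with the cache
>>> next_token = outputs.logits[:, -1:].argmax(dim=-1)  # shape (batch_size, 1)
>>> outputs = model(input_ids=next_token, past_key_values=past_key_values, use_cache=True)
```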
## XLMRobertaXLForMaskedLM

### class transformers.XLMRobertaXLForMaskedLM

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py#L991)

( config )

Parameters

- **config** ([XLMRobertaXLConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl#transformers.XLMRobertaXLConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

XLM-RoBERTa-xlarge Model with a `language modeling` head on top.

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.). This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
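As the `config` parameter note above says, instantiating the class from a configuration alone builds the architecture with randomly initialized weights, while `from_pretrained()` also loads the checkpoint weights. A minimal sketch of the two paths (the `facebook/xlm-roberta-xl` checkpoint name is used purely for illustration):

```python
>>> from transformers import XLMRobertaXLConfig, XLMRobertaXLForMaskedLM

>>> # Build the model from a configuration only: weights are randomly initialized
>>> configuration = XLMRobertaXLConfig()
>>> model = XLMRobertaXLForMaskedLM(configuration)

>>> # Load the configuration together with the pretrained weights from a checkpoint
>>> model = XLMRobertaXLForMaskedLM.from_pretrained("facebook/xlm-roberta-xl")
```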
dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_type_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">position_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_mask<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">inputs_embeds<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_hidden_states<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_attention_mask<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">labels<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.MaskedLMOutput">transformers.modeling_outputs.MaskedLMOutput</a> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 11 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a 
id="transformers.XLMRobertaXLForMaskedLM.forward.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForMaskedLM.forward.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details. <a href="../glossary#input-ids">What are input IDs?</a></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForMaskedLM.forward.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForMaskedLM.forward.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>. 
<a href="../glossary#attention-mask">What are attention masks?</a></li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForMaskedLM.forward.token_type_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForMaskedLM.forward.token_type_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_type_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in <code>[0, 1]</code>:<p></p> <ul> <li>0 corresponds to a <em>sentence A</em> token,</li> <li>1 corresponds to a <em>sentence B</em> token. <a href="../glossary#token-type-ids">What are token type IDs?</a></li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForMaskedLM.forward.position_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForMaskedLM.forward.position_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>position_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range <code>[0, config.max_position_embeddings - 1]</code>. 
<a href="../glossary#position-ids">What are position IDs?</a></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForMaskedLM.forward.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForMaskedLM.forward.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>head_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(num_heads,)</code> or <code>(num_layers, num_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the self-attention modules. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForMaskedLM.forward.inputs_embeds" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForMaskedLM.forward.inputs_embeds"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>inputs_embeds</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>, <em>optional</em>) — Optionally, instead of passing <code>input_ids</code> you can choose to directly pass an embedded representation. 
This is useful if you want more control over how to convert <code>input_ids</code> indices into associated vectors than the model’s internal embedding lookup matrix.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForMaskedLM.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForMaskedLM.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. See <code>attentions</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForMaskedLM.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForMaskedLM.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. 
See <code>hidden_states</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForMaskedLM.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForMaskedLM.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForMaskedLM.forward.labels" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForMaskedLM.forward.labels"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>labels</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Labels for computing the masked language modeling loss. 
Indices should be in <code>[-100, 0, ..., config.vocab_size]</code> (see <code>input_ids</code> docstring) Tokens with indices set to <code>-100</code> are ignored (masked), the loss is only computed for the tokens with labels in <code>[0, ..., config.vocab_size]</code></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForMaskedLM.forward.kwargs" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForMaskedLM.forward.kwargs"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>kwargs</strong> (<code>Dict[str, any]</code>, optional, defaults to <em>{}</em>) — Used to hide legacy arguments that have been deprecated.</span></span> </li></ul> <div id="transformers.XLMRobertaXLForMaskedLM.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.MaskedLMOutput">transformers.modeling_outputs.MaskedLMOutput</a> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.MaskedLMOutput">transformers.modeling_outputs.MaskedLMOutput</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl#transformers.XLMRobertaXLConfig">XLMRobertaXLConfig</a>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<code>torch.FloatTensor</code> of shape <code>(1,)</code>, <em>optional</em>, returned when <code>labels</code> is provided) — Masked language modeling (MLM) loss.</p> </li> <li> <p><strong>logits</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, config.vocab_size)</code>) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at 
the output of each layer plus the optional initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-1v95nnh">The <a href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl#transformers.XLMRobertaXLForMaskedLM">XLMRobertaXLForMaskedLM</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.XLMRobertaXLForMaskedLM.forward.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForMaskedLM.forward.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 
```python
>>> from transformers import AutoTokenizer, XLMRobertaXLForMaskedLM
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-xlarge")
>>> model = XLMRobertaXLForMaskedLM.from_pretrained("xlm-roberta-xlarge")

>>> inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> # retrieve index of <mask>
>>> mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]

>>> predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)

>>> labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
>>> # mask labels of non-<mask> tokens
>>> labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)

>>> outputs = model(**inputs, labels=labels)
```
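The `output_hidden_states` and `output_attentions` flags described in the parameter list can be combined with the same setup. The following is a minimal sketch, not part of the original reference, that reuses the `model`, `tokenizer`, and `inputs` from the example above to inspect those optional outputs:

```python
>>> # sketch only: request the optional outputs documented above
>>> with torch.no_grad():
...     outputs = model(**inputs, output_hidden_states=True, output_attentions=True)

>>> # one tensor for the embedding output plus one per layer,
>>> # each of shape (batch_size, sequence_length, hidden_size)
>>> len(outputs.hidden_states), outputs.hidden_states[-1].shape

>>> # one tensor per layer, each of shape (batch_size, num_heads, sequence_length, sequence_length)
>>> len(outputs.attentions), outputs.attentions[-1].shape
```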
## XLMRobertaXLForSequenceClassification

### class transformers.XLMRobertaXLForSequenceClassification

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py#L1113)

( config )

Parameters

- **config** ([XLMRobertaXLConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl#transformers.XLMRobertaXLConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

XLM-RoBERTa-xlarge Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output), e.g. for GLUE tasks.

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.). This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
#### forward

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py#L1124)

( input_ids: typing.Optional[torch.LongTensor] = None, attention_mask: typing.Optional[torch.FloatTensor] = None, token_type_ids: typing.Optional[torch.LongTensor] = None, position_ids: typing.Optional[torch.LongTensor] = None, head_mask: typing.Optional[torch.FloatTensor] = None, inputs_embeds: typing.Optional[torch.FloatTensor] = None, labels: typing.Optional[torch.LongTensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → [transformers.modeling_outputs.SequenceClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput) or `tuple(torch.FloatTensor)`

Parameters

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.\_\_call\_\_()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details. [What are input IDs?](../glossary#input-ids)
- **attention_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask)
- **token_type_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: 0 corresponds to a *sentence A* token, 1 corresponds to a *sentence B* token. [What are token type IDs?](../glossary#token-type-ids)
- **position_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids)
- **head_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: 1 indicates the head is **not masked**, 0 indicates the head is **masked**.
- **inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.
- **labels** (`torch.LongTensor` of shape `(batch_size,)`, *optional*) — Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss); if `config.num_labels > 1` a classification loss is computed (Cross-Entropy).

Returns: [transformers.modeling_outputs.SequenceClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput) or `tuple(torch.FloatTensor)`

A [transformers.modeling_outputs.SequenceClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([XLMRobertaXLConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl#transformers.XLMRobertaXLConfig)) and inputs.

- **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) — Classification (or regression if `config.num_labels==1`) loss.
- **logits** (`torch.FloatTensor` of shape `(batch_size, config.num_labels)`) — Classification (or regression if `config.num_labels==1`) scores (before SoftMax).
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The [XLMRobertaXLForSequenceClassification](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl#transformers.XLMRobertaXLForSequenceClassification) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example of single-label classification:

```python
>>> import torch
>>> from transformers import AutoTokenizer, XLMRobertaXLForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-xlarge")
>>> model = XLMRobertaXLForSequenceClassification.from_pretrained("xlm-roberta-xlarge")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_class_id = logits.argmax().item()

>>> # To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
>>> num_labels = len(model.config.id2label)
>>> model = XLMRobertaXLForSequenceClassification.from_pretrained("xlm-roberta-xlarge", num_labels=num_labels)

>>> labels = torch.tensor([1])
>>> loss = model(**inputs, labels=labels).loss
```

Example of multi-label classification:

```python
>>> import torch
>>> from transformers import AutoTokenizer, XLMRobertaXLForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-xlarge")
>>> model = XLMRobertaXLForSequenceClassification.from_pretrained("xlm-roberta-xlarge", problem_type="multi_label_classification")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]

>>> # To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
>>> num_labels = len(model.config.id2label)
>>> model = XLMRobertaXLForSequenceClassification.from_pretrained(
...     "xlm-roberta-xlarge", num_labels=num_labels, problem_type="multi_label_classification"
... )

>>> labels = torch.sum(
...     torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)
>>> loss = model(**inputs, labels=labels).loss
```
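The two examples above cover single- and multi-label classification. For the regression case mentioned under the `labels` parameter (`config.num_labels == 1`, Mean-Square loss), a minimal sketch along the same lines, reusing the `tokenizer` from above, could look as follows; the float target value is purely illustrative:

```python
>>> # sketch only: with num_labels=1 the head computes a regression (Mean-Square) loss
>>> model = XLMRobertaXLForSequenceClassification.from_pretrained("xlm-roberta-xlarge", num_labels=1)

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> target = torch.tensor([1.0])  # hypothetical regression target; must be a float tensor
>>> loss = model(**inputs, labels=target).loss
```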
56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py#L1207" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForMultipleChoice.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForMultipleChoice.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl#transformers.XLMRobertaXLConfig">XLMRobertaXLConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-zks70n">XLM-Roberta-xlarge Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks.</p> <p data-svelte-h="svelte-1nlcy0z">This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel">PreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) 
This model is also a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XLMRobertaXLForMultipleChoice.forward"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>forward</span></h4> <a id="transformers.XLMRobertaXLForMultipleChoice.forward" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XLMRobertaXLForMultipleChoice.forward"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py#L1217" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60">: 
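Conceptually, the multiple choice head scores each candidate answer with a single linear unit applied to the pooled encoder output, and the choice with the highest score wins. The following is a minimal toy sketch of that shape handling only; the embedding "encoder" and variable names are assumptions for illustration, not the library's internal modules.

```python
import torch
import torch.nn as nn

# Toy sketch of multiple-choice scoring for inputs of shape
# (batch_size, num_choices, sequence_length).
hidden_size, num_choices, batch_size, seq_len = 16, 2, 1, 8

embed = nn.Embedding(100, hidden_size)   # stand-in for the real encoder
classifier = nn.Linear(hidden_size, 1)   # one score per choice

input_ids = torch.randint(0, 100, (batch_size, num_choices, seq_len))
flat_ids = input_ids.view(-1, seq_len)                     # (batch * num_choices, seq_len)
pooled = embed(flat_ids).mean(dim=1)                       # toy "pooled output"
logits = classifier(pooled).view(batch_size, num_choices)  # (batch_size, num_choices)

labels = torch.tensor([0])                                 # index of the correct choice
loss = nn.functional.cross_entropy(logits, labels)
```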
#### forward

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py#L1217)

( input_ids: typing.Optional[torch.LongTensor] = None, token_type_ids: typing.Optional[torch.LongTensor] = None, attention_mask: typing.Optional[torch.FloatTensor] = None, labels: typing.Optional[torch.LongTensor] = None, position_ids: typing.Optional[torch.LongTensor] = None, head_mask: typing.Optional[torch.FloatTensor] = None, inputs_embeds: typing.Optional[torch.FloatTensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → [transformers.modeling_outputs.MultipleChoiceModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.MultipleChoiceModelOutput) or `tuple(torch.FloatTensor)`

**Parameters**

- **input_ids** (`torch.LongTensor` of shape `(batch_size, num_choices, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See `PreTrainedTokenizer.encode()` and `PreTrainedTokenizer.__call__()` for details. [What are input IDs?](../glossary#input-ids)
- **attention_mask** (`torch.FloatTensor` of shape `(batch_size, num_choices, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask)
- **token_type_ids** (`torch.LongTensor` of shape `(batch_size, num_choices, sequence_length)`, *optional*) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: 0 corresponds to a *sentence A* token, 1 corresponds to a *sentence B* token. [What are token type IDs?](../glossary#token-type-ids)
- **position_ids** (`torch.LongTensor` of shape `(batch_size, num_choices, sequence_length)`, *optional*) — Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids)
- **head_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: 1 indicates the head is **not masked**, 0 indicates the head is **masked**.
- **inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, num_choices, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.
- **labels** (`torch.LongTensor` of shape `(batch_size,)`, *optional*) — Labels for computing the multiple choice classification loss. Indices should be in `[0, ..., num_choices-1]`, where `num_choices` is the size of the second dimension of the input tensors. (See `input_ids` above.)

**Returns**

[transformers.modeling_outputs.MultipleChoiceModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.MultipleChoiceModelOutput) or `tuple(torch.FloatTensor)`

A [transformers.modeling_outputs.MultipleChoiceModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.MultipleChoiceModelOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([XLMRobertaXLConfig](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl#transformers.XLMRobertaXLConfig)) and inputs.

- **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) — Classification loss.
- **logits** (`torch.FloatTensor` of shape `(batch_size, num_choices)`) — Classification scores (before SoftMax). *num_choices* is the second dimension of the input tensors (see *input_ids* above).
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The [XLMRobertaXLForMultipleChoice](/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl#transformers.XLMRobertaXLForMultipleChoice) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:

```python
>>> from transformers import AutoTokenizer, XLMRobertaXLForMultipleChoice
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-xlarge")
>>> model = XLMRobertaXLForMultipleChoice.from_pretrained("xlm-roberta-xlarge")

>>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
>>> choice0 = "It is eaten with a fork and a knife."
>>> choice1 = "It is eaten while held in the hand."
>>> labels = torch.tensor(0).unsqueeze(0)  # choice0 is correct (according to Wikipedia ;)), batch size 1

>>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
>>> outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels)  # batch size is 1

>>> # the linear classifier still needs to be trained
>>> loss = outputs.loss
>>> logits = outputs.logits
```
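To turn the logits into an actual prediction, one would typically take the argmax over the `num_choices` dimension. A short sketch continuing the example above (variable names carried over; with an untrained classifier the selected choice is essentially arbitrary):

```python
>>> predicted_choice = logits.argmax(dim=-1).item()  # index into [choice0, choice1]
>>> print([choice0, choice1][predicted_choice])
```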
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py#L1298" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForTokenClassification.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForTokenClassification.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl#transformers.XLMRobertaXLConfig">XLMRobertaXLConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. 
Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-5f23n8">XLM-Roberta-xlarge Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks.</p> <p data-svelte-h="svelte-1nlcy0z">This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel">PreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XLMRobertaXLForTokenClassification.forward"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>forward</span></h4> <a id="transformers.XLMRobertaXLForTokenClassification.forward" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XLMRobertaXLForTokenClassification.forward"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 
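Conceptually, the token classification head is just a per-token linear projection from the encoder's hidden states to `num_labels` scores, trained with a cross-entropy loss over every position. The following toy sketch illustrates that idea only; the tensor shapes match the docstring below, but the names and random tensors are assumptions, not the library's internal code.

```python
import torch
import torch.nn as nn

# Toy illustration of a token classification head: every position in the
# sequence gets its own label scores from a shared Linear layer.
batch_size, seq_len, hidden_size, num_labels = 1, 6, 16, 5

hidden_states = torch.randn(batch_size, seq_len, hidden_size)  # stand-in for encoder output
head = nn.Linear(hidden_size, num_labels)

logits = head(hidden_states)                                    # (batch, seq_len, num_labels)
labels = torch.randint(0, num_labels, (batch_size, seq_len))    # e.g. one NER tag per token
loss = nn.functional.cross_entropy(logits.view(-1, num_labels), labels.view(-1))
```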
56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py#L1312" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_type_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">position_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_mask<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">inputs_embeds<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">labels<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.TokenClassifierOutput">transformers.modeling_outputs.TokenClassifierOutput</a> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 
dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 10 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForTokenClassification.forward.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForTokenClassification.forward.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details. 
<a href="../glossary#input-ids">What are input IDs?</a></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForTokenClassification.forward.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForTokenClassification.forward.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>. <a href="../glossary#attention-mask">What are attention masks?</a></li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForTokenClassification.forward.token_type_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForTokenClassification.forward.token_type_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_type_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in <code>[0, 1]</code>:<p></p> <ul> <li>0 corresponds to a <em>sentence A</em> token,</li> <li>1 corresponds to a <em>sentence B</em> token. 
<a href="../glossary#token-type-ids">What are token type IDs?</a></li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForTokenClassification.forward.position_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForTokenClassification.forward.position_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>position_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range <code>[0, config.max_position_embeddings - 1]</code>. <a href="../glossary#position-ids">What are position IDs?</a></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForTokenClassification.forward.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForTokenClassification.forward.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>head_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(num_heads,)</code> or <code>(num_layers, num_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the self-attention modules. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForTokenClassification.forward.inputs_embeds" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForTokenClassification.forward.inputs_embeds"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>inputs_embeds</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>, <em>optional</em>) — Optionally, instead of passing <code>input_ids</code> you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert <code>input_ids</code> indices into associated vectors than the model’s internal embedding lookup matrix.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForTokenClassification.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForTokenClassification.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. 
See <code>attentions</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForTokenClassification.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForTokenClassification.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. See <code>hidden_states</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForTokenClassification.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForTokenClassification.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForTokenClassification.forward.labels" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForTokenClassification.forward.labels"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" 
preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>labels</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Labels for computing the token classification loss. Indices should be in <code>[0, ..., config.num_labels - 1]</code>.</span></span> </li></ul> <div id="transformers.XLMRobertaXLForTokenClassification.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.TokenClassifierOutput">transformers.modeling_outputs.TokenClassifierOutput</a> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.TokenClassifierOutput">transformers.modeling_outputs.TokenClassifierOutput</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl#transformers.XLMRobertaXLConfig">XLMRobertaXLConfig</a>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<code>torch.FloatTensor</code> of shape <code>(1,)</code>, <em>optional</em>, returned when <code>labels</code> is provided) — Classification loss.</p> </li> <li> <p><strong>logits</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, config.num_labels)</code>) — Classification scores (before SoftMax).</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-jthbi9">The <a 
href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl#transformers.XLMRobertaXLForTokenClassification">XLMRobertaXLForTokenClassification</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.XLMRobertaXLForTokenClassification.forward.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForTokenClassification.forward.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, XLMRobertaXLForTokenClassification <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"xlm-roberta-xlarge"</span>) <span 
class="hljs-meta">&gt;&gt;&gt; </span>model = XLMRobertaXLForTokenClassification.from_pretrained(<span class="hljs-string">"xlm-roberta-xlarge"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer( <span class="hljs-meta">... </span> <span class="hljs-string">"HuggingFace is a company based in Paris and New York"</span>, add_special_tokens=<span class="hljs-literal">False</span>, return_tensors=<span class="hljs-string">"pt"</span> <span class="hljs-meta">... </span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">with</span> torch.no_grad(): <span class="hljs-meta">... </span> logits = model(**inputs).logits <span class="hljs-meta">&gt;&gt;&gt; </span>predicted_token_class_ids = logits.argmax(-<span class="hljs-number">1</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># Note that tokens are classified rather then input words which means that</span> <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># there might be more predicted token classes than words.</span> <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># Multiple token classes might account for the same word</span> <span class="hljs-meta">&gt;&gt;&gt; </span>predicted_tokens_classes = [model.config.id2label[t.item()] <span class="hljs-keyword">for</span> t <span class="hljs-keyword">in</span> predicted_token_class_ids[<span class="hljs-number">0</span>]] <span class="hljs-meta">&gt;&gt;&gt; </span>labels = predicted_token_class_ids <span class="hljs-meta">&gt;&gt;&gt; </span>loss = model(**inputs, labels=labels).loss</pre></div></div></div></div> <h2 class="relative group"><a id="transformers.XLMRobertaXLForQuestionAnswering" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForQuestionAnswering"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-pkcjne">XLMRobertaXLForQuestionAnswering</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XLMRobertaXLForQuestionAnswering"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path 
class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">XLMRobertaXLForQuestionAnswering</span></span></h3> <a id="transformers.XLMRobertaXLForQuestionAnswering" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XLMRobertaXLForQuestionAnswering"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py#L1409" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForQuestionAnswering.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForQuestionAnswering.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 
79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl#transformers.XLMRobertaXLConfig">XLMRobertaXLConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-qwbgqu">XLM-Roberta-xlarge Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute <code>span start logits</code> and <code>span end logits</code>).</p> <p data-svelte-h="svelte-1nlcy0z">This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel">PreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XLMRobertaXLForQuestionAnswering.forward"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>forward</span></h4> <a id="transformers.XLMRobertaXLForQuestionAnswering.forward" class="header-link invisible with-hover:group-hover:visible pr-2" 
href="#transformers.XLMRobertaXLForQuestionAnswering.forward"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py#L1419" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_type_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">position_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_mask<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">inputs_embeds<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">start_positions<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">end_positions<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white 
dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.QuestionAnsweringModelOutput">transformers.modeling_outputs.QuestionAnsweringModelOutput</a> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 11 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForQuestionAnswering.forward.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForQuestionAnswering.forward.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details. 
<a href="../glossary#input-ids">What are input IDs?</a></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForQuestionAnswering.forward.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForQuestionAnswering.forward.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>. <a href="../glossary#attention-mask">What are attention masks?</a></li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForQuestionAnswering.forward.token_type_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForQuestionAnswering.forward.token_type_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_type_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in <code>[0, 1]</code>:<p></p> <ul> <li>0 corresponds to a <em>sentence A</em> token,</li> <li>1 corresponds to a <em>sentence B</em> token. 
<a href="../glossary#token-type-ids">What are token type IDs?</a></li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForQuestionAnswering.forward.position_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForQuestionAnswering.forward.position_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>position_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range <code>[0, config.max_position_embeddings - 1]</code>. <a href="../glossary#position-ids">What are position IDs?</a></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForQuestionAnswering.forward.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForQuestionAnswering.forward.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>head_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(num_heads,)</code> or <code>(num_layers, num_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the self-attention modules. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForQuestionAnswering.forward.inputs_embeds" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForQuestionAnswering.forward.inputs_embeds"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>inputs_embeds</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>, <em>optional</em>) — Optionally, instead of passing <code>input_ids</code> you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert <code>input_ids</code> indices into associated vectors than the model’s internal embedding lookup matrix.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForQuestionAnswering.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForQuestionAnswering.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. 
See <code>attentions</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForQuestionAnswering.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForQuestionAnswering.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. See <code>hidden_states</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForQuestionAnswering.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForQuestionAnswering.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForQuestionAnswering.forward.start_positions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForQuestionAnswering.forward.start_positions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" 
preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>start_positions</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size,)</code>, <em>optional</em>) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (<code>sequence_length</code>). Position outside of the sequence are not taken into account for computing the loss.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLMRobertaXLForQuestionAnswering.forward.end_positions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLMRobertaXLForQuestionAnswering.forward.end_positions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>end_positions</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size,)</code>, <em>optional</em>) — Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (<code>sequence_length</code>). 
Position outside of the sequence are not taken into account for computing the loss.</span></span> </li></ul> <div id="transformers.XLMRobertaXLForQuestionAnswering.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.QuestionAnsweringModelOutput">transformers.modeling_outputs.QuestionAnsweringModelOutput</a> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.QuestionAnsweringModelOutput">transformers.modeling_outputs.QuestionAnsweringModelOutput</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl#transformers.XLMRobertaXLConfig">XLMRobertaXLConfig</a>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<code>torch.FloatTensor</code> of shape <code>(1,)</code>, <em>optional</em>, returned when <code>labels</code> is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.</p> </li> <li> <p><strong>start_logits</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>) — Span-start scores (before SoftMax).</p> </li> <li> <p><strong>end_logits</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>) — Span-end scores (before SoftMax).</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-msegp7">The <a href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl#transformers.XLMRobertaXLForQuestionAnswering">XLMRobertaXLForQuestionAnswering</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre 
Example:

```python
>>> from transformers import AutoTokenizer, XLMRobertaXLForQuestionAnswering
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-xlarge")
>>> model = XLMRobertaXLForQuestionAnswering.from_pretrained("xlm-roberta-xlarge")

>>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"

>>> inputs = tokenizer(question, text, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> answer_start_index = outputs.start_logits.argmax()
>>> answer_end_index = outputs.end_logits.argmax()

>>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]

>>> # target is "nice puppet"
>>> target_start_index = torch.tensor([14])
>>> target_end_index = torch.tensor([15])

>>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
>>> loss = outputs.loss
```
2023-10-05T13:33:37.859Z
XLSR-Wav2Vec2
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2
# XLSR-Wav2Vec2

## Overview

The XLSR-Wav2Vec2 model was proposed in [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli.

The abstract from the paper is the following:

_This paper presents XLSR which learns cross-lingual speech representations by pretraining a single model from the raw waveform of speech in multiple languages. We build on wav2vec 2.0 which is trained by solving a contrastive task over masked latent speech representations and jointly learns a quantization of the latents shared across languages. The resulting model is fine-tuned on labeled data and experiments show that cross-lingual pretraining significantly outperforms monolingual pretraining. On the CommonVoice benchmark, XLSR shows a relative phoneme error rate reduction of 72% compared to the best known results. On BABEL, our approach improves word error rate by 16% relative compared to a comparable system. Our approach enables a single multilingual speech recognition model which is competitive to strong individual models. Analysis shows that the latent discrete speech representations are shared across languages with increased sharing for related languages. We hope to catalyze research in low-resource speech understanding by releasing XLSR-53, a large model pretrained in 53 languages._

Tips:

- XLSR-Wav2Vec2 is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
- XLSR-Wav2Vec2 was trained using connectionist temporal classification (CTC), so the model output has to be decoded using [Wav2Vec2CTCTokenizer](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2CTCTokenizer); see the inference sketch below.

XLSR-Wav2Vec2’s architecture is based on the Wav2Vec2 model, so one can refer to [Wav2Vec2’s documentation page](wav2vec2).

The original code can be found [here](https://github.com/pytorch/fairseq/tree/master/fairseq/models/wav2vec).
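The two tips above amount to a standard CTC inference loop: pass the raw 16 kHz waveform through a processor, run the model, and greedily decode the logits with the CTC tokenizer. The following is a minimal sketch of that loop, not an official recipe; the checkpoint id is an assumption, so substitute any XLSR-Wav2Vec2 model from the Hub that has been fine-tuned with a CTC head.

```python
# Minimal sketch of CTC inference with an XLSR-Wav2Vec2 checkpoint.
# The checkpoint id below is an assumption: swap in any XLSR-Wav2Vec2
# model fine-tuned with a CTC head.
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

checkpoint = "facebook/wav2vec2-large-xlsr-53-german"  # assumed fine-tuned checkpoint
processor = Wav2Vec2Processor.from_pretrained(checkpoint)
model = Wav2Vec2ForCTC.from_pretrained(checkpoint)

# The model accepts a float array of the raw waveform sampled at 16 kHz;
# a one-second silent clip stands in for real audio here.
waveform = torch.zeros(16_000)
inputs = processor(waveform.numpy(), sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding: pick the most likely token per frame, then let the
# tokenizer collapse repeated tokens and blanks into the final transcription.
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
print(transcription)
```

Because the CTC tokenizer handles the collapsing of repeated tokens and blanks, no decoding logic beyond the per-frame argmax is needed in this sketch.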
<!DOCTYPE html><html class=""><head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no"> <meta name="description" content="We’re on a journey to advance and democratize artificial intelligence through open source and open science."> <meta property="fb:app_id" content="1321688464574422"> <meta name="twitter:card" content="summary_large_image"> <meta name="twitter:site" content="@huggingface"> <meta property="og:title" content="XLSR-Wav2Vec2"> <meta property="og:type" content="website"> <meta property="og:url" content="https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2"> <meta property="og:image" content="https://huggingface.co/front/thumbnails/docs/transformers.png"> <link rel="stylesheet" href="/front/build/kube-5e23f38/style.css"> <link rel="preconnect" href="https://fonts.gstatic.com"> <link href="https://fonts.googleapis.com/css2?family=Source+Sans+Pro:ital,wght@0,200;0,300;0,400;0,600;0,700;0,900;1,200;1,300;1,400;1,600;1,700;1,900&amp;display=swap" rel="stylesheet"> <link href="https://fonts.googleapis.com/css2?family=IBM+Plex+Mono:wght@400;600;700&amp;display=swap" rel="stylesheet"> <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/KaTeX/0.12.0/katex.min.css" as="style" onload="this.onload=null;this.rel='stylesheet'"> <noscript> <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/KaTeX/0.12.0/katex.min.css" /> </noscript> <title>XLSR-Wav2Vec2</title> <script async="" src="https://www.google-analytics.com/analytics.js"></script><script defer="" data-domain="huggingface.co" src="/js/script.js"></script> <script src="https://js.stripe.com/v3/" async=""></script><script src="https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL" async=""></script><meta http-equiv="origin-trial" content="AymqwRC7u88Y4JPvfIF2F37QKylC04248hLCdJAsh8xgOfe/dVJPV3XS3wLFca1ZMVOtnBfVjaCMTVudWM//5g4AAAB7eyJvcmlnaW4iOiJodHRwczovL3d3dy5nb29nbGV0YWdtYW5hZ2VyLmNvbTo0NDMiLCJmZWF0dXJlIjoiUHJpdmFjeVNhbmRib3hBZHNBUElzIiwiZXhwaXJ5IjoxNjk1MTY3OTk5LCJpc1RoaXJkUGFydHkiOnRydWV9"></head> <body class="flex flex-col min-h-screen bg-white dark:bg-gray-950 text-black DocBuilderPage"> <div class="flex min-h-screen flex-col"> <div class="SVELTE_HYDRATER contents" data-props="{&quot;classNames&quot;:&quot;&quot;,&quot;isWide&quot;:true,&quot;isZh&quot;:false}" data-target="MainHeader"><header class="border-b border-gray-100 "><div class="w-full px-4 flex h-16 items-center"><div class="flex flex-1 items-center"><a class="mr-5 flex flex-none items-center lg:mr-6" href="/"><img alt="Hugging Face's logo" class="w-7 md:mr-2" src="/front/assets/huggingface_logo-noborder.svg"> <span class="hidden whitespace-nowrap text-lg font-bold md:block">Hugging Face</span></a> <div class="relative flex-1 lg:max-w-sm mr-2 sm:mr-4 lg:mr-6"><input autocomplete="off" class="w-full dark:bg-gray-950 pl-8 form-input-alt h-9 pr-3 focus:shadow-xl" name="" placeholder="Search models, datasets, users..." 
spellcheck="false" type="text"> <svg class="absolute left-2.5 text-gray-400 top-1/2 transform -translate-y-1/2" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg> </div> <div class="flex flex-none items-center justify-center p-0.5 place-self-stretch lg:hidden"><button class="relative z-40 flex h-6 w-8 items-center justify-center" type="button"><svg width="1em" height="1em" viewBox="0 0 10 10" class="text-xl" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" preserveAspectRatio="xMidYMid meet" fill="currentColor"><path fill-rule="evenodd" clip-rule="evenodd" d="M1.65039 2.9999C1.65039 2.8066 1.80709 2.6499 2.00039 2.6499H8.00039C8.19369 2.6499 8.35039 2.8066 8.35039 2.9999C8.35039 3.1932 8.19369 3.3499 8.00039 3.3499H2.00039C1.80709 3.3499 1.65039 3.1932 1.65039 2.9999ZM1.65039 4.9999C1.65039 4.8066 1.80709 4.6499 2.00039 4.6499H8.00039C8.19369 4.6499 8.35039 4.8066 8.35039 4.9999C8.35039 5.1932 8.19369 5.3499 8.00039 5.3499H2.00039C1.80709 5.3499 1.65039 5.1932 1.65039 4.9999ZM2.00039 6.6499C1.80709 6.6499 1.65039 6.8066 1.65039 6.9999C1.65039 7.1932 1.80709 7.3499 2.00039 7.3499H8.00039C8.19369 7.3499 8.35039 7.1932 8.35039 6.9999C8.35039 6.8066 8.19369 6.6499 8.00039 6.6499H2.00039Z"></path></svg> </button> </div></div> <nav aria-label="Main" class="ml-auto hidden lg:block"><ul class="flex items-center space-x-2"><li><a class="group flex items-center px-2 py-0.5 dark:hover:text-gray-400 hover:text-indigo-700" href="/models"><svg class="mr-1.5 text-gray-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg> Models</a></li><li><a class="group flex items-center px-2 py-0.5 dark:hover:text-gray-400 hover:text-red-700" href="/datasets"><svg class="mr-1.5 text-gray-400 group-hover:text-red-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 25 25"><ellipse cx="12.5" cy="5" fill="currentColor" fill-opacity="0.25" rx="7.5" ry="2"></ellipse><path d="M12.5 15C16.6421 15 20 14.1046 20 13V20C20 21.1046 16.6421 22 12.5 22C8.35786 22 5 21.1046 5 20V13C5 14.1046 8.35786 15 12.5 15Z" fill="currentColor" opacity="0.5"></path><path d="M12.5 7C16.6421 7 20 6.10457 20 5V11.5C20 12.6046 16.6421 13.5 12.5 13.5C8.35786 13.5 5 12.6046 5 11.5V5C5 6.10457 8.35786 7 12.5 7Z" fill="currentColor" opacity="0.5"></path><path d="M5.23628 12C5.08204 12.1598 5 12.8273 5 13C5 14.1046 8.35786 15 12.5 
15C16.6421 15 20 14.1046 20 13C20 12.8273 19.918 12.1598 19.7637 12C18.9311 12.8626 15.9947 13.5 12.5 13.5C9.0053 13.5 6.06886 12.8626 5.23628 12Z" fill="currentColor"></path></svg> Datasets</a></li><li><a class="group flex items-center px-2 py-0.5 dark:hover:text-gray-400 hover:text-blue-700" href="/spaces"><svg class="mr-1.5 text-gray-400 group-hover:text-blue-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" viewBox="0 0 25 25"><path opacity=".5" d="M6.016 14.674v4.31h4.31v-4.31h-4.31ZM14.674 14.674v4.31h4.31v-4.31h-4.31ZM6.016 6.016v4.31h4.31v-4.31h-4.31Z" fill="currentColor"></path><path opacity=".75" fill-rule="evenodd" clip-rule="evenodd" d="M3 4.914C3 3.857 3.857 3 4.914 3h6.514c.884 0 1.628.6 1.848 1.414a5.171 5.171 0 0 1 7.31 7.31c.815.22 1.414.964 1.414 1.848v6.514A1.914 1.914 0 0 1 20.086 22H4.914A1.914 1.914 0 0 1 3 20.086V4.914Zm3.016 1.102v4.31h4.31v-4.31h-4.31Zm0 12.968v-4.31h4.31v4.31h-4.31Zm8.658 0v-4.31h4.31v4.31h-4.31Zm0-10.813a2.155 2.155 0 1 1 4.31 0 2.155 2.155 0 0 1-4.31 0Z" fill="currentColor"></path><path opacity=".25" d="M16.829 6.016a2.155 2.155 0 1 0 0 4.31 2.155 2.155 0 0 0 0-4.31Z" fill="currentColor"></path></svg> Spaces</a></li><li><a class="group flex items-center px-2 py-0.5 dark:hover:text-gray-400 hover:text-yellow-700" href="/docs"><svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" class="mr-1.5 text-gray-400 group-hover:text-yellow-500" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path opacity="0.5" d="M20.9022 5.10334L10.8012 10.8791L7.76318 9.11193C8.07741 8.56791 8.5256 8.11332 9.06512 7.7914L15.9336 3.73907C17.0868 3.08811 18.5002 3.26422 19.6534 3.91519L19.3859 3.73911C19.9253 4.06087 20.5879 4.56025 20.9022 5.10334Z" fill="currentColor"></path><path d="M10.7999 10.8792V28.5483C10.2136 28.5475 9.63494 28.4139 9.10745 28.1578C8.5429 27.8312 8.074 27.3621 7.74761 26.7975C7.42122 26.2327 7.24878 25.5923 7.24756 24.9402V10.9908C7.25062 10.3319 7.42358 9.68487 7.74973 9.1123L10.7999 10.8792Z" fill="currentColor" fill-opacity="0.75"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M21.3368 10.8499V6.918C21.3331 6.25959 21.16 5.61234 20.8346 5.03949L10.7971 10.8727L10.8046 10.874L21.3368 10.8499Z" fill="currentColor"></path><path opacity="0.5" d="M21.7937 10.8488L10.7825 10.8741V28.5486L21.7937 28.5234C23.3344 28.5234 24.5835 27.2743 24.5835 25.7335V13.6387C24.5835 12.0979 23.4365 11.1233 21.7937 10.8488Z" fill="currentColor"></path></svg> Docs</a></li> <li><div class="relative "><button class="px-2 py-0.5 group hover:text-green-700 dark:hover:text-gray-400 flex items-center " type="button"><svg class="mr-1.5 text-gray-400 group-hover:text-green-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-tertiary" d="M19 6H5a3 3 0 0 0-3 3v2.72L8.837 14h6.326L22 11.72V9a3 3 0 0 0-3-3z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M10 6V5h4v1h2V5a2.002 2.002 0 0 0-2-2h-4a2.002 2.002 0 0 0-2 2v1h2zm-1.163 8L2 11.72V18a3.003 3.003 0 0 0 3 3h14a3.003 3.003 0 0 0 3-3v-6.28L15.163 14H8.837z" fill="currentColor"></path></svg> Solutions </button> </div></li> <li><a class="group flex items-center px-2 py-0.5 hover:text-gray-500 dark:hover:text-gray-400" 
href="/pricing">Pricing</a></li> <li><div class="relative group"><button class="px-2 py-0.5 hover:text-gray-500 dark:hover:text-gray-600 flex items-center " type="button"><svg class="mr-1.5 text-gray-500 w-5 group-hover:text-gray-400 dark:text-gray-300 dark:group-hover:text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" viewBox="0 0 32 18" preserveAspectRatio="xMidYMid meet"><path fill-rule="evenodd" clip-rule="evenodd" d="M14.4504 3.30221C14.4504 2.836 14.8284 2.45807 15.2946 2.45807H28.4933C28.9595 2.45807 29.3374 2.836 29.3374 3.30221C29.3374 3.76842 28.9595 4.14635 28.4933 4.14635H15.2946C14.8284 4.14635 14.4504 3.76842 14.4504 3.30221Z" fill="currentColor"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M14.4504 9.00002C14.4504 8.53382 14.8284 8.15588 15.2946 8.15588H28.4933C28.9595 8.15588 29.3374 8.53382 29.3374 9.00002C29.3374 9.46623 28.9595 9.84417 28.4933 9.84417H15.2946C14.8284 9.84417 14.4504 9.46623 14.4504 9.00002Z" fill="currentColor"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M14.4504 14.6978C14.4504 14.2316 14.8284 13.8537 15.2946 13.8537H28.4933C28.9595 13.8537 29.3374 14.2316 29.3374 14.6978C29.3374 15.164 28.9595 15.542 28.4933 15.542H15.2946C14.8284 15.542 14.4504 15.164 14.4504 14.6978Z" fill="currentColor"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M1.94549 6.87377C2.27514 6.54411 2.80962 6.54411 3.13928 6.87377L6.23458 9.96907L9.32988 6.87377C9.65954 6.54411 10.194 6.54411 10.5237 6.87377C10.8533 7.20343 10.8533 7.73791 10.5237 8.06756L6.23458 12.3567L1.94549 8.06756C1.61583 7.73791 1.61583 7.20343 1.94549 6.87377Z" fill="currentColor"></path></svg> </button> </div></li> <li><hr class="h-5 w-0.5 border-none bg-gray-100 dark:bg-gray-800"></li> <li><a class="block cursor-pointer px-2 py-0.5 hover:text-gray-500 dark:hover:text-gray-400" href="/login">Log In</a></li> <li><a class="rounded-full border border-transparent bg-gray-900 px-3 py-1 leading-none text-white hover:border-black hover:bg-white hover:text-black" href="/join">Sign Up</a></li></ul></nav></div></header></div> <div class="SVELTE_HYDRATER contents" data-props="{}" data-target="GoogleAnalyticsTracker"></div> <div class="SVELTE_HYDRATER contents" data-props="{}" data-target="SSOBanner"></div> <main class="flex flex-1 flex-col"><div class="relative lg:flex"><div class="sticky top-0 z-20 self-start"><div class="SVELTE_HYDRATER contents" data-props="{&quot;chapters&quot;:[{&quot;title&quot;:&quot;Get started&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;🤗 Transformers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;index&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/index&quot;},{&quot;title&quot;:&quot;Quick tour&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;quicktour&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/quicktour&quot;},{&quot;title&quot;:&quot;Installation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;installation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/installation&quot;}]},{&quot;title&quot;:&quot;Tutorials&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Run inference with pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pipeline_tutorial&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pipeline_tutorial&quot;},{&quot;title&quot;:&quot;Write portable code with 
AutoClass&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;autoclass_tutorial&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/autoclass_tutorial&quot;},{&quot;title&quot;:&quot;Preprocess data&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;preprocessing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/preprocessing&quot;},{&quot;title&quot;:&quot;Fine-tune a pretrained model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;training&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/training&quot;},{&quot;title&quot;:&quot;Train with a script&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;run_scripts&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/run_scripts&quot;},{&quot;title&quot;:&quot;Set up distributed training with 🤗 Accelerate&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;accelerate&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/accelerate&quot;},{&quot;title&quot;:&quot;Load and train adapters with 🤗 PEFT&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;peft&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/peft&quot;},{&quot;title&quot;:&quot;Share your model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_sharing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_sharing&quot;},{&quot;title&quot;:&quot;Agents&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers_agents&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/transformers_agents&quot;},{&quot;title&quot;:&quot;Generation with LLMs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;llm_tutorial&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/llm_tutorial&quot;}]},{&quot;title&quot;:&quot;Task Guides&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Natural Language Processing&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Text classification&quot;,&quot;id&quot;:&quot;tasks/sequence_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/sequence_classification&quot;},{&quot;title&quot;:&quot;Token classification&quot;,&quot;id&quot;:&quot;tasks/token_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/token_classification&quot;},{&quot;title&quot;:&quot;Question answering&quot;,&quot;id&quot;:&quot;tasks/question_answering&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/question_answering&quot;},{&quot;title&quot;:&quot;Causal language modeling&quot;,&quot;id&quot;:&quot;tasks/language_modeling&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/language_modeling&quot;},{&quot;title&quot;:&quot;Masked language modeling&quot;,&quot;id&quot;:&quot;tasks/masked_language_modeling&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/masked_language_modeling&quot;},{&quot;title&quot;:&quot;Translation&quot;,&quot;id&quot;:&quot;tasks/translation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/translation&quot;},{&quot;title&quot;:&quot;Summarization&quot;,&quot;id&quot;:&quot;tasks/summarization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/summarization&quot;},{&quot;title&quot;:&quot;Multiple choice&quot;,&quot;id&quot;:&quot;tasks/multiple_choice&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/multiple_choice&quot;}]},{&quot;title&quot;:&quot;Audio&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio 
classification&quot;,&quot;id&quot;:&quot;tasks/audio_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/audio_classification&quot;},{&quot;title&quot;:&quot;Automatic speech recognition&quot;,&quot;id&quot;:&quot;tasks/asr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/asr&quot;}]},{&quot;title&quot;:&quot;Computer Vision&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Image classification&quot;,&quot;id&quot;:&quot;tasks/image_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/image_classification&quot;},{&quot;title&quot;:&quot;Semantic segmentation&quot;,&quot;id&quot;:&quot;tasks/semantic_segmentation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/semantic_segmentation&quot;},{&quot;title&quot;:&quot;Video classification&quot;,&quot;id&quot;:&quot;tasks/video_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/video_classification&quot;},{&quot;title&quot;:&quot;Object detection&quot;,&quot;id&quot;:&quot;tasks/object_detection&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/object_detection&quot;},{&quot;title&quot;:&quot;Zero-shot object detection&quot;,&quot;id&quot;:&quot;tasks/zero_shot_object_detection&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/zero_shot_object_detection&quot;},{&quot;title&quot;:&quot;Zero-shot image classification&quot;,&quot;id&quot;:&quot;tasks/zero_shot_image_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/zero_shot_image_classification&quot;},{&quot;title&quot;:&quot;Depth estimation&quot;,&quot;id&quot;:&quot;tasks/monocular_depth_estimation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/monocular_depth_estimation&quot;}]},{&quot;title&quot;:&quot;Multimodal&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Image captioning&quot;,&quot;id&quot;:&quot;tasks/image_captioning&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/image_captioning&quot;},{&quot;title&quot;:&quot;Document Question Answering&quot;,&quot;id&quot;:&quot;tasks/document_question_answering&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/document_question_answering&quot;},{&quot;title&quot;:&quot;Visual Question Answering&quot;,&quot;id&quot;:&quot;tasks/visual_question_answering&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/visual_question_answering&quot;},{&quot;title&quot;:&quot;Text to speech&quot;,&quot;id&quot;:&quot;tasks/text-to-speech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/text-to-speech&quot;}]},{&quot;title&quot;:&quot;Generation&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Customize the generation strategy&quot;,&quot;id&quot;:&quot;generation_strategies&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/generation_strategies&quot;}]},{&quot;title&quot;:&quot;Prompting&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Image tasks with IDEFICS&quot;,&quot;id&quot;:&quot;tasks/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/idefics&quot;}]}]},{&quot;title&quot;:&quot;Developer guides&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Use fast tokenizers from 🤗 
Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;fast_tokenizers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/fast_tokenizers&quot;},{&quot;title&quot;:&quot;Run inference with multilingual models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;multilingual&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/multilingual&quot;},{&quot;title&quot;:&quot;Use model-specific APIs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;create_a_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/create_a_model&quot;},{&quot;title&quot;:&quot;Share a custom model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;custom_models&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/custom_models&quot;},{&quot;title&quot;:&quot;Templates for chat models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;chat_templating&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/chat_templating&quot;},{&quot;title&quot;:&quot;Run training on Amazon SageMaker&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;sagemaker&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/sagemaker&quot;},{&quot;title&quot;:&quot;Export to ONNX&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;serialization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/serialization&quot;},{&quot;title&quot;:&quot;Export to TFLite&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tflite&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tflite&quot;},{&quot;title&quot;:&quot;Export to TorchScript&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;torchscript&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/torchscript&quot;},{&quot;title&quot;:&quot;Benchmarks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;benchmarks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/benchmarks&quot;},{&quot;title&quot;:&quot;Notebooks with examples&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;notebooks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/notebooks&quot;},{&quot;title&quot;:&quot;Community resources&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;community&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/community&quot;},{&quot;title&quot;:&quot;Custom Tools and Prompts&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;custom_tools&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/custom_tools&quot;},{&quot;title&quot;:&quot;Troubleshoot&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;troubleshooting&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/troubleshooting&quot;}]},{&quot;title&quot;:&quot;Performance and scalability&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Overview&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;performance&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/performance&quot;},{&quot;title&quot;:&quot;Efficient training techniques&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Methods and tools for efficient training on a single GPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_gpu_one&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_gpu_one&quot;},{&quot;title&quot;:&quot;Multiple GPUs and parallelism&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_gpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_gpu_many&quot;},{&quot;title&quot;:&quot;Efficient 
training on CPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_cpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_cpu&quot;},{&quot;title&quot;:&quot;Distributed CPU training&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_cpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_cpu_many&quot;},{&quot;title&quot;:&quot;Training on TPUs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_tpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_tpu&quot;},{&quot;title&quot;:&quot;Training on TPU with TensorFlow&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_tpu_tf&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_tpu_tf&quot;},{&quot;title&quot;:&quot;Training on Specialized Hardware&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_special&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_special&quot;},{&quot;title&quot;:&quot;Custom hardware for training&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_hardware&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_hardware&quot;},{&quot;title&quot;:&quot;Hyperparameter Search using Trainer API&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;hpo_train&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/hpo_train&quot;}]},{&quot;title&quot;:&quot;Optimizing inference&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Inference on CPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_cpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_cpu&quot;},{&quot;title&quot;:&quot;Inference on one GPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_gpu_one&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_gpu_one&quot;},{&quot;title&quot;:&quot;Inference on many GPUs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_gpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_gpu_many&quot;},{&quot;title&quot;:&quot;Inference on Specialized Hardware&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_special&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_special&quot;}]},{&quot;title&quot;:&quot;Instantiating a big model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;big_models&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/big_models&quot;},{&quot;title&quot;:&quot;Troubleshooting&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;debugging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/debugging&quot;},{&quot;title&quot;:&quot;XLA Integration for TensorFlow Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tf_xla&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tf_xla&quot;},{&quot;title&quot;:&quot;Optimize inference using `torch.compile()`&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_torch_compile&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_torch_compile&quot;}]},{&quot;title&quot;:&quot;Contribute&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;How to contribute to transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;contributing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/contributing&quot;},{&quot;title&quot;:&quot;How to add a model to 🤗 
Transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_new_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_new_model&quot;},{&quot;title&quot;:&quot;How to convert a 🤗 Transformers model to TensorFlow?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_tensorflow_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_tensorflow_model&quot;},{&quot;title&quot;:&quot;How to add a pipeline to 🤗 Transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_new_pipeline&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_new_pipeline&quot;},{&quot;title&quot;:&quot;Testing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;testing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/testing&quot;},{&quot;title&quot;:&quot;Checks on a Pull Request&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pr_checks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pr_checks&quot;}]},{&quot;title&quot;:&quot;Conceptual guides&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Philosophy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;philosophy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/philosophy&quot;},{&quot;title&quot;:&quot;Glossary&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;glossary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/glossary&quot;},{&quot;title&quot;:&quot;What 🤗 Transformers can do&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;task_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/task_summary&quot;},{&quot;title&quot;:&quot;How 🤗 Transformers solve tasks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tasks_explained&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks_explained&quot;},{&quot;title&quot;:&quot;The Transformer model family&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_summary&quot;},{&quot;title&quot;:&quot;Summary of the tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tokenizer_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tokenizer_summary&quot;},{&quot;title&quot;:&quot;Attention mechanisms&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;attention&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/attention&quot;},{&quot;title&quot;:&quot;Padding and truncation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pad_truncation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pad_truncation&quot;},{&quot;title&quot;:&quot;BERTology&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;bertology&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/bertology&quot;},{&quot;title&quot;:&quot;Perplexity of fixed-length models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perplexity&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perplexity&quot;},{&quot;title&quot;:&quot;Pipelines for webserver inference&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pipeline_webserver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pipeline_webserver&quot;},{&quot;title&quot;:&quot;Model training anatomy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_memory_anatomy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_memory_anatomy&quot;}]},{&quot;title&quot;:&quot;API&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Main 
Classes&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Agents and Tools&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/agent&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/agent&quot;},{&quot;title&quot;:&quot;Auto Classes&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/auto&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/auto&quot;},{&quot;title&quot;:&quot;Callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/callback&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/callback&quot;},{&quot;title&quot;:&quot;Configuration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/configuration&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/configuration&quot;},{&quot;title&quot;:&quot;Data Collator&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/data_collator&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/data_collator&quot;},{&quot;title&quot;:&quot;Keras callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/keras_callbacks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/keras_callbacks&quot;},{&quot;title&quot;:&quot;Logging&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/logging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/logging&quot;},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/model&quot;},{&quot;title&quot;:&quot;Text Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/text_generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/text_generation&quot;},{&quot;title&quot;:&quot;ONNX&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/onnx&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/onnx&quot;},{&quot;title&quot;:&quot;Optimization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/optimizer_schedules&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/optimizer_schedules&quot;},{&quot;title&quot;:&quot;Model outputs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/output&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/output&quot;},{&quot;title&quot;:&quot;Pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/pipelines&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/pipelines&quot;},{&quot;title&quot;:&quot;Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/processors&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/processors&quot;},{&quot;title&quot;:&quot;Quantization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/quantization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/quantization&quot;},{&quot;title&quot;:&quot;Tokenizer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/tokenizer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/tokenizer&quot;},{&quot;title&quot;:&quot;Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/trainer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/trainer&quot;},{&quot;title&quot;:&quot;DeepSpeed 
Integration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/deepspeed&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/deepspeed&quot;},{&quot;title&quot;:&quot;Feature Extractor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/feature_extractor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/feature_extractor&quot;},{&quot;title&quot;:&quot;Image Processor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/image_processor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/image_processor&quot;}]},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Text models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALBERT&quot;,&quot;id&quot;:&quot;model_doc/albert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/albert&quot;},{&quot;title&quot;:&quot;BART&quot;,&quot;id&quot;:&quot;model_doc/bart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bart&quot;},{&quot;title&quot;:&quot;BARThez&quot;,&quot;id&quot;:&quot;model_doc/barthez&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/barthez&quot;},{&quot;title&quot;:&quot;BARTpho&quot;,&quot;id&quot;:&quot;model_doc/bartpho&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bartpho&quot;},{&quot;title&quot;:&quot;BERT&quot;,&quot;id&quot;:&quot;model_doc/bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert&quot;},{&quot;title&quot;:&quot;BertGeneration&quot;,&quot;id&quot;:&quot;model_doc/bert-generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-generation&quot;},{&quot;title&quot;:&quot;BertJapanese&quot;,&quot;id&quot;:&quot;model_doc/bert-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-japanese&quot;},{&quot;title&quot;:&quot;Bertweet&quot;,&quot;id&quot;:&quot;model_doc/bertweet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bertweet&quot;},{&quot;title&quot;:&quot;BigBird&quot;,&quot;id&quot;:&quot;model_doc/big_bird&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/big_bird&quot;},{&quot;title&quot;:&quot;BigBirdPegasus&quot;,&quot;id&quot;:&quot;model_doc/bigbird_pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bigbird_pegasus&quot;},{&quot;title&quot;:&quot;BioGpt&quot;,&quot;id&quot;:&quot;model_doc/biogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/biogpt&quot;},{&quot;title&quot;:&quot;Blenderbot&quot;,&quot;id&quot;:&quot;model_doc/blenderbot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot&quot;},{&quot;title&quot;:&quot;Blenderbot 
Small&quot;,&quot;id&quot;:&quot;model_doc/blenderbot-small&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot-small&quot;},{&quot;title&quot;:&quot;BLOOM&quot;,&quot;id&quot;:&quot;model_doc/bloom&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bloom&quot;},{&quot;title&quot;:&quot;BORT&quot;,&quot;id&quot;:&quot;model_doc/bort&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bort&quot;},{&quot;title&quot;:&quot;ByT5&quot;,&quot;id&quot;:&quot;model_doc/byt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/byt5&quot;},{&quot;title&quot;:&quot;CamemBERT&quot;,&quot;id&quot;:&quot;model_doc/camembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/camembert&quot;},{&quot;title&quot;:&quot;CANINE&quot;,&quot;id&quot;:&quot;model_doc/canine&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/canine&quot;},{&quot;title&quot;:&quot;CodeGen&quot;,&quot;id&quot;:&quot;model_doc/codegen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/codegen&quot;},{&quot;title&quot;:&quot;CodeLlama&quot;,&quot;id&quot;:&quot;model_doc/code_llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/code_llama&quot;},{&quot;title&quot;:&quot;ConvBERT&quot;,&quot;id&quot;:&quot;model_doc/convbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convbert&quot;},{&quot;title&quot;:&quot;CPM&quot;,&quot;id&quot;:&quot;model_doc/cpm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpm&quot;},{&quot;title&quot;:&quot;CPMANT&quot;,&quot;id&quot;:&quot;model_doc/cpmant&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpmant&quot;},{&quot;title&quot;:&quot;CTRL&quot;,&quot;id&quot;:&quot;model_doc/ctrl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ctrl&quot;},{&quot;title&quot;:&quot;DeBERTa&quot;,&quot;id&quot;:&quot;model_doc/deberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta&quot;},{&quot;title&quot;:&quot;DeBERTa-v2&quot;,&quot;id&quot;:&quot;model_doc/deberta-v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta-v2&quot;},{&quot;title&quot;:&quot;DialoGPT&quot;,&quot;id&quot;:&quot;model_doc/dialogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dialogpt&quot;},{&quot;title&quot;:&quot;DistilBERT&quot;,&quot;id&quot;:&quot;model_doc/distilbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/distilbert&quot;},{&quot;title&quot;:&quot;DPR&quot;,&quot;id&quot;:&quot;model_doc/dpr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpr&quot;},{&quot;title&quot;:&quot;ELECTRA&quot;,&quot;id&quot;:&quot;model_doc/electra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/electra&quot;},{&quot;title&quot;:&quot;Encoder Decoder 
Models&quot;,&quot;id&quot;:&quot;model_doc/encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encoder-decoder&quot;},{&quot;title&quot;:&quot;ERNIE&quot;,&quot;id&quot;:&quot;model_doc/ernie&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie&quot;},{&quot;title&quot;:&quot;ErnieM&quot;,&quot;id&quot;:&quot;model_doc/ernie_m&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie_m&quot;},{&quot;title&quot;:&quot;ESM&quot;,&quot;id&quot;:&quot;model_doc/esm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/esm&quot;},{&quot;title&quot;:&quot;Falcon&quot;,&quot;id&quot;:&quot;model_doc/falcon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/falcon&quot;},{&quot;title&quot;:&quot;FLAN-T5&quot;,&quot;id&quot;:&quot;model_doc/flan-t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-t5&quot;},{&quot;title&quot;:&quot;FLAN-UL2&quot;,&quot;id&quot;:&quot;model_doc/flan-ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-ul2&quot;},{&quot;title&quot;:&quot;FlauBERT&quot;,&quot;id&quot;:&quot;model_doc/flaubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flaubert&quot;},{&quot;title&quot;:&quot;FNet&quot;,&quot;id&quot;:&quot;model_doc/fnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fnet&quot;},{&quot;title&quot;:&quot;FSMT&quot;,&quot;id&quot;:&quot;model_doc/fsmt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fsmt&quot;},{&quot;title&quot;:&quot;Funnel Transformer&quot;,&quot;id&quot;:&quot;model_doc/funnel&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/funnel&quot;},{&quot;title&quot;:&quot;GPT&quot;,&quot;id&quot;:&quot;model_doc/openai-gpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/openai-gpt&quot;},{&quot;title&quot;:&quot;GPT Neo&quot;,&quot;id&quot;:&quot;model_doc/gpt_neo&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neo&quot;},{&quot;title&quot;:&quot;GPT NeoX&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox&quot;},{&quot;title&quot;:&quot;GPT NeoX Japanese&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox_japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese&quot;},{&quot;title&quot;:&quot;GPT-J&quot;,&quot;id&quot;:&quot;model_doc/gptj&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptj&quot;},{&quot;title&quot;:&quot;GPT2&quot;,&quot;id&quot;:&quot;model_doc/gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt2&quot;},{&quot;title&quot;:&quot;GPTBigCode&quot;,&quot;id&quot;:&quot;model_doc/gpt_bigcode&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode&quot;},{&quot;title&quot;:&quot;GPTSAN 
Japanese&quot;,&quot;id&quot;:&quot;model_doc/gptsan-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese&quot;},{&quot;title&quot;:&quot;GPTSw3&quot;,&quot;id&quot;:&quot;model_doc/gpt-sw3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt-sw3&quot;},{&quot;title&quot;:&quot;HerBERT&quot;,&quot;id&quot;:&quot;model_doc/herbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/herbert&quot;},{&quot;title&quot;:&quot;I-BERT&quot;,&quot;id&quot;:&quot;model_doc/ibert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ibert&quot;},{&quot;title&quot;:&quot;Jukebox&quot;,&quot;id&quot;:&quot;model_doc/jukebox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/jukebox&quot;},{&quot;title&quot;:&quot;LED&quot;,&quot;id&quot;:&quot;model_doc/led&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/led&quot;},{&quot;title&quot;:&quot;LLaMA&quot;,&quot;id&quot;:&quot;model_doc/llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama&quot;},{&quot;title&quot;:&quot;Llama2&quot;,&quot;id&quot;:&quot;model_doc/llama2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama2&quot;},{&quot;title&quot;:&quot;Longformer&quot;,&quot;id&quot;:&quot;model_doc/longformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longformer&quot;},{&quot;title&quot;:&quot;LongT5&quot;,&quot;id&quot;:&quot;model_doc/longt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longt5&quot;},{&quot;title&quot;:&quot;LUKE&quot;,&quot;id&quot;:&quot;model_doc/luke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/luke&quot;},{&quot;title&quot;:&quot;M2M100&quot;,&quot;id&quot;:&quot;model_doc/m2m_100&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/m2m_100&quot;},{&quot;title&quot;:&quot;MarianMT&quot;,&quot;id&quot;:&quot;model_doc/marian&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/marian&quot;},{&quot;title&quot;:&quot;MarkupLM&quot;,&quot;id&quot;:&quot;model_doc/markuplm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/markuplm&quot;},{&quot;title&quot;:&quot;MBart and 
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot;PLBart&quot;,&quot;id&quot;
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/m
odel_doc/wavlm&quot;},{&quot;title&quot;:&quot;Whisper&quot;,&quot;id&quot;:&quot;model_doc/whisper&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/whisper&quot;},{&quot;title&quot;:&quot;XLS-R&quot;,&quot;id&quot;:&quot;model_doc/xls_r&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xls_r&quot;},{&quot;title&quot;:&quot;XLSR-Wav2Vec2&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/xlsr_wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2&quot;}]},{&quot;title&quot;:&quot;Multimodal models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALIGN&quot;,&quot;id&quot;:&quot;model_doc/align&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/align&quot;},{&quot;title&quot;:&quot;AltCLIP&quot;,&quot;id&quot;:&quot;model_doc/altclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/altclip&quot;},{&quot;title&quot;:&quot;BLIP&quot;,&quot;id&quot;:&quot;model_doc/blip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip&quot;},{&quot;title&quot;:&quot;BLIP-2&quot;,&quot;id&quot;:&quot;model_doc/blip-2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip-2&quot;},{&quot;title&quot;:&quot;BridgeTower&quot;,&quot;id&quot;:&quot;model_doc/bridgetower&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bridgetower&quot;},{&quot;title&quot;:&quot;BROS&quot;,&quot;id&quot;:&quot;model_doc/bros&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bros&quot;},{&quot;title&quot;:&quot;Chinese-CLIP&quot;,&quot;id&quot;:&quot;model_doc/chinese_clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/chinese_clip&quot;},{&quot;title&quot;:&quot;CLIP&quot;,&quot;id&quot;:&quot;model_doc/clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clip&quot;},{&quot;title&quot;:&quot;CLIPSeg&quot;,&quot;id&quot;:&quot;model_doc/clipseg&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clipseg&quot;},{&quot;title&quot;:&quot;Data2Vec&quot;,&quot;id&quot;:&quot;model_doc/data2vec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/data2vec&quot;},{&quot;title&quot;:&quot;DePlot&quot;,&quot;id&quot;:&quot;model_doc/deplot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deplot&quot;},{&quot;title&quot;:&quot;Donut&quot;,&quot;id&quot;:&quot;model_doc/donut&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/donut&quot;},{&quot;title&quot;:&quot;FLAVA&quot;,&quot;id&quot;:&quot;model_doc/flava&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flava&quot;},{&quot;title&quot;:&quot;GIT&quot;,&quot;id&quot;:&quot;model_doc/git&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/git&quot;},{&quot;title&quot;:&quot;GroupViT&quot;,&quot;id&quot;:&quot;model_doc/groupvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/groupvit&quot;},{&quot;title&quot;:&quot;IDEFICS&quot;,&quot;id&quot;:&quot;model_doc/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/idefics&quot;},{&quot;title&quot;:&quot;InstructBLIP&quot;,&quot;id&quot;:&quot;model_doc/instructblip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/instructblip&quot;},{&quot;title&quot;:&quot;LayoutLM&quot;,&quot;id&quot;:&quot;model_doc/layoutlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlm&quot;},{&quot;title&quot;:&qu
ot;LayoutLMV2&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv2&quot;},{&quot;title&quot;:&quot;LayoutLMV3&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv3&quot;},{&quot;title&quot;:&quot;LayoutXLM&quot;,&quot;id&quot;:&quot;model_doc/layoutxlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutxlm&quot;},{&quot;title&quot;:&quot;LiLT&quot;,&quot;id&quot;:&quot;model_doc/lilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lilt&quot;},{&quot;title&quot;:&quot;LXMERT&quot;,&quot;id&quot;:&quot;model_doc/lxmert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lxmert&quot;},{&quot;title&quot;:&quot;MatCha&quot;,&quot;id&quot;:&quot;model_doc/matcha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/matcha&quot;},{&quot;title&quot;:&quot;MGP-STR&quot;,&quot;id&quot;:&quot;model_doc/mgp-str&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mgp-str&quot;},{&quot;title&quot;:&quot;Nougat&quot;,&quot;id&quot;:&quot;model_doc/nougat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nougat&quot;},{&quot;title&quot;:&quot;OneFormer&quot;,&quot;id&quot;:&quot;model_doc/oneformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/oneformer&quot;},{&quot;title&quot;:&quot;OWL-ViT&quot;,&quot;id&quot;:&quot;model_doc/owlvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/owlvit&quot;},{&quot;title&quot;:&quot;Perceiver&quot;,&quot;id&quot;:&quot;model_doc/perceiver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/perceiver&quot;},{&quot;title&quot;:&quot;Pix2Struct&quot;,&quot;id&quot;:&quot;model_doc/pix2struct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pix2struct&quot;},{&quot;title&quot;:&quot;Segment Anything&quot;,&quot;id&quot;:&quot;model_doc/sam&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sam&quot;},{&quot;title&quot;:&quot;Speech Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/speech-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder&quot;},{&quot;title&quot;:&quot;TAPAS&quot;,&quot;id&quot;:&quot;model_doc/tapas&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapas&quot;},{&quot;title&quot;:&quot;TrOCR&quot;,&quot;id&quot;:&quot;model_doc/trocr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trocr&quot;},{&quot;title&quot;:&quot;TVLT&quot;,&quot;id&quot;:&quot;model_doc/tvlt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tvlt&quot;},{&quot;title&quot;:&quot;ViLT&quot;,&quot;id&quot;:&quot;model_doc/vilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vilt&quot;},{&quot;title&quot;:&quot;Vision Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/vision-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder&quot;},{&quot;title&quot;:&quot;Vision Text Dual 
Encoder&quot;,&quot;id&quot;:&quot;model_doc/vision-text-dual-encoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder&quot;},{&quot;title&quot;:&quot;VisualBERT&quot;,&quot;id&quot;:&quot;model_doc/visual_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/visual_bert&quot;},{&quot;title&quot;:&quot;X-CLIP&quot;,&quot;id&quot;:&quot;model_doc/xclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xclip&quot;}]},{&quot;title&quot;:&quot;Reinforcement learning models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Decision Transformer&quot;,&quot;id&quot;:&quot;model_doc/decision_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/decision_transformer&quot;},{&quot;title&quot;:&quot;Trajectory Transformer&quot;,&quot;id&quot;:&quot;model_doc/trajectory_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer&quot;}]},{&quot;title&quot;:&quot;Time series models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Autoformer&quot;,&quot;id&quot;:&quot;model_doc/autoformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/autoformer&quot;},{&quot;title&quot;:&quot;Informer&quot;,&quot;id&quot;:&quot;model_doc/informer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/informer&quot;},{&quot;title&quot;:&quot;Time Series Transformer&quot;,&quot;id&quot;:&quot;model_doc/time_series_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/time_series_transformer&quot;}]},{&quot;title&quot;:&quot;Graph models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Graphormer&quot;,&quot;id&quot;:&quot;model_doc/graphormer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/graphormer&quot;}]}]},{&quot;title&quot;:&quot;Internal Helpers&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Custom Layers and Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/modeling_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/modeling_utils&quot;},{&quot;title&quot;:&quot;Utilities for pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/pipelines_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/pipelines_utils&quot;},{&quot;title&quot;:&quot;Utilities for Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/tokenization_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/tokenization_utils&quot;},{&quot;title&quot;:&quot;Utilities for Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/trainer_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/trainer_utils&quot;},{&quot;title&quot;:&quot;Utilities for Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/generation_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/generation_utils&quot;},{&quot;title&quot;:&quot;Utilities for Image Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/image_processing_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/image_processing_utils&quot;},{&quot;title&quot;:&quot;Utilities for Audio 
processing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/audio_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/audio_utils&quot;},{&quot;title&quot;:&quot;General Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/file_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/file_utils&quot;},{&quot;title&quot;:&quot;Utilities for Time Series&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/time_series_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/time_series_utils&quot;}]}]}],&quot;chapterId&quot;:&quot;model_doc/xlsr_wav2vec2&quot;,&quot;docType&quot;:&quot;docs&quot;,&quot;isLoggedIn&quot;:false,&quot;lang&quot;:&quot;en&quot;,&quot;langs&quot;:[&quot;de&quot;,&quot;en&quot;,&quot;es&quot;,&quot;fr&quot;,&quot;it&quot;,&quot;ko&quot;,&quot;pt&quot;,&quot;zh&quot;],&quot;library&quot;:&quot;transformers&quot;,&quot;theme&quot;:&quot;light&quot;,&quot;version&quot;:&quot;v4.34.0&quot;,&quot;versions&quot;:[{&quot;version&quot;:&quot;main&quot;},{&quot;version&quot;:&quot;v4.34.0&quot;},{&quot;version&quot;:&quot;v4.33.3&quot;},{&quot;version&quot;:&quot;v4.33.2&quot;},{&quot;version&quot;:&quot;v4.33.0&quot;},{&quot;version&quot;:&quot;v4.32.1&quot;},{&quot;version&quot;:&quot;v4.32.0&quot;},{&quot;version&quot;:&quot;v4.31.0&quot;},{&quot;version&quot;:&quot;v4.30.0&quot;},{&quot;version&quot;:&quot;v4.29.1&quot;},{&quot;version&quot;:&quot;v4.29.0&quot;},{&quot;version&quot;:&quot;v4.28.1&quot;},{&quot;version&quot;:&quot;v4.28.0&quot;},{&quot;version&quot;:&quot;v4.27.2&quot;},{&quot;version&quot;:&quot;v4.27.1&quot;},{&quot;version&quot;:&quot;v4.27.0&quot;},{&quot;version&quot;:&quot;v4.26.1&quot;},{&quot;version&quot;:&quot;v4.26.0&quot;},{&quot;version&quot;:&quot;v4.25.1&quot;},{&quot;version&quot;:&quot;v4.24.0&quot;},{&quot;version&quot;:&quot;v4.23.1&quot;},{&quot;version&quot;:&quot;v4.23.0&quot;},{&quot;version&quot;:&quot;v4.22.2&quot;},{&quot;version&quot;:&quot;v4.22.1&quot;},{&quot;version&quot;:&quot;v4.22.0&quot;},{&quot;version&quot;:&quot;v4.21.3&quot;},{&quot;version&quot;:&quot;v4.21.2&quot;},{&quot;version&quot;:&quot;v4.21.1&quot;},{&quot;version&quot;:&quot;v4.21.0&quot;},{&quot;version&quot;:&quot;v4.20.1&quot;},{&quot;version&quot;:&quot;v4.20.0&quot;},{&quot;version&quot;:&quot;v4.19.4&quot;},{&quot;version&quot;:&quot;v4.19.3&quot;},{&quot;version&quot;:&quot;v4.19.2&quot;},{&quot;version&quot;:&quot;v4.19.0&quot;},{&quot;version&quot;:&quot;v4.18.0&quot;},{&quot;version&quot;:&quot;v4.17.0&quot;},{&quot;version&quot;:&quot;v4.16.2&quot;},{&quot;version&quot;:&quot;v4.16.1&quot;},{&quot;version&quot;:&quot;v4.16.0&quot;},{&quot;version&quot;:&quot;v4.15.0&quot;},{&quot;version&quot;:&quot;v4.14.1&quot;},{&quot;version&quot;:&quot;v4.13.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.5&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.4&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.10.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&q
uot;v4.10.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.10.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.0.0&quot;},{&quot;version&quot;:&qu
ot;doc-builder-html&quot;}],&quot;title&quot;:&quot;XLSR-Wav2Vec2&quot;}" data-target="SideMenu"> <div class="z-2 w-full flex-none lg:block lg:h-screen lg:w-[270px] 2xl:w-[300px] false"><div class="shadow-alternate flex h-16 w-full items-center rounded-b-xl border-b bg-white text-lg leading-tight lg:hidden"> <div class="flex flex-1 cursor-pointer flex-col justify-center self-stretch pl-6"><p class="text-sm text-gray-400 first-letter:capitalize">Transformers documentation </p> <div class="flex items-center"><p class="font-semibold">XLSR-Wav2Vec2</p> <svg class="text-xl false" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></div></div> <button class="hover:shadow-alternate group ml-auto mr-6 inline-flex flex-none cursor-pointer rounded-xl border p-2"><svg class="text-gray-500 group-hover:text-gray-700" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg></button></div> <div class="hidden h-32 flex-col justify-between border-r border-b bg-white bg-gradient-to-r p-4 lg:flex from-orange-50 to-white dark:from-gray-900 dark:to-gray-950"><div class="relative "> <button class=" " type="button"> <h1 class="flex items-center text-lg font-bold leading-tight first-letter:capitalize"><div class="mr-1.5 h-1.5 w-1.5 rounded-full bg-orange-500 flex-none"></div> Transformers <span><svg class="opacity-70 " xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></span></h1> </button> </div> <button class="shadow-alternate flex w-full items-center rounded-full border bg-white px-2 py-1 text-left text-sm text-gray-400 ring-indigo-200 hover:bg-indigo-50 hover:ring-2 dark:border-gray-700 dark:ring-yellow-600 dark:hover:bg-gray-900 dark:hover:text-yellow-500"><svg class="flex-none mr-1.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg> <div>Search documentation</div> </button> <div class="flex items-center"> <select class="form-input mr-1 !mt-0 !w-20 rounded !border border-gray-200 p-1 text-xs uppercase dark:!text-gray-400"><option value="0">main</option><option value="1" selected="">v4.34.0</option><option value="2">v4.33.3</option><option value="3">v4.32.1</option><option value="4">v4.31.0</option><option value="5">v4.30.0</option><option value="6">v4.29.1</option><option value="7">v4.28.1</option><option value="8">v4.27.2</option><option value="9">v4.26.1</option><option value="10">v4.25.1</option><option value="11">v4.24.0</option><option value="12">v4.23.1</option><option value="13">v4.22.2</option><option value="14">v4.21.3</option><option 
value="15">v4.20.1</option><option value="16">v4.19.4</option><option value="17">v4.18.0</option><option value="18">v4.17.0</option><option value="19">v4.16.2</option><option value="20">v4.15.0</option><option value="21">v4.14.1</option><option value="22">v4.13.0</option><option value="23">v4.12.5</option><option value="24">v4.11.3</option><option value="25">v4.10.1</option><option value="26">v4.9.2</option><option value="27">v4.8.2</option><option value="28">v4.7.0</option><option value="29">v4.6.0</option><option value="30">v4.5.1</option><option value="31">v4.4.2</option><option value="32">v4.3.3</option><option value="33">v4.2.2</option><option value="34">v4.1.1</option><option value="35">v4.0.1</option><option value="36">v3.5.1</option><option value="37">v3.4.0</option><option value="38">v3.3.1</option><option value="39">v3.2.0</option><option value="40">v3.1.0</option><option value="41">v3.0.2</option><option value="42">v2.11.0</option><option value="43">v2.10.0</option><option value="44">v2.9.1</option><option value="45">v2.8.0</option><option value="46">v2.7.0</option><option value="47">v2.6.0</option><option value="48">v2.5.1</option><option value="49">v2.4.1</option><option value="50">v2.3.0</option><option value="51">v2.2.2</option><option value="52">v2.1.1</option><option value="53">v2.0.0</option><option value="54">v1.2.0</option><option value="55">v1.1.0</option><option value="56">v1.0.0</option><option value="57">doc-builder-html</option></select> <select class="form-input mr-1 rounded border-gray-200 p-1 text-xs dark:!text-gray-400 !w-12 !mt-0 !border"><option value="de">DE</option><option value="en" selected="">EN</option><option value="es">ES</option><option value="fr">FR</option><option value="it">IT</option><option value="ko">KO</option><option value="pt">PT</option><option value="zh">ZH</option></select> <div class="relative inline-block"> <button class="rounded-full border border-gray-100 py-1 pl-2 pr-0.5 flex items-center text-sm text-gray-500 bg-white hover:bg-yellow-50 hover:border-yellow-200 dark:hover:bg-gray-800 dark:hover:border-gray-950 " type="button"> <svg class="mr-1.5 text-yellow-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24" fill="currentColor"><path d="M6.05 4.14l-.39-.39a.993.993 0 0 0-1.4 0l-.01.01a.984.984 0 0 0 0 1.4l.39.39c.39.39 1.01.39 1.4 0l.01-.01a.984.984 0 0 0 0-1.4zM3.01 10.5H1.99c-.55 0-.99.44-.99.99v.01c0 .55.44.99.99.99H3c.56.01 1-.43 1-.98v-.01c0-.56-.44-1-.99-1zm9-9.95H12c-.56 0-1 .44-1 .99v.96c0 .55.44.99.99.99H12c.56.01 1-.43 1-.98v-.97c0-.55-.44-.99-.99-.99zm7.74 3.21c-.39-.39-1.02-.39-1.41-.01l-.39.39a.984.984 0 0 0 0 1.4l.01.01c.39.39 1.02.39 1.4 0l.39-.39a.984.984 0 0 0 0-1.4zm-1.81 15.1l.39.39a.996.996 0 1 0 1.41-1.41l-.39-.39a.993.993 0 0 0-1.4 0c-.4.4-.4 1.02-.01 1.41zM20 11.49v.01c0 .55.44.99.99.99H22c.55 0 .99-.44.99-.99v-.01c0-.55-.44-.99-.99-.99h-1.01c-.55 0-.99.44-.99.99zM12 5.5c-3.31 0-6 2.69-6 6s2.69 6 6 6s6-2.69 6-6s-2.69-6-6-6zm-.01 16.95H12c.55 0 .99-.44.99-.99v-.96c0-.55-.44-.99-.99-.99h-.01c-.55 0-.99.44-.99.99v.96c0 .55.44.99.99.99zm-7.74-3.21c.39.39 1.02.39 1.41 0l.39-.39a.993.993 0 0 0 0-1.4l-.01-.01a.996.996 0 0 0-1.41 0l-.39.39c-.38.4-.38 1.02.01 1.41z"></path></svg> </button> </div> <a href="https://github.com/huggingface/transformers" class="group ml-auto text-xs text-gray-500 hover:text-gray-700 hover:underline dark:hover:text-gray-300"><svg 
class="inline-block text-gray-500 group-hover:text-gray-700 dark:group-hover:text-gray-300 mr-1.5 -mt-1 w-4 h-4" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1.03em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 250"><path d="M128.001 0C57.317 0 0 57.307 0 128.001c0 56.554 36.676 104.535 87.535 121.46c6.397 1.185 8.746-2.777 8.746-6.158c0-3.052-.12-13.135-.174-23.83c-35.61 7.742-43.124-15.103-43.124-15.103c-5.823-14.795-14.213-18.73-14.213-18.73c-11.613-7.944.876-7.78.876-7.78c12.853.902 19.621 13.19 19.621 13.19c11.417 19.568 29.945 13.911 37.249 10.64c1.149-8.272 4.466-13.92 8.127-17.116c-28.431-3.236-58.318-14.212-58.318-63.258c0-13.975 5-25.394 13.188-34.358c-1.329-3.224-5.71-16.242 1.24-33.874c0 0 10.749-3.44 35.21 13.121c10.21-2.836 21.16-4.258 32.038-4.307c10.878.049 21.837 1.47 32.066 4.307c24.431-16.56 35.165-13.12 35.165-13.12c6.967 17.63 2.584 30.65 1.255 33.873c8.207 8.964 13.173 20.383 13.173 34.358c0 49.163-29.944 59.988-58.447 63.157c4.591 3.972 8.682 11.762 8.682 23.704c0 17.126-.148 30.91-.148 35.126c0 3.407 2.304 7.398 8.792 6.14C219.37 232.5 256 184.537 256 128.002C256 57.307 198.691 0 128.001 0zm-80.06 182.34c-.282.636-1.283.827-2.194.39c-.929-.417-1.45-1.284-1.15-1.922c.276-.655 1.279-.838 2.205-.399c.93.418 1.46 1.293 1.139 1.931zm6.296 5.618c-.61.566-1.804.303-2.614-.591c-.837-.892-.994-2.086-.375-2.66c.63-.566 1.787-.301 2.626.591c.838.903 1 2.088.363 2.66zm4.32 7.188c-.785.545-2.067.034-2.86-1.104c-.784-1.138-.784-2.503.017-3.05c.795-.547 2.058-.055 2.861 1.075c.782 1.157.782 2.522-.019 3.08zm7.304 8.325c-.701.774-2.196.566-3.29-.49c-1.119-1.032-1.43-2.496-.726-3.27c.71-.776 2.213-.558 3.315.49c1.11 1.03 1.45 2.505.701 3.27zm9.442 2.81c-.31 1.003-1.75 1.459-3.199 1.033c-1.448-.439-2.395-1.613-2.103-2.626c.301-1.01 1.747-1.484 3.207-1.028c1.446.436 2.396 1.602 2.095 2.622zm10.744 1.193c.036 1.055-1.193 1.93-2.715 1.95c-1.53.034-2.769-.82-2.786-1.86c0-1.065 1.202-1.932 2.733-1.958c1.522-.03 2.768.818 2.768 1.868zm10.555-.405c.182 1.03-.875 2.088-2.387 2.37c-1.485.271-2.861-.365-3.05-1.386c-.184-1.056.893-2.114 2.376-2.387c1.514-.263 2.868.356 3.061 1.403z" fill="currentColor"></path></svg> </a></div></div> <nav class="top-32 hidden lg:flex absolute bottom-0 left-0 w-full flex-col overflow-y-auto border-r px-4 pt-3 pb-16 text-[0.95rem] lg:w-[270px] 2xl:w-[300px]"> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Get started<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/index"><!-- HTML_TAG_START -->🤗 Transformers<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/quicktour"><!-- HTML_TAG_START -->Quick tour<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px 
hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/installation"><!-- HTML_TAG_START -->Installation<!-- HTML_TAG_END --> </a> </div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Tutorials<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pipeline_tutorial"><!-- HTML_TAG_START -->Run inference with pipelines<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/autoclass_tutorial"><!-- HTML_TAG_START -->Write portable code with AutoClass<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/preprocessing"><!-- HTML_TAG_START -->Preprocess data<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/training"><!-- HTML_TAG_START -->Fine-tune a pretrained model<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/run_scripts"><!-- HTML_TAG_START -->Train with a script<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/accelerate"><!-- HTML_TAG_START -->Set up distributed training with 🤗 Accelerate<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/peft"><!-- HTML_TAG_START -->Load and train adapters with 🤗 PEFT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/model_sharing"><!-- HTML_TAG_START -->Share your model<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/transformers_agents"><!-- HTML_TAG_START -->Agents<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/llm_tutorial"><!-- HTML_TAG_START -->Generation with LLMs<!-- HTML_TAG_END --> </a> </div> <div class="group flex cursor-pointer 
items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Task Guides<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="flex flex-col"> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Natural Language Processing<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Audio<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Computer Vision<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Multimodal<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Generation<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Prompting<!-- HTML_TAG_END --></span> </span></span> </div></div> </div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Developer guides<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" 
href="/docs/transformers/v4.34.0/en/fast_tokenizers"><!-- HTML_TAG_START -->Use fast tokenizers from 🤗 Tokenizers<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/multilingual"><!-- HTML_TAG_START -->Run inference with multilingual models<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/create_a_model"><!-- HTML_TAG_START -->Use model-specific APIs<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/custom_models"><!-- HTML_TAG_START -->Share a custom model<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/chat_templating"><!-- HTML_TAG_START -->Templates for chat models<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/sagemaker"><!-- HTML_TAG_START -->Run training on Amazon SageMaker<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/serialization"><!-- HTML_TAG_START -->Export to ONNX<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tflite"><!-- HTML_TAG_START -->Export to TFLite<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/torchscript"><!-- HTML_TAG_START -->Export to TorchScript<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/benchmarks"><!-- HTML_TAG_START -->Benchmarks<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/notebooks"><!-- HTML_TAG_START -->Notebooks with examples<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/community"><!-- HTML_TAG_START -->Community resources<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/custom_tools"><!-- HTML_TAG_START -->Custom Tools and Prompts<!-- HTML_TAG_END --> </a><a 
data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/troubleshooting"><!-- HTML_TAG_START -->Troubleshoot<!-- HTML_TAG_END --> </a> </div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Performance and scalability<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/performance"><!-- HTML_TAG_START -->Overview<!-- HTML_TAG_END --> </a> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Efficient training techniques<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_gpu_one"><!-- HTML_TAG_START -->Methods and tools for efficient training on a single GPU<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_gpu_many"><!-- HTML_TAG_START -->Multiple GPUs and parallelism<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_cpu"><!-- HTML_TAG_START -->Efficient training on CPU<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_cpu_many"><!-- HTML_TAG_START -->Distributed CPU training<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_tpu"><!-- HTML_TAG_START -->Training on TPUs<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_tpu_tf"><!-- HTML_TAG_START -->Training on TPU with TensorFlow<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_special"><!-- HTML_TAG_START -->Training 
on Specialized Hardware<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_hardware"><!-- HTML_TAG_START -->Custom hardware for training<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/hpo_train"><!-- HTML_TAG_START -->Hyperparameter Search using Trainer API<!-- HTML_TAG_END --> </a> </div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Optimizing inference<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_cpu"><!-- HTML_TAG_START -->Inference on CPU<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_gpu_one"><!-- HTML_TAG_START -->Inference on one GPU<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_gpu_many"><!-- HTML_TAG_START -->Inference on many GPUs<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_special"><!-- HTML_TAG_START -->Inference on Specialized Hardware<!-- HTML_TAG_END --> </a> </div><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/big_models"><!-- HTML_TAG_START -->Instantiating a big model<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/debugging"><!-- HTML_TAG_START -->Troubleshooting<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tf_xla"><!-- HTML_TAG_START -->XLA Integration for TensorFlow Models<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/perf_torch_compile"><!-- HTML_TAG_START -->Optimize inference using `torch.compile()`<!-- HTML_TAG_END --> </a> </div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold 
content="{&quot;local&quot;:&quot;xlsrwav2vec2&quot;,&quot;sections&quot;:[{&quot;local&quot;:&quot;overview&quot;,&quot;title&quot;:&quot;Overview&quot;}],&quot;title&quot;:&quot;XLSR-Wav2Vec2&quot;}"><!-- HEAD_svelte-1phssyn_END --> <p></p> <h1 class="relative group"><a id="xlsrwav2vec2" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#xlsrwav2vec2"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1uh77ab">XLSR-Wav2Vec2</span></h1> <h2 class="relative group"><a id="overview" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#overview"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1jsw1pg">Overview</span></h2> <p data-svelte-h="svelte-nw8zv9">The XLSR-Wav2Vec2 model was proposed in <a href="https://arxiv.org/abs/2006.13979" rel="nofollow">Unsupervised Cross-Lingual Representation Learning For Speech Recognition</a> by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli.</p> <p data-svelte-h="svelte-vfdo9a">The abstract from the paper is the following:</p> <p data-svelte-h="svelte-8kgyjh"><em>This paper presents XLSR which learns cross-lingual speech representations by pretraining a single model from the raw waveform of speech in multiple languages. We build on wav2vec 2.0 which is trained by solving a contrastive task over masked latent speech representations and jointly learns a quantization of the latents shared across languages. The resulting model is fine-tuned on labeled data and experiments show that cross-lingual pretraining significantly outperforms monolingual pretraining. On the CommonVoice benchmark, XLSR shows a relative phoneme error rate reduction of 72% compared to the best known results. On BABEL, our approach improves word error rate by 16% relative compared to a comparable system. 
Our approach enables a single multilingual speech recognition model which is competitive to strong individual models. Analysis shows that the latent discrete speech representations are shared across languages with increased sharing for related languages. We hope to catalyze research in low-resource speech understanding by releasing XLSR-53, a large model pretrained in 53 languages.</em></p> <p data-svelte-h="svelte-axv494">Tips:</p> <ul data-svelte-h="svelte-864s4o"><li>XLSR-Wav2Vec2 is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.</li> <li>XLSR-Wav2Vec2 model was trained using connectionist temporal classification (CTC) so the model output has to be decoded using <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2CTCTokenizer">Wav2Vec2CTCTokenizer</a>.</li></ul> <p data-svelte-h="svelte-1fm5txi">XLSR-Wav2Vec2’s architecture is based on the Wav2Vec2 model, so one can refer to <a href="wav2vec2">Wav2Vec2’s documentation page</a>.</p> <p data-svelte-h="svelte-12gzw10">The original code can be found <a href="https://github.com/pytorch/fairseq/tree/master/fairseq/models/wav2vec" rel="nofollow">here</a>.</p> <p></p> <script> { __sveltekit_1yybmhh = { assets: "/docs/transformers/v4.34.0/en", base: "/docs/transformers/v4.34.0/en", env: {} }; const element = document.currentScript.parentElement; const data = [null,null]; Promise.all([ import("/docs/transformers/v4.34.0/en/_app/immutable/entry/start.c2db227a.js"), import("/docs/transformers/v4.34.0/en/_app/immutable/entry/app.879d9b87.js") ]).then(([kit, app]) => { kit.start(app, element, { node_ids: [0, 284], data, form: null, error: null }); }); } </script> <!-- HTML_TAG_END --></div> <div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/xls_r" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>XLS-R</a> <a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/align" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">ALIGN<span class="ml-2 translate-y-px">→</span></a></div></div></div> <div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" data-props="{&quot;chapter&quot;:{&quot;title&quot;:&quot;XLSR-Wav2Vec2&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;xlsrwav2vec2&quot;,&quot;url&quot;:&quot;#xlsrwav2vec2&quot;,&quot;sections&quot;:[{&quot;title&quot;:&quot;Overview&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;overview&quot;,&quot;url&quot;:&quot;#overview&quot;}]}}" data-target="SubSideMenu"><nav class="hidden h-screen w-[270px] flex-none flex-col space-y-3 overflow-y-auto break-words border-l pt-24 pl-6 pr-10 pb-16 text-sm lg:flex 2xl:w-[305px]"><a href="#xlsrwav2vec2" class=" text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-xlsrwav2vec2">XLS<wbr>R-<wbr>Wav2<wbr>Vec2</a> <a href="#overview" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-overview"><wbr>Overview</a> </nav></div></div></div> <div id="doc-footer"></div></main> </div> <script> import("/front/build/kube-5e23f38/index.js"); window.moonSha = "kube-5e23f38/"; window.hubConfig = 
2023-10-05T13:33:38.581Z
XLNet
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/xlnet
# XLNet

[![Models](https://img.shields.io/badge/All_model_pages-xlnet-blueviolet)](https://huggingface.co/models?filter=xlnet) [![Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/docs-demos/xlnet-base-cased)

## Overview

The XLNet model was proposed in [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le. XLNet is an extension of the Transformer-XL model pre-trained using an autoregressive method to learn bidirectional contexts by maximizing the expected likelihood over all permutations of the input sequence factorization order.

The abstract from the paper is the following:

_With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling. However, relying on corrupting the input with masks, BERT neglects dependency between the masked positions and suffers from a pretrain-finetune discrepancy. In light of these pros and cons, we propose XLNet, a generalized autoregressive pretraining method that (1) enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order and (2) overcomes the limitations of BERT thanks to its autoregressive formulation. Furthermore, XLNet integrates ideas from Transformer-XL, the state-of-the-art autoregressive model, into pretraining. Empirically, under comparable experiment settings, XLNet outperforms BERT on 20 tasks, often by a large margin, including question answering, natural language inference, sentiment analysis, and document ranking._

Tips:

- The specific attention pattern can be controlled at training and test time using the `perm_mask` input.
- Due to the difficulty of training a fully auto-regressive model over various factorization orders, XLNet is pretrained using only a subset of the output tokens as targets, which are selected with the `target_mapping` input.
- To use XLNet for sequential decoding (i.e. not in the fully bi-directional setting), use the `perm_mask` and `target_mapping` inputs to control the attention span and outputs (see the examples in _examples/pytorch/text-generation/run\_generation.py_ and the short sketch below).
- XLNet is one of the few models that has no sequence length limit.
- XLNet is not a traditional autoregressive model but uses a training strategy that builds on that idea. It permutes the tokens in the sentence, then allows the model to use the last n tokens to predict token n+1. Since this is all done with a mask, the sentence is actually fed to the model in the right order, but instead of masking the first n tokens for n+1, XLNet uses a mask that hides the previous tokens in some given permutation of 1, …, sequence length.
- XLNet also uses the same recurrence mechanism as Transformer-XL to build long-term dependencies.

This model was contributed by [thomwolf](https://huggingface.co/thomwolf). The original code can be found [here](https://github.com/zihangdai/xlnet/).
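To make the role of `perm_mask` and `target_mapping` more concrete, here is a minimal sketch that builds a causal-style permutation mask and asks the model to predict only the last position. The checkpoint name follows the examples further down this page; the particular masking scheme shown here is only illustrative, not the setup used to pretrain the released checkpoints.

```
import torch
from transformers import AutoTokenizer, XLNetLMHeadModel

tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetLMHeadModel.from_pretrained("xlnet-base-cased")

input_ids = torch.tensor(
    tokenizer.encode("Hello, my dog is very", add_special_tokens=False)
).unsqueeze(0)
seq_len = input_ids.shape[1]

# perm_mask[k, i, j] = 1 means token i may NOT attend to token j;
# an upper-triangular mask gives a left-to-right (causal-style) attention pattern.
perm_mask = torch.triu(torch.ones(1, seq_len, seq_len), diagonal=1)

# target_mapping selects which positions are predicted; here only the last one.
target_mapping = torch.zeros(1, 1, seq_len)
target_mapping[0, 0, -1] = 1.0

with torch.no_grad():
    outputs = model(input_ids, perm_mask=perm_mask, target_mapping=target_mapping)

# logits has shape (batch_size, num_predicted_tokens, vocab_size) -> here (1, 1, vocab_size)
next_token_logits = outputs.logits[0, 0]
print(tokenizer.decode([next_token_logits.argmax().item()]))
```

For the canonical usage of these inputs, see the XLNetLMHeadModel example later on this page.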
## Documentation resources

- [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)
- [Causal language modeling task guide](../tasks/language_modeling)
- [Multiple choice task guide](../tasks/multiple_choice)

## XLNetConfig

### class transformers.XLNetConfig

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/configuration_xlnet.py#L32)

( vocab\_size = 32000, d\_model = 1024, n\_layer = 24, n\_head = 16, d\_inner = 4096, ff\_activation = 'gelu', untie\_r = True, attn\_type = 'bi', initializer\_range = 0.02, layer\_norm\_eps = 1e-12, dropout = 0.1, mem\_len = 512, reuse\_len = None, use\_mems\_eval = True, use\_mems\_train = False, bi\_data = False, clamp\_len = -1, same\_length = False, summary\_type = 'last', summary\_use\_proj = True, summary\_activation = 'tanh', summary\_last\_dropout = 0.1, start\_n\_top = 5, end\_n\_top = 5, pad\_token\_id = 5, bos\_token\_id = 1, eos\_token\_id = 2, \*\*kwargs )

This is the configuration class to store the configuration of an [XLNetModel](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetModel) or a [TFXLNetModel](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.TFXLNetModel). It is used to instantiate an XLNet model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the [xlnet-large-cased](https://huggingface.co/xlnet-large-cased) architecture.

Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information.

Examples:

```
>>> from transformers import XLNetConfig, XLNetModel

>>> # Initializing an XLNet configuration
>>> configuration = XLNetConfig()

>>> # Initializing a model (with random weights) from the configuration
>>> model = XLNetModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```

## XLNetTokenizer

### class transformers.XLNetTokenizer

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/tokenization_xlnet.py#L53)

( vocab\_file, do\_lower\_case = False, remove\_space = True, keep\_accents = False, bos\_token = '<s>', eos\_token = '</s>', unk\_token = '<unk>', sep\_token = '<sep>', pad\_token = '<pad>', cls\_token = '<cls>', mask\_token = '<mask>', additional\_special\_tokens = \['<eop>', '<eod>'\], sp\_model\_kwargs: typing.Union\[typing.Dict\[str, typing.Any\], NoneType\] = None, \*\*kwargs )

Construct an XLNet tokenizer. Based on [SentencePiece](https://github.com/google/sentencepiece).

This tokenizer inherits from [PreTrainedTokenizer](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer) which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.

#### build\_inputs\_with\_special\_tokens

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/tokenization_xlnet.py#L298)

( token\_ids\_0: typing.List\[int\], token\_ids\_1: typing.Optional\[typing.List\[int\]\] = None ) → `List[int]`

Parameters

- **token\_ids\_0** (`List[int]`) — List of IDs to which the special tokens will be added.
- **token\_ids\_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs.
List of [input IDs](../glossary#input-ids) with the appropriate special tokens. Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. An XLNet sequence has the following format: - single sequence: `X <sep> <cls>` - pair of sequences: `A <sep> B <sep> <cls>` #### get\_special\_tokens\_mask [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/tokenization_xlnet.py#L323) ( token\_ids\_0: typing.List\[int\]token\_ids\_1: typing.Optional\[typing.List\[int\]\] = Nonealready\_has\_special\_tokens: bool = False ) → `List[int]` Parameters - **token\_ids\_0** (`List[int]`) — List of IDs. - **token\_ids\_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs. - **already\_has\_special\_tokens** (`bool`, _optional_, defaults to `False`) — Whether or not the token list is already formatted with special tokens for the model. A list of integers in the range \[0, 1\]: 1 for a special token, 0 for a sequence token. Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer `prepare_for_model` method. #### create\_token\_type\_ids\_from\_sequences [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/tokenization_xlnet.py#L351) ( token\_ids\_0: typing.List\[int\]token\_ids\_1: typing.Optional\[typing.List\[int\]\] = None ) → `List[int]` Parameters - **token\_ids\_0** (`List[int]`) — List of IDs. - **token\_ids\_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs. List of [token type IDs](../glossary#token-type-ids) according to the given sequence(s). Create a mask from the two sequences passed to be used in a sequence-pair classification task. An XLNet sequence pair mask has the following format: ``` 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 | first sequence | second sequence | ``` If `token_ids_1` is `None`, this method only returns the first portion of the mask (0s). #### save\_vocabulary [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/tokenization_xlnet.py#L381) ( save\_directory: strfilename\_prefix: typing.Optional\[str\] = None ) ## XLNetTokenizerFast ### class transformers.XLNetTokenizerFast [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/tokenization_xlnet_fast.py#L63) ( vocab\_file = Nonetokenizer\_file = Nonedo\_lower\_case = Falseremove\_space = Truekeep\_accents = Falsebos\_token = '<s>'eos\_token = '</s>'unk\_token = '<unk>'sep\_token = '<sep>'pad\_token = '<pad>'cls\_token = '<cls>'mask\_token = '<mask>'additional\_special\_tokens = \['<eop>', '<eod>'\]\*\*kwargs ) Construct a “fast” XLNet tokenizer (backed by HuggingFace’s _tokenizers_ library). Based on [Unigram](https://huggingface.co/docs/tokenizers/python/latest/components.html?highlight=unigram#models). This tokenizer inherits from [PreTrainedTokenizerFast](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast) which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. 
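As a quick illustration of the special-token layout described above, here is a minimal sketch that encodes a sequence pair with the fast tokenizer and inspects the resulting tokens and token type IDs; the `xlnet-base-cased` checkpoint is the standard one used elsewhere on this page.

```
from transformers import XLNetTokenizerFast

tokenizer = XLNetTokenizerFast.from_pretrained("xlnet-base-cased")

# Encode a pair of sequences: the expected layout is A <sep> B <sep> <cls>
encoded = tokenizer("How are you?", "I am fine.")

print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
print(encoded["token_type_ids"])  # distinguishes the first and second sequence
```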
#### build\_inputs\_with\_special\_tokens [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/tokenization_xlnet_fast.py#L177) ( token\_ids\_0: typing.List\[int\]token\_ids\_1: typing.Optional\[typing.List\[int\]\] = None ) → `List[int]` Parameters - **token\_ids\_0** (`List[int]`) — List of IDs to which the special tokens will be added. - **token\_ids\_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs. List of [input IDs](../glossary#input-ids) with the appropriate special tokens. Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. An XLNet sequence has the following format: - single sequence: `X <sep> <cls>` - pair of sequences: `A <sep> B <sep> <cls>` #### create\_token\_type\_ids\_from\_sequences [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/tokenization_xlnet_fast.py#L202) ( token\_ids\_0: typing.List\[int\]token\_ids\_1: typing.Optional\[typing.List\[int\]\] = None ) → `List[int]` Parameters - **token\_ids\_0** (`List[int]`) — List of IDs. - **token\_ids\_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs. List of [token type IDs](../glossary#token-type-ids) according to the given sequence(s). Create a mask from the two sequences passed to be used in a sequence-pair classification task. An XLNet sequence pair mask has the following format: ``` 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 | first sequence | second sequence | ``` If `token_ids_1` is `None`, this method only returns the first portion of the mask (0s). ## XLNet specific outputs ### class transformers.models.xlnet.modeling\_xlnet.XLNetModelOutput [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_xlnet.py#L579) ( last\_hidden\_state: FloatTensormems: typing.Optional\[typing.List\[torch.FloatTensor\]\] = Nonehidden\_states: typing.Optional\[typing.Tuple\[torch.FloatTensor\]\] = Noneattentions: typing.Optional\[typing.Tuple\[torch.FloatTensor\]\] = None ) Output type of [XLNetModel](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetModel). ### class transformers.models.xlnet.modeling\_xlnet.XLNetLMHeadModelOutput [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_xlnet.py#L613) ( loss: typing.Optional\[torch.FloatTensor\] = Nonelogits: FloatTensor = Nonemems: typing.Optional\[typing.List\[torch.FloatTensor\]\] = Nonehidden\_states: typing.Optional\[typing.Tuple\[torch.FloatTensor\]\] = Noneattentions: typing.Optional\[typing.Tuple\[torch.FloatTensor\]\] = None ) Output type of [XLNetLMHeadModel](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetLMHeadModel). ### class transformers.models.xlnet.modeling\_xlnet.XLNetForSequenceClassificationOutput [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_xlnet.py#L650) ( loss: typing.Optional\[torch.FloatTensor\] = Nonelogits: FloatTensor = Nonemems: typing.Optional\[typing.List\[torch.FloatTensor\]\] = Nonehidden\_states: typing.Optional\[typing.Tuple\[torch.FloatTensor\]\] = Noneattentions: typing.Optional\[typing.Tuple\[torch.FloatTensor\]\] = None ) Output type of [XLNetForSequenceClassification](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetForSequenceClassification). 
### class transformers.models.xlnet.modeling\_xlnet.XLNetForMultipleChoiceOutput [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_xlnet.py#L718) ( loss: typing.Optional\[torch.FloatTensor\] = Nonelogits: FloatTensor = Nonemems: typing.Optional\[typing.List\[torch.FloatTensor\]\] = Nonehidden\_states: typing.Optional\[typing.Tuple\[torch.FloatTensor\]\] = Noneattentions: typing.Optional\[typing.Tuple\[torch.FloatTensor\]\] = None ) Output type of [XLNetForMultipleChoice](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetForMultipleChoice). ### class transformers.models.xlnet.modeling\_xlnet.XLNetForTokenClassificationOutput [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_xlnet.py#L684) ( loss: typing.Optional\[torch.FloatTensor\] = Nonelogits: FloatTensor = Nonemems: typing.Optional\[typing.List\[torch.FloatTensor\]\] = Nonehidden\_states: typing.Optional\[typing.Tuple\[torch.FloatTensor\]\] = Noneattentions: typing.Optional\[typing.Tuple\[torch.FloatTensor\]\] = None ) Output type of `XLNetForTokenClassificationOutput`. ### class transformers.models.xlnet.modeling\_xlnet.XLNetForQuestionAnsweringSimpleOutput [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_xlnet.py#L754) ( loss: typing.Optional\[torch.FloatTensor\] = Nonestart\_logits: FloatTensor = Noneend\_logits: FloatTensor = Nonemems: typing.Optional\[typing.List\[torch.FloatTensor\]\] = Nonehidden\_states: typing.Optional\[typing.Tuple\[torch.FloatTensor\]\] = Noneattentions: typing.Optional\[typing.Tuple\[torch.FloatTensor\]\] = None ) Output type of [XLNetForQuestionAnsweringSimple](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetForQuestionAnsweringSimple). ### class transformers.models.xlnet.modeling\_xlnet.XLNetForQuestionAnsweringOutput [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_xlnet.py#L791) ( loss: typing.Optional\[torch.FloatTensor\] = Nonestart\_top\_log\_probs: typing.Optional\[torch.FloatTensor\] = Nonestart\_top\_index: typing.Optional\[torch.LongTensor\] = Noneend\_top\_log\_probs: typing.Optional\[torch.FloatTensor\] = Noneend\_top\_index: typing.Optional\[torch.LongTensor\] = Nonecls\_logits: typing.Optional\[torch.FloatTensor\] = Nonemems: typing.Optional\[typing.List\[torch.FloatTensor\]\] = Nonehidden\_states: typing.Optional\[typing.Tuple\[torch.FloatTensor\]\] = Noneattentions: typing.Optional\[typing.Tuple\[torch.FloatTensor\]\] = None ) Output type of [XLNetForQuestionAnswering](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetForQuestionAnswering). ### class transformers.models.xlnet.modeling\_tf\_xlnet.TFXLNetModelOutput [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_tf_xlnet.py#L802) ( last\_hidden\_state: tf.Tensor = Nonemems: List\[tf.Tensor\] | None = Nonehidden\_states: Tuple\[tf.Tensor\] | None = Noneattentions: Tuple\[tf.Tensor\] | None = None ) Output type of [TFXLNetModel](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.TFXLNetModel). 
### class transformers.models.xlnet.modeling\_tf\_xlnet.TFXLNetLMHeadModelOutput [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_tf_xlnet.py#L836) ( loss: tf.Tensor | None = Nonelogits: tf.Tensor = Nonemems: List\[tf.Tensor\] | None = Nonehidden\_states: Tuple\[tf.Tensor\] | None = Noneattentions: Tuple\[tf.Tensor\] | None = None ) Output type of [TFXLNetLMHeadModel](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.TFXLNetLMHeadModel). ### class transformers.models.xlnet.modeling\_tf\_xlnet.TFXLNetForSequenceClassificationOutput [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_tf_xlnet.py#L873) ( loss: tf.Tensor | None = Nonelogits: tf.Tensor = Nonemems: List\[tf.Tensor\] | None = Nonehidden\_states: Tuple\[tf.Tensor\] | None = Noneattentions: Tuple\[tf.Tensor\] | None = None ) Output type of [TFXLNetForSequenceClassification](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.TFXLNetForSequenceClassification). ### class transformers.models.xlnet.modeling\_tf\_xlnet.TFXLNetForMultipleChoiceOutput [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_tf_xlnet.py#L941) ( loss: tf.Tensor | None = Nonelogits: tf.Tensor = Nonemems: List\[tf.Tensor\] | None = Nonehidden\_states: Tuple\[tf.Tensor\] | None = Noneattentions: Tuple\[tf.Tensor\] | None = None ) Output type of [TFXLNetForMultipleChoice](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.TFXLNetForMultipleChoice). ### class transformers.models.xlnet.modeling\_tf\_xlnet.TFXLNetForTokenClassificationOutput [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_tf_xlnet.py#L907) ( loss: tf.Tensor | None = Nonelogits: tf.Tensor = Nonemems: List\[tf.Tensor\] | None = Nonehidden\_states: Tuple\[tf.Tensor\] | None = Noneattentions: Tuple\[tf.Tensor\] | None = None ) Output type of `TFXLNetForTokenClassificationOutput`. ### class transformers.models.xlnet.modeling\_tf\_xlnet.TFXLNetForQuestionAnsweringSimpleOutput [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_tf_xlnet.py#L977) ( loss: tf.Tensor | None = Nonestart\_logits: tf.Tensor = Noneend\_logits: tf.Tensor = Nonemems: List\[tf.Tensor\] | None = Nonehidden\_states: Tuple\[tf.Tensor\] | None = Noneattentions: Tuple\[tf.Tensor\] | None = None ) Output type of [TFXLNetForQuestionAnsweringSimple](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.TFXLNetForQuestionAnsweringSimple). ## XLNetModel ### class transformers.XLNetModel [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_xlnet.py#L931) ( config ) Parameters - **config** ([XLNetConfig](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. The bare XLNet Model transformer outputting raw hidden-states without any specific head on top. This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). 
Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_xlnet.py#L1059) ( input\_ids: typing.Optional\[torch.Tensor\] = Noneattention\_mask: typing.Optional\[torch.Tensor\] = Nonemems: typing.Optional\[torch.Tensor\] = Noneperm\_mask: typing.Optional\[torch.Tensor\] = Nonetarget\_mapping: typing.Optional\[torch.Tensor\] = Nonetoken\_type\_ids: typing.Optional\[torch.Tensor\] = Noneinput\_mask: typing.Optional\[torch.Tensor\] = Nonehead\_mask: typing.Optional\[torch.Tensor\] = Noneinputs\_embeds: typing.Optional\[torch.Tensor\] = Noneuse\_mems: typing.Optional\[bool\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = None\*\*kwargs ) → [transformers.models.xlnet.modeling\_xlnet.XLNetModelOutput](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_xlnet.XLNetModelOutput) or `tuple(torch.FloatTensor)` The [XLNetModel](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetModel) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: ``` >>> from transformers import AutoTokenizer, XLNetModel >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased") >>> model = XLNetModel.from_pretrained("xlnet-base-cased") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state ``` ## XLNetLMHeadModel ### class transformers.XLNetLMHeadModel [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_xlnet.py#L1294) ( config ) Parameters - **config** ([XLNetConfig](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. XLNet Model with a language modeling head on top (linear layer with weights tied to the input embeddings). This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
#### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_xlnet.py#L1356) ( input\_ids: typing.Optional\[torch.Tensor\] = Noneattention\_mask: typing.Optional\[torch.Tensor\] = Nonemems: typing.Optional\[torch.Tensor\] = Noneperm\_mask: typing.Optional\[torch.Tensor\] = Nonetarget\_mapping: typing.Optional\[torch.Tensor\] = Nonetoken\_type\_ids: typing.Optional\[torch.Tensor\] = Noneinput\_mask: typing.Optional\[torch.Tensor\] = Nonehead\_mask: typing.Optional\[torch.Tensor\] = Noneinputs\_embeds: typing.Optional\[torch.Tensor\] = Nonelabels: typing.Optional\[torch.Tensor\] = Noneuse\_mems: typing.Optional\[bool\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = None\*\*kwargs ) → [transformers.models.xlnet.modeling\_xlnet.XLNetLMHeadModelOutput](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_xlnet.XLNetLMHeadModelOutput) or `tuple(torch.FloatTensor)` The [XLNetLMHeadModel](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetLMHeadModel) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: ``` >>> from transformers import AutoTokenizer, XLNetLMHeadModel >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("xlnet-large-cased") >>> model = XLNetLMHeadModel.from_pretrained("xlnet-large-cased") >>> >>> input_ids = torch.tensor( ... tokenizer.encode("Hello, my dog is very <mask>", add_special_tokens=False) ... ).unsqueeze( ... 0 ... ) >>> perm_mask = torch.zeros((1, input_ids.shape[1], input_ids.shape[1]), dtype=torch.float) >>> perm_mask[:, :, -1] = 1.0 >>> target_mapping = torch.zeros( ... (1, 1, input_ids.shape[1]), dtype=torch.float ... ) >>> target_mapping[ ... 0, 0, -1 ... ] = 1.0 >>> outputs = model(input_ids, perm_mask=perm_mask, target_mapping=target_mapping) >>> next_token_logits = outputs[ ... 0 ... ] >>> >>> input_ids = torch.tensor( ... tokenizer.encode("Hello, my dog is very <mask>", add_special_tokens=False) ... ).unsqueeze( ... 0 ... ) >>> labels = torch.tensor(tokenizer.encode("cute", add_special_tokens=False)).unsqueeze(0) >>> assert labels.shape[0] == 1, "only one word will be predicted" >>> perm_mask = torch.zeros((1, input_ids.shape[1], input_ids.shape[1]), dtype=torch.float) >>> perm_mask[ ... :, :, -1 ... ] = 1.0 >>> target_mapping = torch.zeros( ... (1, 1, input_ids.shape[1]), dtype=torch.float ... ) >>> target_mapping[ ... 0, 0, -1 ... ] = 1.0 >>> outputs = model(input_ids, perm_mask=perm_mask, target_mapping=target_mapping, labels=labels) >>> loss = outputs.loss >>> next_token_logits = ( ... outputs.logits ... ) ``` ## XLNetForSequenceClassification ### class transformers.XLNetForSequenceClassification [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_xlnet.py#L1500) ( config ) Parameters - **config** ([XLNetConfig](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. 
Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. XLNet Model with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_xlnet.py#L1513) ( input\_ids: typing.Optional\[torch.Tensor\] = Noneattention\_mask: typing.Optional\[torch.Tensor\] = Nonemems: typing.Optional\[torch.Tensor\] = Noneperm\_mask: typing.Optional\[torch.Tensor\] = Nonetarget\_mapping: typing.Optional\[torch.Tensor\] = Nonetoken\_type\_ids: typing.Optional\[torch.Tensor\] = Noneinput\_mask: typing.Optional\[torch.Tensor\] = Nonehead\_mask: typing.Optional\[torch.Tensor\] = Noneinputs\_embeds: typing.Optional\[torch.Tensor\] = Nonelabels: typing.Optional\[torch.Tensor\] = Noneuse\_mems: typing.Optional\[bool\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = None\*\*kwargs ) → [transformers.models.xlnet.modeling\_xlnet.XLNetForSequenceClassificationOutput](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_xlnet.XLNetForSequenceClassificationOutput) or `tuple(torch.FloatTensor)` The [XLNetForSequenceClassification](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetForSequenceClassification) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example of single-label classification: ``` >>> import torch >>> from transformers import AutoTokenizer, XLNetForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased") >>> model = XLNetForSequenceClassification.from_pretrained("xlnet-base-cased") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_class_id = logits.argmax().item() >>> >>> num_labels = len(model.config.id2label) >>> model = XLNetForSequenceClassification.from_pretrained("xlnet-base-cased", num_labels=num_labels) >>> labels = torch.tensor([1]) >>> loss = model(**inputs, labels=labels).loss ``` Example of multi-label classification: ``` >>> import torch >>> from transformers import AutoTokenizer, XLNetForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased") >>> model = XLNetForSequenceClassification.from_pretrained("xlnet-base-cased", problem_type="multi_label_classification") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> with torch.no_grad(): ... 
logits = model(**inputs).logits >>> predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5] >>> >>> num_labels = len(model.config.id2label) >>> model = XLNetForSequenceClassification.from_pretrained( ... "xlnet-base-cased", num_labels=num_labels, problem_type="multi_label_classification" ... ) >>> labels = torch.sum( ... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1 ... ).to(torch.float) >>> loss = model(**inputs, labels=labels).loss ``` ## XLNetForMultipleChoice ### class transformers.XLNetForMultipleChoice [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_xlnet.py#L1696) ( config ) Parameters - **config** ([XLNetConfig](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. XLNet Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RACE/SWAG tasks. This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_xlnet.py#L1707) ( input\_ids: typing.Optional\[torch.Tensor\] = Nonetoken\_type\_ids: typing.Optional\[torch.Tensor\] = Noneinput\_mask: typing.Optional\[torch.Tensor\] = Noneattention\_mask: typing.Optional\[torch.Tensor\] = Nonemems: typing.Optional\[torch.Tensor\] = Noneperm\_mask: typing.Optional\[torch.Tensor\] = Nonetarget\_mapping: typing.Optional\[torch.Tensor\] = Nonehead\_mask: typing.Optional\[torch.Tensor\] = Noneinputs\_embeds: typing.Optional\[torch.Tensor\] = Nonelabels: typing.Optional\[torch.Tensor\] = Noneuse\_mems: typing.Optional\[bool\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = None\*\*kwargs ) → [transformers.models.xlnet.modeling\_xlnet.XLNetForMultipleChoiceOutput](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_xlnet.XLNetForMultipleChoiceOutput) or `tuple(torch.FloatTensor)` The [XLNetForMultipleChoice](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetForMultipleChoice) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example: ``` >>> from transformers import AutoTokenizer, XLNetForMultipleChoice >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased") >>> model = XLNetForMultipleChoice.from_pretrained("xlnet-base-cased") >>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced." >>> choice0 = "It is eaten with a fork and a knife." >>> choice1 = "It is eaten while held in the hand." >>> labels = torch.tensor(0).unsqueeze(0) >>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True) >>> outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) >>> >>> loss = outputs.loss >>> logits = outputs.logits ``` ## XLNetForTokenClassification ### class transformers.XLNetForTokenClassification [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_xlnet.py#L1609) ( config ) Parameters - **config** ([XLNetConfig](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. XLNet Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_xlnet.py#L1620) ( input\_ids: typing.Optional\[torch.Tensor\] = Noneattention\_mask: typing.Optional\[torch.Tensor\] = Nonemems: typing.Optional\[torch.Tensor\] = Noneperm\_mask: typing.Optional\[torch.Tensor\] = Nonetarget\_mapping: typing.Optional\[torch.Tensor\] = Nonetoken\_type\_ids: typing.Optional\[torch.Tensor\] = Noneinput\_mask: typing.Optional\[torch.Tensor\] = Nonehead\_mask: typing.Optional\[torch.Tensor\] = Noneinputs\_embeds: typing.Optional\[torch.Tensor\] = Nonelabels: typing.Optional\[torch.Tensor\] = Noneuse\_mems: typing.Optional\[bool\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = None\*\*kwargs ) → [transformers.models.xlnet.modeling\_xlnet.XLNetForTokenClassificationOutput](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_xlnet.XLNetForTokenClassificationOutput) or `tuple(torch.FloatTensor)` The [XLNetForTokenClassification](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetForTokenClassification) forward method, overrides the `__call__` special method. 
Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: ``` >>> from transformers import AutoTokenizer, XLNetForTokenClassification >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased") >>> model = XLNetForTokenClassification.from_pretrained("xlnet-base-cased") >>> inputs = tokenizer( ... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt" ... ) >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_token_class_ids = logits.argmax(-1) >>> >>> >>> >>> predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]] >>> labels = predicted_token_class_ids >>> loss = model(**inputs, labels=labels).loss ``` ## XLNetForQuestionAnsweringSimple ### class transformers.XLNetForQuestionAnsweringSimple [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_xlnet.py#L1799) ( config ) Parameters - **config** ([XLNetConfig](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. XLNet Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute `span start logits` and `span end logits`). This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
#### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_xlnet.py#L1810) ( input\_ids: typing.Optional\[torch.Tensor\] = Noneattention\_mask: typing.Optional\[torch.Tensor\] = Nonemems: typing.Optional\[torch.Tensor\] = Noneperm\_mask: typing.Optional\[torch.Tensor\] = Nonetarget\_mapping: typing.Optional\[torch.Tensor\] = Nonetoken\_type\_ids: typing.Optional\[torch.Tensor\] = Noneinput\_mask: typing.Optional\[torch.Tensor\] = Nonehead\_mask: typing.Optional\[torch.Tensor\] = Noneinputs\_embeds: typing.Optional\[torch.Tensor\] = Nonestart\_positions: typing.Optional\[torch.Tensor\] = Noneend\_positions: typing.Optional\[torch.Tensor\] = Noneuse\_mems: typing.Optional\[bool\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = None\*\*kwargs ) → [transformers.models.xlnet.modeling\_xlnet.XLNetForQuestionAnsweringSimpleOutput](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringSimpleOutput) or `tuple(torch.FloatTensor)` The [XLNetForQuestionAnsweringSimple](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetForQuestionAnsweringSimple) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: ``` >>> from transformers import AutoTokenizer, XLNetForQuestionAnsweringSimple >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased") >>> model = XLNetForQuestionAnsweringSimple.from_pretrained("xlnet-base-cased") >>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" >>> inputs = tokenizer(question, text, return_tensors="pt") >>> with torch.no_grad(): ... outputs = model(**inputs) >>> answer_start_index = outputs.start_logits.argmax() >>> answer_end_index = outputs.end_logits.argmax() >>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1] >>> >>> target_start_index = torch.tensor([14]) >>> target_end_index = torch.tensor([15]) >>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index) >>> loss = outputs.loss ``` ## XLNetForQuestionAnswering ### class transformers.XLNetForQuestionAnswering [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_xlnet.py#L1909) ( config ) Parameters - **config** ([XLNetConfig](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. XLNet Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute `span start logits` and `span end logits`). This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). 
Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_xlnet.py#L1923) ( input\_ids: typing.Optional\[torch.Tensor\] = Noneattention\_mask: typing.Optional\[torch.Tensor\] = Nonemems: typing.Optional\[torch.Tensor\] = Noneperm\_mask: typing.Optional\[torch.Tensor\] = Nonetarget\_mapping: typing.Optional\[torch.Tensor\] = Nonetoken\_type\_ids: typing.Optional\[torch.Tensor\] = Noneinput\_mask: typing.Optional\[torch.Tensor\] = Nonehead\_mask: typing.Optional\[torch.Tensor\] = Noneinputs\_embeds: typing.Optional\[torch.Tensor\] = Nonestart\_positions: typing.Optional\[torch.Tensor\] = Noneend\_positions: typing.Optional\[torch.Tensor\] = Noneis\_impossible: typing.Optional\[torch.Tensor\] = Nonecls\_index: typing.Optional\[torch.Tensor\] = Nonep\_mask: typing.Optional\[torch.Tensor\] = Noneuse\_mems: typing.Optional\[bool\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = None\*\*kwargs ) → [transformers.models.xlnet.modeling\_xlnet.XLNetForQuestionAnsweringOutput](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringOutput) or `tuple(torch.FloatTensor)` The [XLNetForQuestionAnswering](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetForQuestionAnswering) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: ``` >>> from transformers import AutoTokenizer, XLNetForQuestionAnswering >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased") >>> model = XLNetForQuestionAnswering.from_pretrained("xlnet-base-cased") >>> input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze( ... 0 ... ) >>> start_positions = torch.tensor([1]) >>> end_positions = torch.tensor([3]) >>> outputs = model(input_ids, start_positions=start_positions, end_positions=end_positions) >>> loss = outputs.loss ``` ## TFXLNetModel ### class transformers.TFXLNetModel [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_tf_xlnet.py#L1132) ( \*args\*\*kwargs ) Parameters - **config** ([XLNetConfig](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. The bare XLNet Model transformer outputting raw hidden-states without any specific head on top. 
This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in `transformers` accept two formats as input: - having all inputs as keyword arguments (like PyTorch models), or - having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like `model.fit()` things should “just work” for you - just pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: - a single Tensor with `input_ids` only and nothing else: `model(input_ids)` - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])` - a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_ids": input_ids, "token_type_ids": token_type_ids})` Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! #### call [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_tf_xlnet.py#L1137) ( input\_ids: TFModelInputType | None = Noneattention\_mask: np.ndarray | tf.Tensor | None = Nonemems: np.ndarray | tf.Tensor | None = Noneperm\_mask: np.ndarray | tf.Tensor | None = Nonetarget\_mapping: np.ndarray | tf.Tensor | None = Nonetoken\_type\_ids: np.ndarray | tf.Tensor | None = Noneinput\_mask: np.ndarray | tf.Tensor | None = Nonehead\_mask: np.ndarray | tf.Tensor | None = Noneinputs\_embeds: np.ndarray | tf.Tensor | None = Noneuse\_mems: Optional\[bool\] = Noneoutput\_attentions: Optional\[bool\] = Noneoutput\_hidden\_states: Optional\[bool\] = Nonereturn\_dict: Optional\[bool\] = Nonetraining: bool = False ) → [transformers.models.xlnet.modeling\_tf\_xlnet.TFXLNetModelOutput](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_tf_xlnet.TFXLNetModelOutput) or `tuple(tf.Tensor)` The [TFXLNetModel](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.TFXLNetModel) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example: ``` >>> from transformers import AutoTokenizer, TFXLNetModel >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased") >>> model = TFXLNetModel.from_pretrained("xlnet-base-cased") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf") >>> outputs = model(inputs) >>> last_hidden_states = outputs.last_hidden_state ``` ## TFXLNetLMHeadModel ### class transformers.TFXLNetLMHeadModel [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_tf_xlnet.py#L1187) ( \*args\*\*kwargs ) Parameters - **config** ([XLNetConfig](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. XLNet Model with a language modeling head on top (linear layer with weights tied to the input embeddings). This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in `transformers` accept two formats as input: - having all inputs as keyword arguments (like PyTorch models), or - having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like `model.fit()` things should “just work” for you - just pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: - a single Tensor with `input_ids` only and nothing else: `model(input_ids)` - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])` - a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_ids": input_ids, "token_type_ids": token_type_ids})` Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! 
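As a minimal sketch of the input formats described above (keyword arguments, or a single list/dict packed into the first positional argument), assuming the `xlnet-base-cased` checkpoint and the tensors produced by its tokenizer:

```
>>> import tensorflow as tf
>>> from transformers import AutoTokenizer, TFXLNetLMHeadModel

>>> tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
>>> model = TFXLNetLMHeadModel.from_pretrained("xlnet-base-cased")

>>> enc = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> input_ids, attention_mask = enc["input_ids"], enc["attention_mask"]

>>> # Format 1: keyword arguments, as with the PyTorch models.
>>> out_kwargs = model(input_ids=input_ids, attention_mask=attention_mask)

>>> # Format 2a: a list with the tensors in the order given in the docstring.
>>> out_list = model([input_ids, attention_mask])

>>> # Format 2b: a dictionary keyed by the input names given in the docstring.
>>> out_dict = model({"input_ids": input_ids, "attention_mask": attention_mask})
```

All three calls produce an equivalent `TFXLNetLMHeadModelOutput`; which format to use depends only on whether the model is called directly or driven through Keras methods.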
#### call [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_tf_xlnet.py#L1241) ( input\_ids: TFModelInputType | None = Noneattention\_mask: np.ndarray | tf.Tensor | None = Nonemems: np.ndarray | tf.Tensor | None = Noneperm\_mask: np.ndarray | tf.Tensor | None = Nonetarget\_mapping: np.ndarray | tf.Tensor | None = Nonetoken\_type\_ids: np.ndarray | tf.Tensor | None = Noneinput\_mask: np.ndarray | tf.Tensor | None = Nonehead\_mask: np.ndarray | tf.Tensor | None = Noneinputs\_embeds: np.ndarray | tf.Tensor | None = Noneuse\_mems: Optional\[bool\] = Noneoutput\_attentions: Optional\[bool\] = Noneoutput\_hidden\_states: Optional\[bool\] = Nonereturn\_dict: Optional\[bool\] = Nonelabels: np.ndarray | tf.Tensor | None = Nonetraining: bool = False ) → [transformers.models.xlnet.modeling\_tf\_xlnet.TFXLNetLMHeadModelOutput](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_tf_xlnet.TFXLNetLMHeadModelOutput) or `tuple(tf.Tensor)` The [TFXLNetLMHeadModel](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.TFXLNetLMHeadModel) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: ``` >>> import tensorflow as tf >>> import numpy as np >>> from transformers import AutoTokenizer, TFXLNetLMHeadModel >>> tokenizer = AutoTokenizer.from_pretrained("xlnet-large-cased") >>> model = TFXLNetLMHeadModel.from_pretrained("xlnet-large-cased") >>> >>> input_ids = tf.constant(tokenizer.encode("Hello, my dog is very <mask>", add_special_tokens=True))[ ... None, : ... ] >>> perm_mask = np.zeros((1, input_ids.shape[1], input_ids.shape[1])) >>> perm_mask[:, :, -1] = 1.0 >>> target_mapping = np.zeros( ... (1, 1, input_ids.shape[1]) ... ) >>> target_mapping[ ... 0, 0, -1 ... ] = 1.0 >>> outputs = model( ... input_ids, ... perm_mask=tf.constant(perm_mask, dtype=tf.float32), ... target_mapping=tf.constant(target_mapping, dtype=tf.float32), ... ) >>> next_token_logits = outputs[ ... 0 ... ] ``` ## TFXLNetForSequenceClassification ### class transformers.TFXLNetForSequenceClassification [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_tf_xlnet.py#L1347) ( \*args\*\*kwargs ) Parameters - **config** ([XLNetConfig](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. XLNet Model with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. 
Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in `transformers` accept two formats as input: - having all inputs as keyword arguments (like PyTorch models), or - having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like `model.fit()` things should “just work” for you - just pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: - a single Tensor with `input_ids` only and nothing else: `model(input_ids)` - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])` - a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_ids": input_ids, "token_type_ids": token_type_ids})` Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! #### call [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_tf_xlnet.py#L1360) ( input\_ids: TFModelInputType | None = Noneattention\_mask: np.ndarray | tf.Tensor | None = Nonemems: np.ndarray | tf.Tensor | None = Noneperm\_mask: np.ndarray | tf.Tensor | None = Nonetarget\_mapping: np.ndarray | tf.Tensor | None = Nonetoken\_type\_ids: np.ndarray | tf.Tensor | None = Noneinput\_mask: np.ndarray | tf.Tensor | None = Nonehead\_mask: np.ndarray | tf.Tensor | None = Noneinputs\_embeds: np.ndarray | tf.Tensor | None = Noneuse\_mems: Optional\[bool\] = Noneoutput\_attentions: Optional\[bool\] = Noneoutput\_hidden\_states: Optional\[bool\] = Nonereturn\_dict: Optional\[bool\] = Nonelabels: np.ndarray | tf.Tensor | None = Nonetraining: bool = False ) → [transformers.models.xlnet.modeling\_tf\_xlnet.TFXLNetForSequenceClassificationOutput](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForSequenceClassificationOutput) or `tuple(tf.Tensor)` The [TFXLNetForSequenceClassification](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.TFXLNetForSequenceClassification) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example:

```
>>> from transformers import AutoTokenizer, TFXLNetForSequenceClassification
>>> import tensorflow as tf

>>> tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
>>> model = TFXLNetForSequenceClassification.from_pretrained("xlnet-base-cased")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> logits = model(**inputs).logits
>>> predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
```

```
>>>
>>> num_labels = len(model.config.id2label)
>>> model = TFXLNetForSequenceClassification.from_pretrained("xlnet-base-cased", num_labels=num_labels)

>>> labels = tf.constant(1)
>>> loss = model(**inputs, labels=labels).loss
```

## TFXLNetForMultipleChoice

### class transformers.TFXLNetForMultipleChoice

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_tf_xlnet.py#L1434) ( \*args\*\*kwargs )

Parameters

- **config** ([XLNetConfig](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

XLNet Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks.

This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)

This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

TensorFlow models and layers in `transformers` accept two formats as input:

- having all inputs as keyword arguments (like PyTorch models), or
- having all inputs as a list, tuple or dict in the first positional argument.

The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like `model.fit()` things should “just work” for you - just pass your inputs and labels in any format that `model.fit()` supports!
If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: - a single Tensor with `input_ids` only and nothing else: `model(input_ids)` - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])` - a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_ids": input_ids, "token_type_ids": token_type_ids})` Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! #### call [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_tf_xlnet.py#L1446) ( input\_ids: TFModelInputType | None = Nonetoken\_type\_ids: np.ndarray | tf.Tensor | None = Noneinput\_mask: np.ndarray | tf.Tensor | None = Noneattention\_mask: np.ndarray | tf.Tensor | None = Nonemems: np.ndarray | tf.Tensor | None = Noneperm\_mask: np.ndarray | tf.Tensor | None = Nonetarget\_mapping: np.ndarray | tf.Tensor | None = Nonehead\_mask: np.ndarray | tf.Tensor | None = Noneinputs\_embeds: np.ndarray | tf.Tensor | None = Noneuse\_mems: Optional\[bool\] = Noneoutput\_attentions: Optional\[bool\] = Noneoutput\_hidden\_states: Optional\[bool\] = Nonereturn\_dict: Optional\[bool\] = Nonelabels: np.ndarray | tf.Tensor | None = Nonetraining: bool = False ) → [transformers.models.xlnet.modeling\_tf\_xlnet.TFXLNetForMultipleChoiceOutput](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForMultipleChoiceOutput) or `tuple(tf.Tensor)` The [TFXLNetForMultipleChoice](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.TFXLNetForMultipleChoice) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: ``` >>> from transformers import AutoTokenizer, TFXLNetForMultipleChoice >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased") >>> model = TFXLNetForMultipleChoice.from_pretrained("xlnet-base-cased") >>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced." >>> choice0 = "It is eaten with a fork and a knife." >>> choice1 = "It is eaten while held in the hand." >>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="tf", padding=True) >>> inputs = {k: tf.expand_dims(v, 0) for k, v in encoding.items()} >>> outputs = model(inputs) >>> >>> logits = outputs.logits ``` ## TFXLNetForTokenClassification ### class transformers.TFXLNetForTokenClassification [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_tf_xlnet.py#L1535) ( \*args\*\*kwargs ) Parameters - **config** ([XLNetConfig](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetConfig)) — Model configuration class with all the parameters of the model. 
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. XLNet Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in `transformers` accept two formats as input: - having all inputs as keyword arguments (like PyTorch models), or - having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like `model.fit()` things should “just work” for you - just pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: - a single Tensor with `input_ids` only and nothing else: `model(input_ids)` - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])` - a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_ids": input_ids, "token_type_ids": token_type_ids})` Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! 
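The `model.fit()` support mentioned above can be exercised with very little code, since recent `transformers` TF models can compute their task loss internally when labels are supplied, so `compile()` can be called without an explicit loss. A minimal sketch, assuming a hypothetical toy batch with all-zero token labels (a real setup would use proper label ids and typically mask padded positions with `-100`):

```
>>> import tensorflow as tf
>>> from transformers import AutoTokenizer, TFXLNetForTokenClassification

>>> tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
>>> model = TFXLNetForTokenClassification.from_pretrained("xlnet-base-cased", num_labels=2)

>>> # Hypothetical toy batch: two sentences with dummy all-zero per-token labels.
>>> sentences = ["HuggingFace is based in New York", "Paris is in France"]
>>> enc = tokenizer(sentences, padding=True, return_tensors="tf")
>>> labels = tf.zeros_like(enc["input_ids"])

>>> # No loss is passed to compile(): the model's internal token-classification loss is used.
>>> model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5))
>>> model.fit(x=dict(enc), y=labels, epochs=1, batch_size=2)
```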
#### call [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_tf_xlnet.py#L1545) ( input\_ids: TFModelInputType | None = Noneattention\_mask: np.ndarray | tf.Tensor | None = Nonemems: np.ndarray | tf.Tensor | None = Noneperm\_mask: np.ndarray | tf.Tensor | None = Nonetarget\_mapping: np.ndarray | tf.Tensor | None = Nonetoken\_type\_ids: np.ndarray | tf.Tensor | None = Noneinput\_mask: np.ndarray | tf.Tensor | None = Nonehead\_mask: np.ndarray | tf.Tensor | None = Noneinputs\_embeds: np.ndarray | tf.Tensor | None = Noneuse\_mems: Optional\[bool\] = Noneoutput\_attentions: Optional\[bool\] = Noneoutput\_hidden\_states: Optional\[bool\] = Nonereturn\_dict: Optional\[bool\] = Nonelabels: np.ndarray | tf.Tensor | None = Nonetraining: bool = False ) → [transformers.models.xlnet.modeling\_tf\_xlnet.TFXLNetForTokenClassificationOutput](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForTokenClassificationOutput) or `tuple(tf.Tensor)` The [TFXLNetForTokenClassification](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.TFXLNetForTokenClassification) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: ``` >>> from transformers import AutoTokenizer, TFXLNetForTokenClassification >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased") >>> model = TFXLNetForTokenClassification.from_pretrained("xlnet-base-cased") >>> inputs = tokenizer( ... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="tf" ... ) >>> logits = model(**inputs).logits >>> predicted_token_class_ids = tf.math.argmax(logits, axis=-1) >>> >>> >>> >>> predicted_tokens_classes = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()] ``` ``` >>> labels = predicted_token_class_ids >>> loss = tf.math.reduce_mean(model(**inputs, labels=labels).loss) ``` ## TFXLNetForQuestionAnsweringSimple ### class transformers.TFXLNetForQuestionAnsweringSimple [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_tf_xlnet.py#L1615) ( \*args\*\*kwargs ) Parameters - **config** ([XLNetConfig](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. XLNet Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute `span start logits` and `span end logits`). This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. 
Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in `transformers` accept two formats as input: - having all inputs as keyword arguments (like PyTorch models), or - having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like `model.fit()` things should “just work” for you - just pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: - a single Tensor with `input_ids` only and nothing else: `model(input_ids)` - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])` - a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_ids": input_ids, "token_type_ids": token_type_ids})` Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! #### call [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_tf_xlnet.py#L1623) ( input\_ids: TFModelInputType | None = Noneattention\_mask: np.ndarray | tf.Tensor | None = Nonemems: np.ndarray | tf.Tensor | None = Noneperm\_mask: np.ndarray | tf.Tensor | None = Nonetarget\_mapping: np.ndarray | tf.Tensor | None = Nonetoken\_type\_ids: np.ndarray | tf.Tensor | None = Noneinput\_mask: np.ndarray | tf.Tensor | None = Nonehead\_mask: np.ndarray | tf.Tensor | None = Noneinputs\_embeds: np.ndarray | tf.Tensor | None = Noneuse\_mems: Optional\[bool\] = Noneoutput\_attentions: Optional\[bool\] = Noneoutput\_hidden\_states: Optional\[bool\] = Nonereturn\_dict: Optional\[bool\] = Nonestart\_positions: np.ndarray | tf.Tensor | None = Noneend\_positions: np.ndarray | tf.Tensor | None = Nonetraining: bool = False ) → [transformers.models.xlnet.modeling\_tf\_xlnet.TFXLNetForQuestionAnsweringSimpleOutput](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForQuestionAnsweringSimpleOutput) or `tuple(tf.Tensor)` The [TFXLNetForQuestionAnsweringSimple](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.TFXLNetForQuestionAnsweringSimple) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example: ``` >>> from transformers import AutoTokenizer, TFXLNetForQuestionAnsweringSimple >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased") >>> model = TFXLNetForQuestionAnsweringSimple.from_pretrained("xlnet-base-cased") >>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" >>> inputs = tokenizer(question, text, return_tensors="tf") >>> outputs = model(**inputs) >>> answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0]) >>> answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0]) >>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1] ``` ``` >>> >>> target_start_index = tf.constant([14]) >>> target_end_index = tf.constant([15]) >>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index) >>> loss = tf.math.reduce_mean(outputs.loss) ```
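Beyond the per-task examples above, every XLNet output class documented earlier also carries Transformer-XL style memories in its `mems` field, and the models accept them back through the `mems`/`use_mems` arguments. A minimal PyTorch sketch of reusing memories across two segments, assuming a checkpoint configured with a non-zero `mem_len` (as `xlnet-base-cased` is in recent releases):

```
>>> import torch
>>> from transformers import AutoTokenizer, XLNetModel

>>> tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
>>> model = XLNetModel.from_pretrained("xlnet-base-cased").eval()

>>> first = tokenizer("XLNet caches hidden states from previous segments.", return_tensors="pt")
>>> second = tokenizer("They are reused as extra context for the next segment.", return_tensors="pt")

>>> with torch.no_grad():
...     # First segment: ask the model to return its memories.
...     out1 = model(**first, use_mems=True)
...     # Second segment: feed the cached memories back in as read-only context.
...     out2 = model(second["input_ids"], mems=out1.mems, use_mems=True)

>>> len(out1.mems), out1.mems[0].shape  # one memory tensor per layer
```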
15C16.6421 15 20 14.1046 20 13C20 12.8273 19.918 12.1598 19.7637 12C18.9311 12.8626 15.9947 13.5 12.5 13.5C9.0053 13.5 6.06886 12.8626 5.23628 12Z" fill="currentColor"></path></svg> Datasets</a></li><li><a class="group flex items-center px-2 py-0.5 dark:hover:text-gray-400 hover:text-blue-700" href="/spaces"><svg class="mr-1.5 text-gray-400 group-hover:text-blue-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" viewBox="0 0 25 25"><path opacity=".5" d="M6.016 14.674v4.31h4.31v-4.31h-4.31ZM14.674 14.674v4.31h4.31v-4.31h-4.31ZM6.016 6.016v4.31h4.31v-4.31h-4.31Z" fill="currentColor"></path><path opacity=".75" fill-rule="evenodd" clip-rule="evenodd" d="M3 4.914C3 3.857 3.857 3 4.914 3h6.514c.884 0 1.628.6 1.848 1.414a5.171 5.171 0 0 1 7.31 7.31c.815.22 1.414.964 1.414 1.848v6.514A1.914 1.914 0 0 1 20.086 22H4.914A1.914 1.914 0 0 1 3 20.086V4.914Zm3.016 1.102v4.31h4.31v-4.31h-4.31Zm0 12.968v-4.31h4.31v4.31h-4.31Zm8.658 0v-4.31h4.31v4.31h-4.31Zm0-10.813a2.155 2.155 0 1 1 4.31 0 2.155 2.155 0 0 1-4.31 0Z" fill="currentColor"></path><path opacity=".25" d="M16.829 6.016a2.155 2.155 0 1 0 0 4.31 2.155 2.155 0 0 0 0-4.31Z" fill="currentColor"></path></svg> Spaces</a></li><li><a class="group flex items-center px-2 py-0.5 dark:hover:text-gray-400 hover:text-yellow-700" href="/docs"><svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" class="mr-1.5 text-gray-400 group-hover:text-yellow-500" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path opacity="0.5" d="M20.9022 5.10334L10.8012 10.8791L7.76318 9.11193C8.07741 8.56791 8.5256 8.11332 9.06512 7.7914L15.9336 3.73907C17.0868 3.08811 18.5002 3.26422 19.6534 3.91519L19.3859 3.73911C19.9253 4.06087 20.5879 4.56025 20.9022 5.10334Z" fill="currentColor"></path><path d="M10.7999 10.8792V28.5483C10.2136 28.5475 9.63494 28.4139 9.10745 28.1578C8.5429 27.8312 8.074 27.3621 7.74761 26.7975C7.42122 26.2327 7.24878 25.5923 7.24756 24.9402V10.9908C7.25062 10.3319 7.42358 9.68487 7.74973 9.1123L10.7999 10.8792Z" fill="currentColor" fill-opacity="0.75"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M21.3368 10.8499V6.918C21.3331 6.25959 21.16 5.61234 20.8346 5.03949L10.7971 10.8727L10.8046 10.874L21.3368 10.8499Z" fill="currentColor"></path><path opacity="0.5" d="M21.7937 10.8488L10.7825 10.8741V28.5486L21.7937 28.5234C23.3344 28.5234 24.5835 27.2743 24.5835 25.7335V13.6387C24.5835 12.0979 23.4365 11.1233 21.7937 10.8488Z" fill="currentColor"></path></svg> Docs</a></li> <li><div class="relative "><button class="px-2 py-0.5 group hover:text-green-700 dark:hover:text-gray-400 flex items-center " type="button"><svg class="mr-1.5 text-gray-400 group-hover:text-green-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-tertiary" d="M19 6H5a3 3 0 0 0-3 3v2.72L8.837 14h6.326L22 11.72V9a3 3 0 0 0-3-3z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M10 6V5h4v1h2V5a2.002 2.002 0 0 0-2-2h-4a2.002 2.002 0 0 0-2 2v1h2zm-1.163 8L2 11.72V18a3.003 3.003 0 0 0 3 3h14a3.003 3.003 0 0 0 3-3v-6.28L15.163 14H8.837z" fill="currentColor"></path></svg> Solutions </button> </div></li> <li><a class="group flex items-center px-2 py-0.5 hover:text-gray-500 dark:hover:text-gray-400" 
href="/pricing">Pricing</a></li> <li><div class="relative group"><button class="px-2 py-0.5 hover:text-gray-500 dark:hover:text-gray-600 flex items-center " type="button"><svg class="mr-1.5 text-gray-500 w-5 group-hover:text-gray-400 dark:text-gray-300 dark:group-hover:text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" viewBox="0 0 32 18" preserveAspectRatio="xMidYMid meet"><path fill-rule="evenodd" clip-rule="evenodd" d="M14.4504 3.30221C14.4504 2.836 14.8284 2.45807 15.2946 2.45807H28.4933C28.9595 2.45807 29.3374 2.836 29.3374 3.30221C29.3374 3.76842 28.9595 4.14635 28.4933 4.14635H15.2946C14.8284 4.14635 14.4504 3.76842 14.4504 3.30221Z" fill="currentColor"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M14.4504 9.00002C14.4504 8.53382 14.8284 8.15588 15.2946 8.15588H28.4933C28.9595 8.15588 29.3374 8.53382 29.3374 9.00002C29.3374 9.46623 28.9595 9.84417 28.4933 9.84417H15.2946C14.8284 9.84417 14.4504 9.46623 14.4504 9.00002Z" fill="currentColor"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M14.4504 14.6978C14.4504 14.2316 14.8284 13.8537 15.2946 13.8537H28.4933C28.9595 13.8537 29.3374 14.2316 29.3374 14.6978C29.3374 15.164 28.9595 15.542 28.4933 15.542H15.2946C14.8284 15.542 14.4504 15.164 14.4504 14.6978Z" fill="currentColor"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M1.94549 6.87377C2.27514 6.54411 2.80962 6.54411 3.13928 6.87377L6.23458 9.96907L9.32988 6.87377C9.65954 6.54411 10.194 6.54411 10.5237 6.87377C10.8533 7.20343 10.8533 7.73791 10.5237 8.06756L6.23458 12.3567L1.94549 8.06756C1.61583 7.73791 1.61583 7.20343 1.94549 6.87377Z" fill="currentColor"></path></svg> </button> </div></li> <li><hr class="h-5 w-0.5 border-none bg-gray-100 dark:bg-gray-800"></li> <li><a class="block cursor-pointer px-2 py-0.5 hover:text-gray-500 dark:hover:text-gray-400" href="/login">Log In</a></li> <li><a class="rounded-full border border-transparent bg-gray-900 px-3 py-1 leading-none text-white hover:border-black hover:bg-white hover:text-black" href="/join">Sign Up</a></li></ul></nav></div></header></div> <div class="SVELTE_HYDRATER contents" data-props="{}" data-target="GoogleAnalyticsTracker"></div> <div class="SVELTE_HYDRATER contents" data-props="{}" data-target="SSOBanner"></div> <main class="flex flex-1 flex-col"><div class="relative lg:flex"><div class="sticky top-0 z-20 self-start"><div class="SVELTE_HYDRATER contents" data-props="{&quot;chapters&quot;:[{&quot;title&quot;:&quot;Get started&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;🤗 Transformers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;index&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/index&quot;},{&quot;title&quot;:&quot;Quick tour&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;quicktour&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/quicktour&quot;},{&quot;title&quot;:&quot;Installation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;installation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/installation&quot;}]},{&quot;title&quot;:&quot;Tutorials&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Run inference with pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pipeline_tutorial&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pipeline_tutorial&quot;},{&quot;title&quot;:&quot;Write portable code with 
AutoClass&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;autoclass_tutorial&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/autoclass_tutorial&quot;},{&quot;title&quot;:&quot;Preprocess data&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;preprocessing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/preprocessing&quot;},{&quot;title&quot;:&quot;Fine-tune a pretrained model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;training&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/training&quot;},{&quot;title&quot;:&quot;Train with a script&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;run_scripts&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/run_scripts&quot;},{&quot;title&quot;:&quot;Set up distributed training with 🤗 Accelerate&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;accelerate&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/accelerate&quot;},{&quot;title&quot;:&quot;Load and train adapters with 🤗 PEFT&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;peft&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/peft&quot;},{&quot;title&quot;:&quot;Share your model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_sharing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_sharing&quot;},{&quot;title&quot;:&quot;Agents&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers_agents&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/transformers_agents&quot;},{&quot;title&quot;:&quot;Generation with LLMs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;llm_tutorial&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/llm_tutorial&quot;}]},{&quot;title&quot;:&quot;Task Guides&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Natural Language Processing&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Text classification&quot;,&quot;id&quot;:&quot;tasks/sequence_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/sequence_classification&quot;},{&quot;title&quot;:&quot;Token classification&quot;,&quot;id&quot;:&quot;tasks/token_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/token_classification&quot;},{&quot;title&quot;:&quot;Question answering&quot;,&quot;id&quot;:&quot;tasks/question_answering&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/question_answering&quot;},{&quot;title&quot;:&quot;Causal language modeling&quot;,&quot;id&quot;:&quot;tasks/language_modeling&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/language_modeling&quot;},{&quot;title&quot;:&quot;Masked language modeling&quot;,&quot;id&quot;:&quot;tasks/masked_language_modeling&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/masked_language_modeling&quot;},{&quot;title&quot;:&quot;Translation&quot;,&quot;id&quot;:&quot;tasks/translation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/translation&quot;},{&quot;title&quot;:&quot;Summarization&quot;,&quot;id&quot;:&quot;tasks/summarization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/summarization&quot;},{&quot;title&quot;:&quot;Multiple choice&quot;,&quot;id&quot;:&quot;tasks/multiple_choice&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/multiple_choice&quot;}]},{&quot;title&quot;:&quot;Audio&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio 
classification&quot;,&quot;id&quot;:&quot;tasks/audio_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/audio_classification&quot;},{&quot;title&quot;:&quot;Automatic speech recognition&quot;,&quot;id&quot;:&quot;tasks/asr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/asr&quot;}]},{&quot;title&quot;:&quot;Computer Vision&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Image classification&quot;,&quot;id&quot;:&quot;tasks/image_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/image_classification&quot;},{&quot;title&quot;:&quot;Semantic segmentation&quot;,&quot;id&quot;:&quot;tasks/semantic_segmentation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/semantic_segmentation&quot;},{&quot;title&quot;:&quot;Video classification&quot;,&quot;id&quot;:&quot;tasks/video_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/video_classification&quot;},{&quot;title&quot;:&quot;Object detection&quot;,&quot;id&quot;:&quot;tasks/object_detection&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/object_detection&quot;},{&quot;title&quot;:&quot;Zero-shot object detection&quot;,&quot;id&quot;:&quot;tasks/zero_shot_object_detection&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/zero_shot_object_detection&quot;},{&quot;title&quot;:&quot;Zero-shot image classification&quot;,&quot;id&quot;:&quot;tasks/zero_shot_image_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/zero_shot_image_classification&quot;},{&quot;title&quot;:&quot;Depth estimation&quot;,&quot;id&quot;:&quot;tasks/monocular_depth_estimation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/monocular_depth_estimation&quot;}]},{&quot;title&quot;:&quot;Multimodal&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Image captioning&quot;,&quot;id&quot;:&quot;tasks/image_captioning&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/image_captioning&quot;},{&quot;title&quot;:&quot;Document Question Answering&quot;,&quot;id&quot;:&quot;tasks/document_question_answering&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/document_question_answering&quot;},{&quot;title&quot;:&quot;Visual Question Answering&quot;,&quot;id&quot;:&quot;tasks/visual_question_answering&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/visual_question_answering&quot;},{&quot;title&quot;:&quot;Text to speech&quot;,&quot;id&quot;:&quot;tasks/text-to-speech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/text-to-speech&quot;}]},{&quot;title&quot;:&quot;Generation&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Customize the generation strategy&quot;,&quot;id&quot;:&quot;generation_strategies&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/generation_strategies&quot;}]},{&quot;title&quot;:&quot;Prompting&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Image tasks with IDEFICS&quot;,&quot;id&quot;:&quot;tasks/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/idefics&quot;}]}]},{&quot;title&quot;:&quot;Developer guides&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Use fast tokenizers from 🤗 
Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;fast_tokenizers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/fast_tokenizers&quot;},{&quot;title&quot;:&quot;Run inference with multilingual models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;multilingual&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/multilingual&quot;},{&quot;title&quot;:&quot;Use model-specific APIs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;create_a_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/create_a_model&quot;},{&quot;title&quot;:&quot;Share a custom model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;custom_models&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/custom_models&quot;},{&quot;title&quot;:&quot;Templates for chat models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;chat_templating&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/chat_templating&quot;},{&quot;title&quot;:&quot;Run training on Amazon SageMaker&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;sagemaker&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/sagemaker&quot;},{&quot;title&quot;:&quot;Export to ONNX&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;serialization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/serialization&quot;},{&quot;title&quot;:&quot;Export to TFLite&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tflite&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tflite&quot;},{&quot;title&quot;:&quot;Export to TorchScript&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;torchscript&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/torchscript&quot;},{&quot;title&quot;:&quot;Benchmarks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;benchmarks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/benchmarks&quot;},{&quot;title&quot;:&quot;Notebooks with examples&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;notebooks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/notebooks&quot;},{&quot;title&quot;:&quot;Community resources&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;community&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/community&quot;},{&quot;title&quot;:&quot;Custom Tools and Prompts&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;custom_tools&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/custom_tools&quot;},{&quot;title&quot;:&quot;Troubleshoot&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;troubleshooting&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/troubleshooting&quot;}]},{&quot;title&quot;:&quot;Performance and scalability&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Overview&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;performance&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/performance&quot;},{&quot;title&quot;:&quot;Efficient training techniques&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Methods and tools for efficient training on a single GPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_gpu_one&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_gpu_one&quot;},{&quot;title&quot;:&quot;Multiple GPUs and parallelism&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_gpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_gpu_many&quot;},{&quot;title&quot;:&quot;Efficient 
training on CPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_cpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_cpu&quot;},{&quot;title&quot;:&quot;Distributed CPU training&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_cpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_cpu_many&quot;},{&quot;title&quot;:&quot;Training on TPUs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_tpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_tpu&quot;},{&quot;title&quot;:&quot;Training on TPU with TensorFlow&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_tpu_tf&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_tpu_tf&quot;},{&quot;title&quot;:&quot;Training on Specialized Hardware&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_special&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_special&quot;},{&quot;title&quot;:&quot;Custom hardware for training&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_hardware&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_hardware&quot;},{&quot;title&quot;:&quot;Hyperparameter Search using Trainer API&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;hpo_train&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/hpo_train&quot;}]},{&quot;title&quot;:&quot;Optimizing inference&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Inference on CPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_cpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_cpu&quot;},{&quot;title&quot;:&quot;Inference on one GPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_gpu_one&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_gpu_one&quot;},{&quot;title&quot;:&quot;Inference on many GPUs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_gpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_gpu_many&quot;},{&quot;title&quot;:&quot;Inference on Specialized Hardware&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_special&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_special&quot;}]},{&quot;title&quot;:&quot;Instantiating a big model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;big_models&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/big_models&quot;},{&quot;title&quot;:&quot;Troubleshooting&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;debugging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/debugging&quot;},{&quot;title&quot;:&quot;XLA Integration for TensorFlow Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tf_xla&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tf_xla&quot;},{&quot;title&quot;:&quot;Optimize inference using `torch.compile()`&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_torch_compile&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_torch_compile&quot;}]},{&quot;title&quot;:&quot;Contribute&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;How to contribute to transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;contributing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/contributing&quot;},{&quot;title&quot;:&quot;How to add a model to 🤗 
Transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_new_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_new_model&quot;},{&quot;title&quot;:&quot;How to convert a 🤗 Transformers model to TensorFlow?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_tensorflow_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_tensorflow_model&quot;},{&quot;title&quot;:&quot;How to add a pipeline to 🤗 Transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_new_pipeline&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_new_pipeline&quot;},{&quot;title&quot;:&quot;Testing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;testing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/testing&quot;},{&quot;title&quot;:&quot;Checks on a Pull Request&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pr_checks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pr_checks&quot;}]},{&quot;title&quot;:&quot;Conceptual guides&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Philosophy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;philosophy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/philosophy&quot;},{&quot;title&quot;:&quot;Glossary&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;glossary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/glossary&quot;},{&quot;title&quot;:&quot;What 🤗 Transformers can do&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;task_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/task_summary&quot;},{&quot;title&quot;:&quot;How 🤗 Transformers solve tasks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tasks_explained&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks_explained&quot;},{&quot;title&quot;:&quot;The Transformer model family&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_summary&quot;},{&quot;title&quot;:&quot;Summary of the tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tokenizer_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tokenizer_summary&quot;},{&quot;title&quot;:&quot;Attention mechanisms&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;attention&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/attention&quot;},{&quot;title&quot;:&quot;Padding and truncation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pad_truncation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pad_truncation&quot;},{&quot;title&quot;:&quot;BERTology&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;bertology&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/bertology&quot;},{&quot;title&quot;:&quot;Perplexity of fixed-length models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perplexity&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perplexity&quot;},{&quot;title&quot;:&quot;Pipelines for webserver inference&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pipeline_webserver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pipeline_webserver&quot;},{&quot;title&quot;:&quot;Model training anatomy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_memory_anatomy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_memory_anatomy&quot;}]},{&quot;title&quot;:&quot;API&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Main 
Classes&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Agents and Tools&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/agent&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/agent&quot;},{&quot;title&quot;:&quot;Auto Classes&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/auto&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/auto&quot;},{&quot;title&quot;:&quot;Callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/callback&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/callback&quot;},{&quot;title&quot;:&quot;Configuration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/configuration&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/configuration&quot;},{&quot;title&quot;:&quot;Data Collator&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/data_collator&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/data_collator&quot;},{&quot;title&quot;:&quot;Keras callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/keras_callbacks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/keras_callbacks&quot;},{&quot;title&quot;:&quot;Logging&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/logging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/logging&quot;},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/model&quot;},{&quot;title&quot;:&quot;Text Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/text_generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/text_generation&quot;},{&quot;title&quot;:&quot;ONNX&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/onnx&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/onnx&quot;},{&quot;title&quot;:&quot;Optimization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/optimizer_schedules&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/optimizer_schedules&quot;},{&quot;title&quot;:&quot;Model outputs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/output&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/output&quot;},{&quot;title&quot;:&quot;Pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/pipelines&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/pipelines&quot;},{&quot;title&quot;:&quot;Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/processors&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/processors&quot;},{&quot;title&quot;:&quot;Quantization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/quantization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/quantization&quot;},{&quot;title&quot;:&quot;Tokenizer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/tokenizer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/tokenizer&quot;},{&quot;title&quot;:&quot;Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/trainer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/trainer&quot;},{&quot;title&quot;:&quot;DeepSpeed 
Integration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/deepspeed&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/deepspeed&quot;},{&quot;title&quot;:&quot;Feature Extractor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/feature_extractor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/feature_extractor&quot;},{&quot;title&quot;:&quot;Image Processor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/image_processor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/image_processor&quot;}]},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Text models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALBERT&quot;,&quot;id&quot;:&quot;model_doc/albert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/albert&quot;},{&quot;title&quot;:&quot;BART&quot;,&quot;id&quot;:&quot;model_doc/bart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bart&quot;},{&quot;title&quot;:&quot;BARThez&quot;,&quot;id&quot;:&quot;model_doc/barthez&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/barthez&quot;},{&quot;title&quot;:&quot;BARTpho&quot;,&quot;id&quot;:&quot;model_doc/bartpho&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bartpho&quot;},{&quot;title&quot;:&quot;BERT&quot;,&quot;id&quot;:&quot;model_doc/bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert&quot;},{&quot;title&quot;:&quot;BertGeneration&quot;,&quot;id&quot;:&quot;model_doc/bert-generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-generation&quot;},{&quot;title&quot;:&quot;BertJapanese&quot;,&quot;id&quot;:&quot;model_doc/bert-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-japanese&quot;},{&quot;title&quot;:&quot;Bertweet&quot;,&quot;id&quot;:&quot;model_doc/bertweet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bertweet&quot;},{&quot;title&quot;:&quot;BigBird&quot;,&quot;id&quot;:&quot;model_doc/big_bird&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/big_bird&quot;},{&quot;title&quot;:&quot;BigBirdPegasus&quot;,&quot;id&quot;:&quot;model_doc/bigbird_pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bigbird_pegasus&quot;},{&quot;title&quot;:&quot;BioGpt&quot;,&quot;id&quot;:&quot;model_doc/biogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/biogpt&quot;},{&quot;title&quot;:&quot;Blenderbot&quot;,&quot;id&quot;:&quot;model_doc/blenderbot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot&quot;},{&quot;title&quot;:&quot;Blenderbot 
Small&quot;,&quot;id&quot;:&quot;model_doc/blenderbot-small&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot-small&quot;},{&quot;title&quot;:&quot;BLOOM&quot;,&quot;id&quot;:&quot;model_doc/bloom&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bloom&quot;},{&quot;title&quot;:&quot;BORT&quot;,&quot;id&quot;:&quot;model_doc/bort&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bort&quot;},{&quot;title&quot;:&quot;ByT5&quot;,&quot;id&quot;:&quot;model_doc/byt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/byt5&quot;},{&quot;title&quot;:&quot;CamemBERT&quot;,&quot;id&quot;:&quot;model_doc/camembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/camembert&quot;},{&quot;title&quot;:&quot;CANINE&quot;,&quot;id&quot;:&quot;model_doc/canine&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/canine&quot;},{&quot;title&quot;:&quot;CodeGen&quot;,&quot;id&quot;:&quot;model_doc/codegen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/codegen&quot;},{&quot;title&quot;:&quot;CodeLlama&quot;,&quot;id&quot;:&quot;model_doc/code_llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/code_llama&quot;},{&quot;title&quot;:&quot;ConvBERT&quot;,&quot;id&quot;:&quot;model_doc/convbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convbert&quot;},{&quot;title&quot;:&quot;CPM&quot;,&quot;id&quot;:&quot;model_doc/cpm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpm&quot;},{&quot;title&quot;:&quot;CPMANT&quot;,&quot;id&quot;:&quot;model_doc/cpmant&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpmant&quot;},{&quot;title&quot;:&quot;CTRL&quot;,&quot;id&quot;:&quot;model_doc/ctrl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ctrl&quot;},{&quot;title&quot;:&quot;DeBERTa&quot;,&quot;id&quot;:&quot;model_doc/deberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta&quot;},{&quot;title&quot;:&quot;DeBERTa-v2&quot;,&quot;id&quot;:&quot;model_doc/deberta-v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta-v2&quot;},{&quot;title&quot;:&quot;DialoGPT&quot;,&quot;id&quot;:&quot;model_doc/dialogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dialogpt&quot;},{&quot;title&quot;:&quot;DistilBERT&quot;,&quot;id&quot;:&quot;model_doc/distilbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/distilbert&quot;},{&quot;title&quot;:&quot;DPR&quot;,&quot;id&quot;:&quot;model_doc/dpr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpr&quot;},{&quot;title&quot;:&quot;ELECTRA&quot;,&quot;id&quot;:&quot;model_doc/electra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/electra&quot;},{&quot;title&quot;:&quot;Encoder Decoder 
Models&quot;,&quot;id&quot;:&quot;model_doc/encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encoder-decoder&quot;},{&quot;title&quot;:&quot;ERNIE&quot;,&quot;id&quot;:&quot;model_doc/ernie&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie&quot;},{&quot;title&quot;:&quot;ErnieM&quot;,&quot;id&quot;:&quot;model_doc/ernie_m&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie_m&quot;},{&quot;title&quot;:&quot;ESM&quot;,&quot;id&quot;:&quot;model_doc/esm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/esm&quot;},{&quot;title&quot;:&quot;Falcon&quot;,&quot;id&quot;:&quot;model_doc/falcon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/falcon&quot;},{&quot;title&quot;:&quot;FLAN-T5&quot;,&quot;id&quot;:&quot;model_doc/flan-t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-t5&quot;},{&quot;title&quot;:&quot;FLAN-UL2&quot;,&quot;id&quot;:&quot;model_doc/flan-ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-ul2&quot;},{&quot;title&quot;:&quot;FlauBERT&quot;,&quot;id&quot;:&quot;model_doc/flaubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flaubert&quot;},{&quot;title&quot;:&quot;FNet&quot;,&quot;id&quot;:&quot;model_doc/fnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fnet&quot;},{&quot;title&quot;:&quot;FSMT&quot;,&quot;id&quot;:&quot;model_doc/fsmt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fsmt&quot;},{&quot;title&quot;:&quot;Funnel Transformer&quot;,&quot;id&quot;:&quot;model_doc/funnel&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/funnel&quot;},{&quot;title&quot;:&quot;GPT&quot;,&quot;id&quot;:&quot;model_doc/openai-gpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/openai-gpt&quot;},{&quot;title&quot;:&quot;GPT Neo&quot;,&quot;id&quot;:&quot;model_doc/gpt_neo&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neo&quot;},{&quot;title&quot;:&quot;GPT NeoX&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox&quot;},{&quot;title&quot;:&quot;GPT NeoX Japanese&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox_japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese&quot;},{&quot;title&quot;:&quot;GPT-J&quot;,&quot;id&quot;:&quot;model_doc/gptj&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptj&quot;},{&quot;title&quot;:&quot;GPT2&quot;,&quot;id&quot;:&quot;model_doc/gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt2&quot;},{&quot;title&quot;:&quot;GPTBigCode&quot;,&quot;id&quot;:&quot;model_doc/gpt_bigcode&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode&quot;},{&quot;title&quot;:&quot;GPTSAN 
Japanese&quot;,&quot;id&quot;:&quot;model_doc/gptsan-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese&quot;},{&quot;title&quot;:&quot;GPTSw3&quot;,&quot;id&quot;:&quot;model_doc/gpt-sw3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt-sw3&quot;},{&quot;title&quot;:&quot;HerBERT&quot;,&quot;id&quot;:&quot;model_doc/herbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/herbert&quot;},{&quot;title&quot;:&quot;I-BERT&quot;,&quot;id&quot;:&quot;model_doc/ibert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ibert&quot;},{&quot;title&quot;:&quot;Jukebox&quot;,&quot;id&quot;:&quot;model_doc/jukebox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/jukebox&quot;},{&quot;title&quot;:&quot;LED&quot;,&quot;id&quot;:&quot;model_doc/led&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/led&quot;},{&quot;title&quot;:&quot;LLaMA&quot;,&quot;id&quot;:&quot;model_doc/llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama&quot;},{&quot;title&quot;:&quot;Llama2&quot;,&quot;id&quot;:&quot;model_doc/llama2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama2&quot;},{&quot;title&quot;:&quot;Longformer&quot;,&quot;id&quot;:&quot;model_doc/longformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longformer&quot;},{&quot;title&quot;:&quot;LongT5&quot;,&quot;id&quot;:&quot;model_doc/longt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longt5&quot;},{&quot;title&quot;:&quot;LUKE&quot;,&quot;id&quot;:&quot;model_doc/luke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/luke&quot;},{&quot;title&quot;:&quot;M2M100&quot;,&quot;id&quot;:&quot;model_doc/m2m_100&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/m2m_100&quot;},{&quot;title&quot;:&quot;MarianMT&quot;,&quot;id&quot;:&quot;model_doc/marian&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/marian&quot;},{&quot;title&quot;:&quot;MarkupLM&quot;,&quot;id&quot;:&quot;model_doc/markuplm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/markuplm&quot;},{&quot;title&quot;:&quot;MBart and 
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot;PLBart&quot;,&quot;id&quot;
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/barthez"><!-- HTML_TAG_START -->BARThez<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bartpho"><!-- HTML_TAG_START -->BARTpho<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bert"><!-- HTML_TAG_START -->BERT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bert-generation"><!-- HTML_TAG_START -->BertGeneration<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bert-japanese"><!-- HTML_TAG_START -->BertJapanese<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bertweet"><!-- HTML_TAG_START -->Bertweet<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/big_bird"><!-- HTML_TAG_START -->BigBird<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bigbird_pegasus"><!-- HTML_TAG_START -->BigBirdPegasus<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/biogpt"><!-- HTML_TAG_START -->BioGpt<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/blenderbot"><!-- HTML_TAG_START -->Blenderbot<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/blenderbot-small"><!-- HTML_TAG_START -->Blenderbot Small<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bloom"><!-- HTML_TAG_START -->BLOOM<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" 
href="/docs/transformers/v4.34.0/en/model_doc/bort"><!-- HTML_TAG_START -->BORT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/byt5"><!-- HTML_TAG_START -->ByT5<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/camembert"><!-- HTML_TAG_START -->CamemBERT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/canine"><!-- HTML_TAG_START -->CANINE<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/codegen"><!-- HTML_TAG_START -->CodeGen<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/code_llama"><!-- HTML_TAG_START -->CodeLlama<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/convbert"><!-- HTML_TAG_START -->ConvBERT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/cpm"><!-- HTML_TAG_START -->CPM<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/cpmant"><!-- HTML_TAG_START -->CPMANT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/ctrl"><!-- HTML_TAG_START -->CTRL<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/deberta"><!-- HTML_TAG_START -->DeBERTa<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/deberta-v2"><!-- HTML_TAG_START -->DeBERTa-v2<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/dialogpt"><!-- HTML_TAG_START -->DialoGPT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black 
dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/distilbert"><!-- HTML_TAG_START -->DistilBERT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/dpr"><!-- HTML_TAG_START -->DPR<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/electra"><!-- HTML_TAG_START -->ELECTRA<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/encoder-decoder"><!-- HTML_TAG_START -->Encoder Decoder Models<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/ernie"><!-- HTML_TAG_START -->ERNIE<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/ernie_m"><!-- HTML_TAG_START -->ErnieM<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/esm"><!-- HTML_TAG_START -->ESM<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/falcon"><!-- HTML_TAG_START -->Falcon<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/flan-t5"><!-- HTML_TAG_START -->FLAN-T5<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/flan-ul2"><!-- HTML_TAG_START -->FLAN-UL2<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/flaubert"><!-- HTML_TAG_START -->FlauBERT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/fnet"><!-- HTML_TAG_START -->FNet<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/fsmt"><!-- HTML_TAG_START -->FSMT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 
hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/funnel"><!-- HTML_TAG_START -->Funnel Transformer<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/openai-gpt"><!-- HTML_TAG_START -->GPT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gpt_neo"><!-- HTML_TAG_START -->GPT Neo<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gpt_neox"><!-- HTML_TAG_START -->GPT NeoX<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese"><!-- HTML_TAG_START -->GPT NeoX Japanese<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gptj"><!-- HTML_TAG_START -->GPT-J<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gpt2"><!-- HTML_TAG_START -->GPT2<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode"><!-- HTML_TAG_START -->GPTBigCode<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese"><!-- HTML_TAG_START -->GPTSAN Japanese<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gpt-sw3"><!-- HTML_TAG_START -->GPTSw3<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/herbert"><!-- HTML_TAG_START -->HerBERT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/ibert"><!-- HTML_TAG_START -->I-BERT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/jukebox"><!-- HTML_TAG_START -->Jukebox<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" 
class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/led"><!-- HTML_TAG_START -->LED<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/llama"><!-- HTML_TAG_START -->LLaMA<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/llama2"><!-- HTML_TAG_START -->Llama2<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/longformer"><!-- HTML_TAG_START -->Longformer<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/longt5"><!-- HTML_TAG_START -->LongT5<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/luke"><!-- HTML_TAG_START -->LUKE<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/m2m_100"><!-- HTML_TAG_START -->M2M100<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/marian"><!-- HTML_TAG_START -->MarianMT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/markuplm"><!-- HTML_TAG_START -->MarkupLM<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mbart"><!-- HTML_TAG_START -->MBart and MBart-50<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mega"><!-- HTML_TAG_START -->MEGA<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/megatron-bert"><!-- HTML_TAG_START -->MegatronBERT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2"><!-- HTML_TAG_START -->MegatronGPT2<!-- HTML_TAG_END --> 
</a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mistral"><!-- HTML_TAG_START -->Mistral<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mluke"><!-- HTML_TAG_START -->mLUKE<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mobilebert"><!-- HTML_TAG_START -->MobileBERT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mpnet"><!-- HTML_TAG_START -->MPNet<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mpt"><!-- HTML_TAG_START -->MPT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mra"><!-- HTML_TAG_START -->MRA<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mt5"><!-- HTML_TAG_START -->MT5<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mvp"><!-- HTML_TAG_START -->MVP<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/nezha"><!-- HTML_TAG_START -->NEZHA<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/nllb"><!-- HTML_TAG_START -->NLLB<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/nllb-moe"><!-- HTML_TAG_START -->NLLB-MoE<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/nystromformer"><!-- HTML_TAG_START -->Nyströmformer<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/open-llama"><!-- HTML_TAG_START -->Open-Llama<!-- HTML_TAG_END --> 
</a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/opt"><!-- HTML_TAG_START -->OPT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/pegasus"><!-- HTML_TAG_START -->Pegasus<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/pegasus_x"><!-- HTML_TAG_START -->PEGASUS-X<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/persimmon"><!-- HTML_TAG_START -->Persimmon<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/phobert"><!-- HTML_TAG_START -->PhoBERT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/plbart"><!-- HTML_TAG_START -->PLBart<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/prophetnet"><!-- HTML_TAG_START -->ProphetNet<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/qdqbert"><!-- HTML_TAG_START -->QDQBert<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/rag"><!-- HTML_TAG_START -->RAG<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/realm"><!-- HTML_TAG_START -->REALM<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/reformer"><!-- HTML_TAG_START -->Reformer<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/rembert"><!-- HTML_TAG_START -->RemBERT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/retribert"><!-- HTML_TAG_START 
-->RetriBERT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/roberta"><!-- HTML_TAG_START -->RoBERTa<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm"><!-- HTML_TAG_START -->RoBERTa-PreLayerNorm<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/roc_bert"><!-- HTML_TAG_START -->RoCBert<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/roformer"><!-- HTML_TAG_START -->RoFormer<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/rwkv"><!-- HTML_TAG_START -->RWKV<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/splinter"><!-- HTML_TAG_START -->Splinter<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/squeezebert"><!-- HTML_TAG_START -->SqueezeBERT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/switch_transformers"><!-- HTML_TAG_START -->SwitchTransformers<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/t5"><!-- HTML_TAG_START -->T5<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/t5v1.1"><!-- HTML_TAG_START -->T5v1.1<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/tapex"><!-- HTML_TAG_START -->TAPEX<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/transfo-xl"><!-- HTML_TAG_START -->Transformer XL<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" 
href="/docs/transformers/v4.34.0/en/model_doc/ul2"><!-- HTML_TAG_START -->UL2<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/umt5"><!-- HTML_TAG_START -->UMT5<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xmod"><!-- HTML_TAG_START -->X-MOD<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xglm"><!-- HTML_TAG_START -->XGLM<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm"><!-- HTML_TAG_START -->XLM<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet"><!-- HTML_TAG_START -->XLM-ProphetNet<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta"><!-- HTML_TAG_START -->XLM-RoBERTa<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl"><!-- HTML_TAG_START -->XLM-RoBERTa-XL<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm-v"><!-- HTML_TAG_START -->XLM-V<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="rounded-xl bg-gradient-to-br from-black to-gray-900 py-1 pr-2 pl-2 text-white first:mt-1 last:mb-4 dark:from-gray-800 dark:to-gray-900 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlnet"><!-- HTML_TAG_START -->XLNet<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/yoso"><!-- HTML_TAG_START -->YOSO<!-- HTML_TAG_END --> </a> </div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Vision models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span 
class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Audio models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Multimodal models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Reinforcement learning models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Time series models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Graph models<!-- HTML_TAG_END --></span> </span></span> </div></div> </div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Internal Helpers<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/modeling_utils"><!-- HTML_TAG_START -->Custom Layers and Utilities<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/pipelines_utils"><!-- HTML_TAG_START -->Utilities for pipelines<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/tokenization_utils"><!-- HTML_TAG_START -->Utilities for Tokenizers<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/trainer_utils"><!-- HTML_TAG_START -->Utilities for Trainer<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 
first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/generation_utils"><!-- HTML_TAG_START -->Utilities for Generation<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/image_processing_utils"><!-- HTML_TAG_START -->Utilities for Image Processors<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/audio_utils"><!-- HTML_TAG_START -->Utilities for Audio processing<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/file_utils"><!-- HTML_TAG_START -->General Utilities<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/time_series_utils"><!-- HTML_TAG_START -->Utilities for Time Series<!-- HTML_TAG_END --> </a> </div> </div></nav></div></div></div> <div class="z-1 min-w-0 flex-1"> <div class="px-6 pt-6 md:px-12 md:pt-16 md:pb-16"><div class="max-w-4xl mx-auto mb-10"><div class="relative overflow-hidden rounded-xl bg-gradient-to-br from-orange-300/10 py-5 px-4 ring-1 ring-orange-100/70 md:px-6 md:py-8"><img alt="Hugging Face's logo" class="absolute -right-6 -bottom-6 w-28 -rotate-45 md:hidden" src="/front/assets/huggingface_logo-noborder.svg"> <div class="mb-2 text-2xl font-bold dark:text-gray-200 md:mb-0">Join the Hugging Face community</div> <p class="mb-4 text-lg text-gray-400 dark:text-gray-300 md:mb-8">and get access to the augmented documentation experience </p> <div class="mb-8 hidden space-y-4 md:block xl:flex xl:space-y-0 xl:space-x-6"><div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-indigo-100 to-indigo-100/20 dark:to-indigo-100"><svg class="text-indigo-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Collaborate on models, datasets and Spaces </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-orange-100 to-orange-100/20 dark:to-orange-50"><svg xmlns="http://www.w3.org/2000/svg" 
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" class="text-xl text-yellow-400" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M11 15H6l7-14v8h5l-7 14v-8z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Faster examples with accelerated inference </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-gray-500/10 to-gray-500/5"><svg class="text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M14.9804 3C14.9217 3.0002 14.8631 3.00555 14.8054 3.016C11.622 3.58252 8.76073 5.30669 6.77248 7.85653C4.78422 10.4064 3.80955 13.6016 4.03612 16.8271C4.26268 20.0525 5.67447 23.0801 7.99967 25.327C10.3249 27.5738 13.3991 28.8811 16.6304 28.997C16.7944 29.003 16.9584 28.997 17.1204 28.997C19.2193 28.9984 21.2877 28.4943 23.1507 27.5274C25.0137 26.5605 26.6164 25.1592 27.8234 23.442C27.9212 23.294 27.9783 23.1229 27.9889 22.9458C27.9995 22.7687 27.9633 22.592 27.884 22.4333C27.8046 22.2747 27.6848 22.1397 27.5367 22.0421C27.3887 21.9444 27.2175 21.8875 27.0404 21.877C25.0426 21.7017 23.112 21.0693 21.3976 20.0288C19.6832 18.9884 18.231 17.5676 17.1533 15.8764C16.0756 14.1852 15.4011 12.2688 15.1822 10.2754C14.9632 8.28193 15.2055 6.26484 15.8904 4.38C15.9486 4.22913 15.97 4.06652 15.9527 3.90572C15.9354 3.74492 15.8799 3.59059 15.7909 3.45557C15.7019 3.32055 15.5819 3.20877 15.4409 3.12952C15.2999 3.05028 15.142 3.00587 14.9804 3Z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Switch between documentation themes </div></div></div> <div class="flex items-center space-x-2.5"><a href="/join"><button class="rounded-lg bg-white bg-gradient-to-br from-gray-100/20 to-gray-200/60 py-1.5 px-5 font-semibold text-gray-700 shadow-sm ring-1 ring-gray-300/60 hover:to-gray-100/70 hover:ring-gray-300/30 active:shadow-inner">Sign Up</button></a> <p class="text-gray-500 dark:text-gray-300">to get started</p></div></div></div> <div class="prose-doc prose relative mx-auto max-w-4xl break-words"> <p></p> <h1 class="relative group"><a id="xlnet" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#xlnet"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1k6bowc">XLNet</span></h1> <div class="flex flex-wrap space-x-1" 
data-svelte-h="svelte-1jyfy8r"><a href="https://huggingface.co/models?filter=xlnet"><img alt="Models" src="https://img.shields.io/badge/All_model_pages-xlnet-blueviolet"></a> <a href="https://huggingface.co/spaces/docs-demos/xlnet-base-cased"><img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"></a></div> <h2 class="relative group"><a id="overview" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#overview"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1jsw1pg">Overview</span></h2> <p data-svelte-h="svelte-y0ew">The XLNet model was proposed in <a href="https://arxiv.org/abs/1906.08237" rel="nofollow">XLNet: Generalized Autoregressive Pretraining for Language Understanding</a> by Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le. XLnet is an extension of the Transformer-XL model pre-trained using an autoregressive method to learn bidirectional contexts by maximizing the expected likelihood over all permutations of the input sequence factorization order.</p> <p data-svelte-h="svelte-vfdo9a">The abstract from the paper is the following:</p> <p data-svelte-h="svelte-ebukil"><em>With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling. However, relying on corrupting the input with masks, BERT neglects dependency between the masked positions and suffers from a pretrain-finetune discrepancy. In light of these pros and cons, we propose XLNet, a generalized autoregressive pretraining method that (1) enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order and (2) overcomes the limitations of BERT thanks to its autoregressive formulation. Furthermore, XLNet integrates ideas from Transformer-XL, the state-of-the-art autoregressive model, into pretraining. 
Tips:

- The specific attention pattern can be controlled at training and test time using the `perm_mask` input.
- Due to the difficulty of training a fully auto-regressive model over various factorization orders, XLNet is pretrained using only a subset of the output tokens as targets, which are selected with the `target_mapping` input.
- To use XLNet for sequential decoding (i.e. not in a fully bi-directional setting), use the `perm_mask` and `target_mapping` inputs to control the attention span and the outputs (see the examples in _examples/pytorch/text-generation/run_generation.py_ and the sketch after this list).
- XLNet is one of the few models that has no sequence length limit.
- XLNet is not a traditional autoregressive model, but uses a training strategy that builds on that idea. It permutes the tokens in the sentence, then allows the model to use the last n tokens to predict token n+1. Since this is all done with a mask, the sentence is actually fed into the model in the right order, but instead of masking the first n tokens for n+1, XLNet uses a mask that hides the previous tokens in some given permutation of 1, …, sequence length.
- XLNet also uses the same recurrence mechanism as Transformer-XL to build long-term dependencies.
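The following is a minimal sketch of the `perm_mask`/`target_mapping` mechanism described above, assuming the `xlnet-base-cased` checkpoint and an arbitrary example sentence (both are illustrative choices, not prescribed by this page):

```python
import torch
from transformers import AutoTokenizer, XLNetLMHeadModel

tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")  # illustrative checkpoint
model = XLNetLMHeadModel.from_pretrained("xlnet-base-cased")

input_ids = tokenizer("Hello, my dog is very cute", add_special_tokens=False, return_tensors="pt").input_ids
seq_len = input_ids.shape[1]

# perm_mask[b, i, j] = 1 means token i may NOT attend to token j.
# Hiding the last position from every token turns it into the token to be predicted.
perm_mask = torch.zeros((1, seq_len, seq_len), dtype=torch.float)
perm_mask[:, :, -1] = 1.0

# target_mapping selects the positions for which logits are returned: only the last one here.
target_mapping = torch.zeros((1, 1, seq_len), dtype=torch.float)
target_mapping[0, 0, -1] = 1.0

outputs = model(input_ids, perm_mask=perm_mask, target_mapping=target_mapping)
next_token_logits = outputs.logits  # shape (1, 1, vocab_size): prediction for the hidden last position
```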
This model was contributed by [thomwolf](https://huggingface.co/thomwolf). The original code can be found [here](https://github.com/zihangdai/xlnet/).

## Documentation resources

- [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)
- [Causal language modeling task guide](../tasks/language_modeling)
- [Multiple choice task guide](../tasks/multiple_choice)

## XLNetConfig

### class transformers.XLNetConfig
data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">vocab_size<span class="opacity-60"> = 32000</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">d_model<span class="opacity-60"> = 1024</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">n_layer<span class="opacity-60"> = 24</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">n_head<span class="opacity-60"> = 16</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">d_inner<span class="opacity-60"> = 4096</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">ff_activation<span class="opacity-60"> = 'gelu'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">untie_r<span class="opacity-60"> = True</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attn_type<span class="opacity-60"> = 'bi'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">initializer_range<span class="opacity-60"> = 0.02</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">layer_norm_eps<span class="opacity-60"> = 1e-12</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">dropout<span class="opacity-60"> = 0.1</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">mem_len<span class="opacity-60"> = 512</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">reuse_len<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">use_mems_eval<span class="opacity-60"> = True</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">use_mems_train<span class="opacity-60"> = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">bi_data<span class="opacity-60"> = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">clamp_len<span class="opacity-60"> = -1</span></span></span><span class="comma cursor-pointer"><span 
class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">same_length<span class="opacity-60"> = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">summary_type<span class="opacity-60"> = 'last'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">summary_use_proj<span class="opacity-60"> = True</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">summary_activation<span class="opacity-60"> = 'tanh'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">summary_last_dropout<span class="opacity-60"> = 0.1</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">start_n_top<span class="opacity-60"> = 5</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">end_n_top<span class="opacity-60"> = 5</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">pad_token_id<span class="opacity-60"> = 5</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">bos_token_id<span class="opacity-60"> = 1</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">eos_token_id<span class="opacity-60"> = 2</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 25 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetConfig.vocab_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetConfig.vocab_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 
- **vocab_size** (`int`, *optional*, defaults to 32000) — Vocabulary size of the XLNet model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [XLNetModel](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetModel) or [TFXLNetModel](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.TFXLNetModel).
- **d_model** (`int`, *optional*, defaults to 1024) — Dimensionality of the encoder layers and the pooler layer.
- **n_layer** (`int`, *optional*, defaults to 24) — Number of hidden layers in the Transformer encoder.
- **n_head** (`int`, *optional*, defaults to 16) — Number of attention heads for each attention layer in the Transformer encoder.
- **d_inner** (`int`, *optional*, defaults to 4096) — Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder.
- **ff_activation** (`str` or `Callable`, *optional*, defaults to `"gelu"`) — The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"silu"` and `"gelu_new"` are supported.
- **untie_r** (`bool`, *optional*, defaults to `True`) — Whether or not to untie relative position biases.
- **attn_type** (`str`, *optional*, defaults to `"bi"`) — The attention type used by the model. Set `"bi"` for XLNet, `"uni"` for Transformer-XL.
- **initializer_range** (`float`, *optional*, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- **layer_norm_eps** (`float`, *optional*, defaults to 1e-12) — The epsilon used by the layer normalization layers.
- **dropout** (`float`, *optional*, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
- **mem_len** (`int` or `None`, *optional*) — The number of tokens to cache. The key/value pairs that have already been pre-computed in a previous forward pass won't be re-computed. See the [quickstart](https://huggingface.co/transformers/quickstart.html#using-the-past) for more information.
- **reuse_len** (`int`, *optional*) — The number of tokens in the current batch to be cached and reused in the future.
- **bi_data** (`bool`, *optional*, defaults to `False`) — Whether or not to use a bidirectional input pipeline. Usually set to `True` during pretraining and `False` during finetuning.
- **clamp_len** (`int`, *optional*, defaults to -1) — Clamp all relative distances larger than `clamp_len`. Setting this attribute to -1 means no clamping.
- **same_length** (`bool`, *optional*, defaults to `False`) — Whether or not to use the same attention length for each token.
- **summary_type** (`str`, *optional*, defaults to `"last"`) — Argument used when doing sequence summary. Used in the sequence classification and multiple choice models. Has to be one of the following options:
  - `"last"`: Take the last token hidden state (like XLNet).
  - `"first"`: Take the first token hidden state (like BERT).
  - `"mean"`: Take the mean of all tokens hidden states.
  - `"cls_index"`: Supply a Tensor of classification token position (like GPT/GPT-2).
  - `"attn"`: Not implemented now, use multi-head attention.
- **summary_use_proj** (`bool`, *optional*, defaults to `True`) — Argument used when doing sequence summary. Used in the sequence classification and multiple choice models. Whether or not to add a projection after the vector extraction.
- **summary_activation** (`str`, *optional*) — Argument used when doing sequence summary. Used in the sequence classification and multiple choice models. Pass `"tanh"` for a tanh activation to the output; any other value will result in no activation.
- **summary_proj_to_labels** (`bool`, *optional*, defaults to `True`) — Used in the sequence classification and multiple choice models. Whether the projection outputs should have `config.num_labels` or `config.hidden_size` classes.
- **summary_last_dropout** (`float`, *optional*, defaults to 0.1) — Used in the sequence classification and multiple choice models. The dropout ratio to be used after the projection and activation.
- **start_n_top** (`int`, *optional*, defaults to 5) — Used in the SQuAD evaluation script.
- **end_n_top** (`int`, *optional*, defaults to 5) — Used in the SQuAD evaluation script.
rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetConfig.use_mems_eval" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetConfig.use_mems_eval"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>use_mems_eval</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether or not the model should make use of the recurrent memory mechanism in evaluation mode.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetConfig.use_mems_train" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetConfig.use_mems_train"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>use_mems_train</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether or not the model should make use of the recurrent memory mechanism in train mode.<p></p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"> <p>For pretraining, it is recommended to set <code>use_mems_train</code> to <code>True</code>. For fine-tuning, it is recommended to set <code>use_mems_train</code> to <code>False</code> as discussed <a href="https://github.com/zihangdai/xlnet/issues/41#issuecomment-505102587" rel="nofollow">here</a>. 
This is the configuration class to store the configuration of an [XLNetModel](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetModel) or a [TFXLNetModel](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.TFXLNetModel). It is used to instantiate an XLNet model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the [xlnet-large-cased](https://huggingface.co/xlnet-large-cased) architecture.

Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information.

Examples:
style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> XLNetConfig, XLNetModel <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># Initializing a XLNet configuration</span> <span class="hljs-meta">&gt;&gt;&gt; </span>configuration = XLNetConfig() <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># Initializing a model (with random weights) from the configuration</span> <span class="hljs-meta">&gt;&gt;&gt; </span>model = XLNetModel(configuration) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># Accessing the model configuration</span> <span class="hljs-meta">&gt;&gt;&gt; </span>configuration = model.config</pre></div></div></div> <h2 class="relative group"><a id="transformers.XLNetTokenizer" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetTokenizer"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-vs31nx">XLNetTokenizer</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XLNetTokenizer"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">XLNetTokenizer</span></span></h3> <a id="transformers.XLNetTokenizer" class="header-link invisible with-hover:group-hover:visible pr-2" 
href="#transformers.XLNetTokenizer"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/tokenization_xlnet.py#L53" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">vocab_file<span class="opacity-60"></span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">do_lower_case<span class="opacity-60"> = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">remove_space<span class="opacity-60"> = True</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">keep_accents<span class="opacity-60"> = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">bos_token<span class="opacity-60"> = '&lt;s&gt;'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">eos_token<span class="opacity-60"> = '&lt;/s&gt;'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">unk_token<span class="opacity-60"> = '&lt;unk&gt;'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">sep_token<span class="opacity-60"> = '&lt;sep&gt;'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">pad_token<span class="opacity-60"> = '&lt;pad&gt;'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">cls_token<span class="opacity-60"> = '&lt;cls&gt;'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">mask_token<span class="opacity-60"> = '&lt;mask&gt;'</span></span></span><span class="comma cursor-pointer"><span 
class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">additional_special_tokens<span class="opacity-60"> = ['&lt;eop&gt;', '&lt;eod&gt;']</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">sp_model_kwargs<span class="opacity-60">: typing.Union[typing.Dict[str, typing.Any], NoneType] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 14 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetTokenizer.vocab_file" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetTokenizer.vocab_file"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>vocab_file</strong> (<code>str</code>) — <a href="https://github.com/google/sentencepiece" rel="nofollow">SentencePiece</a> file (generally has a .spm extension) that contains the vocabulary necessary to instantiate a tokenizer.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetTokenizer.do_lower_case" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetTokenizer.do_lower_case"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 
- **do_lower_case** (`bool`, *optional*, defaults to `False`) — Whether to lowercase the input when tokenizing.
- **remove_space** (`bool`, *optional*, defaults to `True`) — Whether to strip the text when tokenizing (removing excess spaces before and after the string).
- **keep_accents** (`bool`, *optional*, defaults to `False`) — Whether to keep accents when tokenizing.
- **bos_token** (`str`, *optional*, defaults to `"<s>"`) — The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.

  When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the `cls_token`.
- **eos_token** (`str`, *optional*, defaults to `"</s>"`) — The end of sequence token.

  When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the `sep_token`.
- **unk_token** (`str`, *optional*, defaults to `"<unk>"`) — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
- **sep_token** (`str`, *optional*, defaults to `"<sep>"`) — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering.
It is also used as the last token of a sequence built with special tokens.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetTokenizer.pad_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetTokenizer.pad_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>pad_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;pad&gt;"</code>) — The token used for padding, for example when batching sequences of different lengths.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetTokenizer.cls_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetTokenizer.cls_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>cls_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;cls&gt;"</code>) — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). 
It is the first token of the sequence when built with special tokens.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetTokenizer.mask_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetTokenizer.mask_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>mask_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;mask&gt;"</code>) — The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetTokenizer.additional_special_tokens" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetTokenizer.additional_special_tokens"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>additional_special_tokens</strong> (<code>List[str]</code>, <em>optional</em>, defaults to <code>["&lt;eop&gt;", "&lt;eod&gt;"]</code>) — Additional special tokens used by the tokenizer.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetTokenizer.sp_model_kwargs" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetTokenizer.sp_model_kwargs"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 
67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>sp_model_kwargs</strong> (<code>dict</code>, <em>optional</em>) — Will be passed to the <code>SentencePieceProcessor.__init__()</code> method. The <a href="https://github.com/google/sentencepiece/tree/master/python" rel="nofollow">Python wrapper for SentencePiece</a> can be used, among other things, to set:<p></p> <ul> <li> <p><code>enable_sampling</code>: Enable subword regularization.</p> </li> <li> <p><code>nbest_size</code>: Sampling parameters for unigram. Invalid for BPE-Dropout.</p> <ul> <li><code>nbest_size = {0,1}</code>: No sampling is performed.</li> <li><code>nbest_size &gt; 1</code>: samples from the nbest_size results.</li> <li><code>nbest_size &lt; 0</code>: assuming that nbest_size is infinite and samples from the all hypothesis (lattice) using forward-filtering-and-backward-sampling algorithm.</li> </ul> </li> <li> <p><code>alpha</code>: Smoothing parameter for unigram sampling, and dropout probability of merge operations for BPE-dropout.</p> </li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetTokenizer.sp_model" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetTokenizer.sp_model"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>sp_model</strong> (<code>SentencePieceProcessor</code>) — The <em>SentencePiece</em> processor that is used for every conversion (string, tokens and IDs).</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1dqc3xp">Construct an XLNet tokenizer. Based on <a href="https://github.com/google/sentencepiece" rel="nofollow">SentencePiece</a>.</p> <p data-svelte-h="svelte-1b0fouy">This tokenizer inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer">PreTrainedTokenizer</a> which contains most of the main methods. 
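As a quick, minimal sketch (not part of the reference itself; the `xlnet-base-cased` checkpoint and the example sentence are assumptions), loading and using the tokenizer looks like this:

```python
from transformers import XLNetTokenizer

# Assumes the `xlnet-base-cased` checkpoint is available on the Hugging Face Hub
tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")

# Calling the tokenizer appends the XLNet special tokens (<sep>, <cls>) automatically
encoding = tokenizer("Hello, how are you?")
print(encoding["input_ids"])
print(tokenizer.convert_ids_to_tokens(encoding["input_ids"]))
```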
#### build_inputs_with_special_tokens

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/tokenization_xlnet.py#L298)

( token_ids_0: List[int], token_ids_1: Optional[List[int]] = None ) → `List[int]`

Parameters

- **token_ids_0** (`List[int]`) — List of IDs to which the special tokens will be added.
- **token_ids_1** (`List[int]`, *optional*) — Optional second list of IDs for sequence pairs.

Returns: `List[int]` — List of [input IDs](../glossary#input-ids) with the appropriate special tokens.

Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. An XLNet sequence has the following format:

- single sequence: `X <sep> <cls>`
- pair of sequences: `A <sep> B <sep> <cls>`
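The sketch below (an illustrative addition; the checkpoint and sentences are placeholders) shows how the two formats above are produced:

```python
from transformers import XLNetTokenizer

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")  # assumed checkpoint

ids_a = tokenizer.encode("How are you?", add_special_tokens=False)
ids_b = tokenizer.encode("I am fine.", add_special_tokens=False)

# Single sequence: X <sep> <cls>
single = tokenizer.build_inputs_with_special_tokens(ids_a)
# Pair of sequences: A <sep> B <sep> <cls>
pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)

# The last two tokens of either result are the separator and classifier tokens
print(tokenizer.convert_ids_to_tokens(single)[-2:])  # ['<sep>', '<cls>']
```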
<span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_0<span class="opacity-60">: typing.List[int]</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_1<span class="opacity-60">: typing.Optional[typing.List[int]] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">already_has_special_tokens<span class="opacity-60">: bool = False</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><code>List[int]</code></span></span></p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetTokenizer.get_special_tokens_mask.token_ids_0" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetTokenizer.get_special_tokens_mask.token_ids_0"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_ids_0</strong> (<code>List[int]</code>) — List of IDs.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetTokenizer.get_special_tokens_mask.token_ids_1" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetTokenizer.get_special_tokens_mask.token_ids_1"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 
0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_ids_1</strong> (<code>List[int]</code>, <em>optional</em>) — Optional second list of IDs for sequence pairs.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetTokenizer.get_special_tokens_mask.already_has_special_tokens" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetTokenizer.get_special_tokens_mask.already_has_special_tokens"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>already_has_special_tokens</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether or not the token list is already formatted with special tokens for the model.</span></span> </li></ul> <div id="transformers.XLNetTokenizer.get_special_tokens_mask.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><code>List[int]</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.</p> </p> </div></div> <p data-svelte-h="svelte-1f4f5kp">Retrieve sequence ids from a token list that has no special tokens added. 
This method is called when adding special tokens using the tokenizer `prepare_for_model` method.
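Below is a small illustrative sketch (not from the original documentation; checkpoint and sentence are assumptions) of the mask returned for a sequence without special tokens:

```python
from transformers import XLNetTokenizer

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")  # assumed checkpoint

ids = tokenizer.encode("How are you?", add_special_tokens=False)

# For IDs without special tokens, the mask marks the positions where
# <sep> and <cls> would be appended: [0, ..., 0, 1, 1]
mask = tokenizer.get_special_tokens_mask(ids)
print(mask)
```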
#### create_token_type_ids_from_sequences

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/tokenization_xlnet.py#L351)

( token_ids_0: List[int], token_ids_1: Optional[List[int]] = None ) → `List[int]`

Parameters

- **token_ids_0** (`List[int]`) — List of IDs.
- **token_ids_1** (`List[int]`, *optional*) — Optional second list of IDs for sequence pairs.

Returns: `List[int]` — List of [token type IDs](../glossary#token-type-ids) according to the given sequence(s).

Create a mask from the two sequences passed to be used in a sequence-pair classification task. An XLNet sequence pair mask has the following format:

```
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1
| first sequence    | second sequence |
```

If `token_ids_1` is `None`, this method only returns the first portion of the mask (0s).
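As an illustrative sketch (the checkpoint and sentences are assumptions, not part of the original reference), the pattern above can be reproduced like this:

```python
from transformers import XLNetTokenizer

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")  # assumed checkpoint

ids_a = tokenizer.encode("How are you?", add_special_tokens=False)
ids_b = tokenizer.encode("I am fine.", add_special_tokens=False)

# 0s cover the first sequence, 1s the second, following the pattern shown above
token_type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)
print(token_type_ids)

# Calling the tokenizer on a sentence pair yields the same token_type_ids
print(tokenizer("How are you?", "I am fine.")["token_type_ids"])
```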
#### save_vocabulary

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/tokenization_xlnet.py#L381)

( save_directory: str, filename_prefix: Optional[str] = None )
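A brief illustrative sketch (the output directory is a placeholder and the checkpoint is assumed): `save_vocabulary` writes the SentencePiece vocabulary file into an existing directory and returns the path(s) of the saved file(s).

```python
import os

from transformers import XLNetTokenizer

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")  # assumed checkpoint

out_dir = "./xlnet-tokenizer"  # placeholder directory; it must exist before saving
os.makedirs(out_dir, exist_ok=True)

# Prints the path(s) of the saved SentencePiece vocabulary file(s)
print(tokenizer.save_vocabulary(out_dir))
```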
## XLNetTokenizerFast

### class transformers.XLNetTokenizerFast

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/tokenization_xlnet_fast.py#L63)
data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">vocab_file<span class="opacity-60"> = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">tokenizer_file<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">do_lower_case<span class="opacity-60"> = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">remove_space<span class="opacity-60"> = True</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">keep_accents<span class="opacity-60"> = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">bos_token<span class="opacity-60"> = '&lt;s&gt;'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">eos_token<span class="opacity-60"> = '&lt;/s&gt;'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">unk_token<span class="opacity-60"> = '&lt;unk&gt;'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">sep_token<span class="opacity-60"> = '&lt;sep&gt;'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">pad_token<span class="opacity-60"> = '&lt;pad&gt;'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">cls_token<span class="opacity-60"> = '&lt;cls&gt;'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">mask_token<span class="opacity-60"> = '&lt;mask&gt;'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">additional_special_tokens<span class="opacity-60"> = ['&lt;eop&gt;', '&lt;eod&gt;']</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black 
hover:ring-2">Expand 13 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetTokenizerFast.vocab_file" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetTokenizerFast.vocab_file"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>vocab_file</strong> (<code>str</code>) — <a href="https://github.com/google/sentencepiece" rel="nofollow">SentencePiece</a> file (generally has a .spm extension) that contains the vocabulary necessary to instantiate a tokenizer.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetTokenizerFast.do_lower_case" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetTokenizerFast.do_lower_case"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>do_lower_case</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether to lowercase the input when tokenizing.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetTokenizerFast.remove_space" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetTokenizerFast.remove_space"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid 
meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>remove_space</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether to strip the text when tokenizing (removing excess spaces before and after the string).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetTokenizerFast.keep_accents" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetTokenizerFast.keep_accents"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>keep_accents</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether to keep accents when tokenizing.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetTokenizerFast.bos_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetTokenizerFast.bos_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>bos_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;s&gt;"</code>) — The beginning of sequence token that was used during pretraining. 
Can be used a sequence classifier token.<p></p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"> <p>When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the <code>cls_token</code>.</p> </div></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetTokenizerFast.eos_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetTokenizerFast.eos_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>eos_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;/s&gt;"</code>) — The end of sequence token.<p></p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"> <p>When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the <code>sep_token</code>.</p> </div></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetTokenizerFast.unk_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetTokenizerFast.unk_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>unk_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;unk&gt;"</code>) — The unknown token. 
A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetTokenizerFast.sep_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetTokenizerFast.sep_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>sep_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;sep&gt;"</code>) — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetTokenizerFast.pad_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetTokenizerFast.pad_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>pad_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;pad&gt;"</code>) — The token used for padding, for example when batching sequences of different lengths.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetTokenizerFast.cls_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetTokenizerFast.cls_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" 
preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>cls_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;cls&gt;"</code>) — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetTokenizerFast.mask_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetTokenizerFast.mask_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>mask_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;mask&gt;"</code>) — The token used for masking values. This is the token used when training this model with masked language modeling. 
This is the token which the model will try to predict.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetTokenizerFast.additional_special_tokens" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetTokenizerFast.additional_special_tokens"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>additional_special_tokens</strong> (<code>List[str]</code>, <em>optional</em>, defaults to <code>["&lt;eop&gt;", "&lt;eod&gt;"]</code>) — Additional special tokens used by the tokenizer.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetTokenizerFast.sp_model" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetTokenizerFast.sp_model"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>sp_model</strong> (<code>SentencePieceProcessor</code>) — The <em>SentencePiece</em> processor that is used for every conversion (string, tokens and IDs).</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1jtbcee">Construct a “fast” XLNet tokenizer (backed by HuggingFace’s <em>tokenizers</em> library). Based on <a href="https://huggingface.co/docs/tokenizers/python/latest/components.html?highlight=unigram#models" rel="nofollow">Unigram</a>.</p> <p data-svelte-h="svelte-ttxvs6">This tokenizer inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast">PreTrainedTokenizerFast</a> which contains most of the main methods. 
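As a quick orientation before the method reference below, here is a minimal usage sketch. It is not part of the original API reference, and the `xlnet-base-cased` checkpoint is only an illustrative choice.

```python
from transformers import XLNetTokenizerFast

# Illustrative checkpoint; any XLNet checkpoint with a SentencePiece vocabulary works the same way.
tokenizer = XLNetTokenizerFast.from_pretrained("xlnet-base-cased")

encoding = tokenizer("Hello world", return_tensors="pt")
print(encoding["input_ids"])                       # XLNet appends <sep> and <cls> at the end
print(tokenizer.decode(encoding["input_ids"][0]))  # round-trip back to text, special tokens included
```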
#### build_inputs_with_special_tokens

`( token_ids_0: typing.List[int], token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int]`

**Parameters:**

- **token_ids_0** (`List[int]`) — List of IDs to which the special tokens will be added.
- **token_ids_1** (`List[int]`, *optional*) — Optional second list of IDs for sequence pairs.

**Returns:** `List[int]` — List of [input IDs](../glossary#input-ids) with the appropriate special tokens.

Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. An XLNet sequence has the following format:

- single sequence: `X <sep> <cls>`
- pair of sequences: `A <sep> B <sep> <cls>`
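To make these formats concrete, the sketch below (reusing the illustrative `xlnet-base-cased` checkpoint) calls the method on IDs encoded without special tokens:

```python
from transformers import XLNetTokenizerFast

tokenizer = XLNetTokenizerFast.from_pretrained("xlnet-base-cased")

ids_a = tokenizer.encode("How are you?", add_special_tokens=False)
ids_b = tokenizer.encode("Fine, thanks.", add_special_tokens=False)

single = tokenizer.build_inputs_with_special_tokens(ids_a)        # X <sep> <cls>
pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)   # A <sep> B <sep> <cls>

# In both cases the sequence ends with <sep> followed by <cls>.
assert single[-2:] == [tokenizer.sep_token_id, tokenizer.cls_token_id]
assert pair[-2:] == [tokenizer.sep_token_id, tokenizer.cls_token_id]
```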
data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_0<span class="opacity-60">: typing.List[int]</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_1<span class="opacity-60">: typing.Optional[typing.List[int]] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><code>List[int]</code></span></span></p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetTokenizerFast.create_token_type_ids_from_sequences.token_ids_0" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetTokenizerFast.create_token_type_ids_from_sequences.token_ids_0"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_ids_0</strong> (<code>List[int]</code>) — List of IDs.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetTokenizerFast.create_token_type_ids_from_sequences.token_ids_1" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetTokenizerFast.create_token_type_ids_from_sequences.token_ids_1"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 
28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_ids_1</strong> (<code>List[int]</code>, <em>optional</em>) — Optional second list of IDs for sequence pairs.</span></span> </li></ul> <div id="transformers.XLNetTokenizerFast.create_token_type_ids_from_sequences.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><code>List[int]</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>List of <a href="../glossary#token-type-ids">token type IDs</a> according to the given sequence(s).</p> </p> </div></div> <p data-svelte-h="svelte-1nwvqaq">Create a mask from the two sequences passed to be used in a sequence-pair classification task. An XLNet</p> <div class="relative group rounded-md"><a id="transformers.XLNetTokenizerFast.create_token_type_ids_from_sequences.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetTokenizerFast.create_token_type_ids_from_sequences.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-16klr56">sequence pair mask has the following format:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class="">0<span class="hljs-number"> 0 </span>0<span class="hljs-number"> 0 </span>0<span class="hljs-number"> 0 </span>0<span class="hljs-number"> 0 </span>0<span 
class="hljs-number"> 0 </span>0<span class="hljs-number"> 1 </span>1<span class="hljs-number"> 1 </span>1<span class="hljs-number"> 1 </span>1<span class="hljs-number"> 1 </span>1 1 | first sequence | second sequence |</pre></div></div> <p data-svelte-h="svelte-owoxgn">If <code>token_ids_1</code> is <code>None</code>, this method only returns the first portion of the mask (0s).</p></div></div> <h2 class="relative group"><a id="transformers.models.xlnet.modeling_xlnet.XLNetModelOutput" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.models.xlnet.modeling_xlnet.XLNetModelOutput"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-uy5dig">XLNet specific outputs</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.models.xlnet.modeling_xlnet.XLNetModelOutput"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.models.xlnet.modeling_xlnet.</span><span class="font-semibold">XLNetModelOutput</span></span></h3> <a id="transformers.models.xlnet.modeling_xlnet.XLNetModelOutput" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.models.xlnet.modeling_xlnet.XLNetModelOutput"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 
## XLNet specific outputs

### class transformers.models.xlnet.modeling_xlnet.XLNetModelOutput

`( last_hidden_state: FloatTensor, mems: typing.Optional[typing.List[torch.FloatTensor]] = None, hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None, attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None )`

**Parameters:**

- **last_hidden_state** (`torch.FloatTensor` of shape `(batch_size, num_predict, hidden_size)`) — Sequence of hidden-states at the last layer of the model. `num_predict` corresponds to `target_mapping.shape[1]`. If `target_mapping` is `None`, then `num_predict` corresponds to `sequence_length`.
- **mems** (`List[torch.FloatTensor]` of length `config.n_layers`) — Contains pre-computed hidden-states. Can be used (see the `mems` input) to speed up sequential decoding. The token ids which have their past given to this model should not be passed as `input_ids` as they have already been computed.
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Output type of [XLNetModel](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetModel).
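For orientation, here is a minimal sketch (assuming the illustrative `xlnet-base-cased` checkpoint) of running `XLNetModel` and inspecting the fields of the returned `XLNetModelOutput`:

```python
import torch
from transformers import XLNetModel, XLNetTokenizerFast

tokenizer = XLNetTokenizerFast.from_pretrained("xlnet-base-cased")
model = XLNetModel.from_pretrained("xlnet-base-cased")

inputs = tokenizer("Hello world", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True, output_attentions=True)

print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
print(len(outputs.hidden_states))       # embeddings output + one entry per layer
print(len(outputs.attentions))          # one entry per layer
if outputs.mems is not None:            # populated when memory caching is enabled
    print(len(outputs.mems))            # config.n_layers cached hidden-state tensors
```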
class="opacity-60">: FloatTensor = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">mems<span class="opacity-60">: typing.Optional[typing.List[torch.FloatTensor]] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">hidden_states<span class="opacity-60">: typing.Optional[typing.Tuple[torch.FloatTensor]] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attentions<span class="opacity-60">: typing.Optional[typing.Tuple[torch.FloatTensor]] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 5 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.models.xlnet.modeling_xlnet.XLNetLMHeadModelOutput.loss" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.models.xlnet.modeling_xlnet.XLNetLMHeadModelOutput.loss"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>loss</strong> (<code>torch.FloatTensor</code> of shape <em>(1,)</em>, <em>optional</em>, returned when <code>labels</code> is provided) — Language modeling loss (for next-token prediction).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.models.xlnet.modeling_xlnet.XLNetLMHeadModelOutput.logits" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.models.xlnet.modeling_xlnet.XLNetLMHeadModelOutput.logits"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 
- **logits** (`torch.FloatTensor` of shape `(batch_size, num_predict, config.vocab_size)`) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). `num_predict` corresponds to `target_mapping.shape[1]`. If `target_mapping` is `None`, then `num_predict` corresponds to `sequence_length`.
- **mems** (`List[torch.FloatTensor]` of length `config.n_layers`) — Contains pre-computed hidden-states. Can be used (see `mems` input) to speed up sequential decoding. The token ids which have their past given to this model should not be passed as `input_ids` as they have already been computed.
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Output type of [XLNetLMHeadModel](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetLMHeadModel).
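To make the `logits` and `mems` fields above concrete, here is a minimal sketch (not taken from the official examples) of a single-token prediction with `XLNetLMHeadModel`. The `xlnet-base-cased` checkpoint name and the input sentence are assumptions; with a single row in `target_mapping`, `num_predict` is 1, so `logits` comes back with shape `(1, 1, vocab_size)`.

```python
import torch
from transformers import AutoTokenizer, XLNetLMHeadModel

# Assumed checkpoint; any XLNet LM checkpoint should behave the same way.
tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetLMHeadModel.from_pretrained("xlnet-base-cased")

input_ids = torch.tensor(
    tokenizer.encode("Hello, my dog is very cute", add_special_tokens=False)
).unsqueeze(0)
seq_len = input_ids.shape[1]

# Hide the last token from every position and request a prediction for it only.
perm_mask = torch.zeros((1, seq_len, seq_len))
perm_mask[:, :, -1] = 1.0
target_mapping = torch.zeros((1, 1, seq_len))
target_mapping[0, 0, -1] = 1.0

with torch.no_grad():
    outputs = model(input_ids, perm_mask=perm_mask, target_mapping=target_mapping)

print(outputs.logits.shape)  # (1, 1, vocab_size): num_predict == target_mapping.shape[1] == 1
if outputs.mems is not None:
    print(len(outputs.mems))  # config.n_layers cached hidden-state tensors
```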
### class transformers.models.xlnet.modeling_xlnet.XLNetForSequenceClassificationOutput

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_xlnet.py#L650)

`( loss: typing.Optional[torch.FloatTensor] = None, logits: FloatTensor = None, mems: typing.Optional[typing.List[torch.FloatTensor]] = None, hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None, attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None )`

Parameters:

- **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) — Classification (or regression if `config.num_labels==1`) loss.
- **logits** (`torch.FloatTensor` of shape `(batch_size, config.num_labels)`) — Classification (or regression if `config.num_labels==1`) scores (before SoftMax).
- **mems** (`List[torch.FloatTensor]` of length `config.n_layers`) — Contains pre-computed hidden-states. Can be used (see `mems` input) to speed up sequential decoding. The token ids which have their past given to this model should not be passed as `input_ids` as they have already been computed.
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Output type of [XLNetForSequenceClassification](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetForSequenceClassification).
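A short, hypothetical usage sketch showing where the fields of this output come from. The `xlnet-base-cased` checkpoint and `num_labels=3` are placeholders, and the classification head is freshly initialised, so only the shapes are meaningful here.

```python
import torch
from transformers import AutoTokenizer, XLNetForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetForSequenceClassification.from_pretrained("xlnet-base-cased", num_labels=3)

inputs = tokenizer("The film was a pleasant surprise.", return_tensors="pt")
outputs = model(**inputs, labels=torch.tensor([1]))

print(outputs.loss)          # scalar classification loss, present because labels were passed
print(outputs.logits.shape)  # (batch_size, config.num_labels) == (1, 3)
```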
### class transformers.models.xlnet.modeling_xlnet.XLNetForMultipleChoiceOutput

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_xlnet.py#L718)

`( loss: typing.Optional[torch.FloatTensor] = None, logits: FloatTensor = None, mems: typing.Optional[typing.List[torch.FloatTensor]] = None, hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None, attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None )`

Parameters:

- **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) — Classification loss.
- **logits** (`torch.FloatTensor` of shape `(batch_size, num_choices)`) — `num_choices` is the second dimension of the input tensors (see `input_ids` above). Classification scores (before SoftMax).
- **mems** (`List[torch.FloatTensor]` of length `config.n_layers`) — Contains pre-computed hidden-states. Can be used (see `mems` input) to speed up sequential decoding. The token ids which have their past given to this model should not be passed as `input_ids` as they have already been computed.
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Output type of [XLNetForMultipleChoice](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetForMultipleChoice).
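A hypothetical sketch of how `num_choices` ends up as the second dimension of `logits`: two candidate continuations are encoded into a `(batch_size, num_choices, sequence_length)` batch. The checkpoint name and the example texts are assumptions.

```python
import torch
from transformers import AutoTokenizer, XLNetForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetForMultipleChoice.from_pretrained("xlnet-base-cased")

prompt = "The weather was terrible, so we"
choices = ["stayed indoors.", "went for a picnic."]

# Encode each (prompt, choice) pair, then add a batch dimension:
# every tensor becomes (batch_size=1, num_choices=2, sequence_length).
encoding = tokenizer([prompt, prompt], choices, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}

with torch.no_grad():
    outputs = model(**inputs)

print(outputs.logits.shape)  # (batch_size, num_choices) == (1, 2)
```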
### class transformers.models.xlnet.modeling_xlnet.XLNetForTokenClassificationOutput

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_xlnet.py#L684)

`( loss: typing.Optional[torch.FloatTensor] = None, logits: FloatTensor = None, mems: typing.Optional[typing.List[torch.FloatTensor]] = None, hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None, attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None )`

Parameters:

- **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) — Classification loss.
- **logits** (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.num_labels)`) — Classification scores (before SoftMax).
- **mems** (`List[torch.FloatTensor]` of length `config.n_layers`) — Contains pre-computed hidden-states. Can be used (see `mems` input) to speed up sequential decoding. The token ids which have their past given to this model should not be passed as `input_ids` as they have already been computed.
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Output type of [XLNetForTokenClassification](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetForTokenClassification).
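For completeness, a minimal, hypothetical token-classification pass (placeholder checkpoint and `num_labels`); it only demonstrates that `logits` carries one score per token and per label.

```python
import torch
from transformers import AutoTokenizer, XLNetForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetForTokenClassification.from_pretrained("xlnet-base-cased", num_labels=5)

inputs = tokenizer("HuggingFace is based in New York City", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.logits.shape)           # (batch_size, sequence_length, config.num_labels)
print(outputs.logits.argmax(dim=-1))  # one predicted label id per token
```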
### class transformers.models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringSimpleOutput

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_xlnet.py#L754)

`( loss: typing.Optional[torch.FloatTensor] = None, start_logits: FloatTensor = None, end_logits: FloatTensor = None, mems: typing.Optional[typing.List[torch.FloatTensor]] = None, hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None, attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None )`

Parameters:

- **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
- **start_logits** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`) — Span-start scores (before SoftMax).
- **end_logits** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`) — Span-end scores (before SoftMax).
- **mems** (`List[torch.FloatTensor]` of length `config.n_layers`) — Contains pre-computed hidden-states. Can be used (see `mems` input) to speed up sequential decoding. The token ids which have their past given to this model should not be passed as `input_ids` as they have already been computed.
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Output type of [XLNetForQuestionAnsweringSimple](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetForQuestionAnsweringSimple).
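A sketch of how `start_logits` and `end_logits` are typically consumed for span extraction, assuming the placeholder `xlnet-base-cased` checkpoint; with an untrained QA head the decoded span is not meaningful, so only the shapes and the argmax pattern matter here.

```python
import torch
from transformers import AutoTokenizer, XLNetForQuestionAnsweringSimple

tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetForQuestionAnsweringSimple.from_pretrained("xlnet-base-cased")

question = "Where is HuggingFace based?"
context = "HuggingFace is based in New York City."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Both tensors have shape (batch_size, sequence_length).
start = int(outputs.start_logits.argmax(dim=-1))
end = int(outputs.end_logits.argmax(dim=-1))
answer_ids = inputs["input_ids"][0, start : end + 1]
print(tokenizer.decode(answer_ids))
```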
### class transformers.models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringOutput

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_xlnet.py#L791)

`( loss: typing.Optional[torch.FloatTensor] = None, start_top_log_probs: typing.Optional[torch.FloatTensor] = None, start_top_index: typing.Optional[torch.LongTensor] = None, end_top_log_probs: typing.Optional[torch.FloatTensor] = None, end_top_index: typing.Optional[torch.LongTensor] = None, cls_logits: typing.Optional[torch.FloatTensor] = None, mems: typing.Optional[typing.List[torch.FloatTensor]] = None, hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None, attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None )`

Parameters:

- **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned if both `start_positions` and `end_positions` are provided) — Classification loss as the sum of start token, end token (and `is_impossible` if provided) classification losses.
- **start_top_log_probs** (`torch.FloatTensor` of shape `(batch_size, config.start_n_top)`, *optional*, returned if `start_positions` or `end_positions` is not provided) — Log probabilities for the top `config.start_n_top` start token possibilities (beam-search).
56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>start_top_index</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, config.start_n_top)</code>, <em>optional</em>, returned if <code>start_positions</code> or <code>end_positions</code> is not provided) — Indices for the top config.start_n_top start token possibilities (beam-search).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringOutput.end_top_log_probs" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringOutput.end_top_log_probs"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>end_top_log_probs</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, config.start_n_top * config.end_n_top)</code>, <em>optional</em>, returned if <code>start_positions</code> or <code>end_positions</code> is not provided) — Log probabilities for the top <code>config.start_n_top * config.end_n_top</code> end token possibilities (beam-search).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringOutput.end_top_index" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringOutput.end_top_index"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>end_top_index</strong> 
(<code>torch.LongTensor</code> of shape <code>(batch_size, config.start_n_top * config.end_n_top)</code>, <em>optional</em>, returned if <code>start_positions</code> or <code>end_positions</code> is not provided) — Indices for the top <code>config.start_n_top * config.end_n_top</code> end token possibilities (beam-search).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringOutput.cls_logits" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringOutput.cls_logits"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>cls_logits</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size,)</code>, <em>optional</em>, returned if <code>start_positions</code> or <code>end_positions</code> is not provided) — Log probabilities for the <code>is_impossible</code> label of the answers.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringOutput.mems" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringOutput.mems"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>mems</strong> (<code>List[torch.FloatTensor]</code> of length <code>config.n_layers</code>) — Contains pre-computed hidden-states. Can be used (see <code>mems</code> input) to speed up sequential decoding. 
The token ids which have their past given to this model should not be passed as <code>input_ids</code> as they have already been computed.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringOutput.hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringOutput.hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.<p></p> <p>Hidden-states of the model at the output of each layer plus the initial embedding outputs.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringOutput.attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringOutput.attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.<p></p> <p>Attentions weights after the attention softmax, used to compute 
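These fields can be read directly off the returned dataclass. The snippet below is a minimal sketch, assuming the `xlnet-base-cased` checkpoint (its question-answering head is not fine-tuned, so the values are illustrative only): when `start_positions`/`end_positions` are not passed, the beam-search style fields described above are populated instead of `loss`.

```python
import torch
from transformers import AutoTokenizer, XLNetForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetForQuestionAnswering.from_pretrained("xlnet-base-cased")

question, context = "Who wrote it?", "The report was written by the research team."
inputs = tokenizer(question, context, return_tensors="pt")

# Without start_positions/end_positions, the beam-search fields are returned instead of `loss`.
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.start_top_log_probs.shape)  # (batch_size, config.start_n_top)
print(outputs.end_top_log_probs.shape)    # (batch_size, config.start_n_top * config.end_n_top)
print(outputs.cls_logits.shape)           # (batch_size,)
```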
### class transformers.models.xlnet.modeling_tf_xlnet.TFXLNetModelOutput

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_tf_xlnet.py#L802)

( last_hidden_state: tf.Tensor = None mems: List[tf.Tensor] | None = None hidden_states: Tuple[tf.Tensor] | None = None attentions: Tuple[tf.Tensor] | None = None )

Parameters

- **last_hidden_state** (`tf.Tensor` of shape `(batch_size, num_predict, hidden_size)`) — Sequence of hidden-states at the last layer of the model. `num_predict` corresponds to `target_mapping.shape[1]`. If `target_mapping` is `None`, then `num_predict` corresponds to `sequence_length`.
- **mems** (`List[tf.Tensor]` of length `config.n_layers`) — Contains pre-computed hidden-states. Can be used (see `mems` input) to speed up sequential decoding. The token ids which have their past given to this model should not be passed as `input_ids` as they have already been computed.
- **hidden_states** (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- **attentions** (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Output type of [TFXLNetModel](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.TFXLNetModel).
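As a quick illustration of how these fields look in practice, here is a minimal sketch (assuming the `xlnet-base-cased` checkpoint) that runs `TFXLNetModel` and inspects `last_hidden_state` and `mems`:

```python
from transformers import AutoTokenizer, TFXLNetModel

tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = TFXLNetModel.from_pretrained("xlnet-base-cased")

inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(**inputs)

# With target_mapping=None, num_predict equals sequence_length.
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)

# One memory tensor per layer; these can be fed back as the `mems` input of the
# next forward pass to speed up sequential decoding.
print(len(outputs.mems))  # config.n_layers
```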
id="transformers.models.xlnet.modeling_tf_xlnet.TFXLNetModelOutput.attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.models.xlnet.modeling_tf_xlnet.TFXLNetModelOutput.attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attentions</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>tf.Tensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.<p></p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p></span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1ntygvu">Output type of <a href="/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.TFXLNetModel">TFXLNetModel</a>.</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.models.xlnet.modeling_tf_xlnet.TFXLNetLMHeadModelOutput"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.models.xlnet.modeling_tf_xlnet.</span><span class="font-semibold">TFXLNetLMHeadModelOutput</span></span></h3> <a id="transformers.models.xlnet.modeling_tf_xlnet.TFXLNetLMHeadModelOutput" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.models.xlnet.modeling_tf_xlnet.TFXLNetLMHeadModelOutput"><svg class="" xmlns="http://www.w3.org/2000/svg" 
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_tf_xlnet.py#L836" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">loss<span class="opacity-60">: tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">logits<span class="opacity-60">: tf.Tensor = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">mems<span class="opacity-60">: List[tf.Tensor] | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">hidden_states<span class="opacity-60">: Tuple[tf.Tensor] | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attentions<span class="opacity-60">: Tuple[tf.Tensor] | None = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 5 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.models.xlnet.modeling_tf_xlnet.TFXLNetLMHeadModelOutput.loss" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.models.xlnet.modeling_tf_xlnet.TFXLNetLMHeadModelOutput.loss"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" 
height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>loss</strong> (<code>tf.Tensor</code> of shape <em>(1,)</em>, <em>optional</em>, returned when <code>labels</code> is provided) — Language modeling loss (for next-token prediction).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.models.xlnet.modeling_tf_xlnet.TFXLNetLMHeadModelOutput.logits" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.models.xlnet.modeling_tf_xlnet.TFXLNetLMHeadModelOutput.logits"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>logits</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, num_predict, config.vocab_size)</code>) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).<p></p> <p><code>num_predict</code> corresponds to <code>target_mapping.shape[1]</code>. 
If <code>target_mapping</code> is <code>None</code>, then <code>num_predict</code> corresponds to <code>sequence_length</code>.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.models.xlnet.modeling_tf_xlnet.TFXLNetLMHeadModelOutput.mems" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.models.xlnet.modeling_tf_xlnet.TFXLNetLMHeadModelOutput.mems"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>mems</strong> (<code>List[tf.Tensor]</code> of length <code>config.n_layers</code>) — Contains pre-computed hidden-states. Can be used (see <code>mems</code> input) to speed up sequential decoding. The token ids which have their past given to this model should not be passed as <code>input_ids</code> as they have already been computed.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.models.xlnet.modeling_tf_xlnet.TFXLNetLMHeadModelOutput.hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.models.xlnet.modeling_tf_xlnet.TFXLNetLMHeadModelOutput.hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>hidden_states</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>tf.Tensor</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.<p></p> <p>Hidden-states of the model at the output of each layer plus the initial embedding outputs.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a 
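The `num_predict` dimension of `logits` is easiest to see with an explicit `target_mapping`. The following is a hedged sketch (assuming the `xlnet-base-cased` checkpoint) that hides the final token and asks the model to predict only that position, so `num_predict` is 1:

```python
import numpy as np
import tensorflow as tf
from transformers import AutoTokenizer, TFXLNetLMHeadModel

tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = TFXLNetLMHeadModel.from_pretrained("xlnet-base-cased")

input_ids = tf.constant(tokenizer.encode("Hello, my dog is very <mask>", add_special_tokens=True))[None, :]
seq_len = int(input_ids.shape[1])

# Hide the last token from every position ...
perm_mask = np.zeros((1, seq_len, seq_len), dtype=np.float32)
perm_mask[:, :, -1] = 1.0
# ... and request a prediction at that position only, so target_mapping.shape[1] == 1.
target_mapping = np.zeros((1, 1, seq_len), dtype=np.float32)
target_mapping[0, 0, -1] = 1.0

outputs = model(
    input_ids,
    perm_mask=tf.constant(perm_mask),
    target_mapping=tf.constant(target_mapping),
)

print(outputs.logits.shape)  # (batch_size, num_predict=1, config.vocab_size)
```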
id="transformers.models.xlnet.modeling_tf_xlnet.TFXLNetLMHeadModelOutput.attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.models.xlnet.modeling_tf_xlnet.TFXLNetLMHeadModelOutput.attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attentions</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>tf.Tensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.<p></p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p></span></span> </li></ul> </div></div> <p data-svelte-h="svelte-b09go0">Output type of <a href="/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.TFXLNetLMHeadModel">TFXLNetLMHeadModel</a>.</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForSequenceClassificationOutput"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.models.xlnet.modeling_tf_xlnet.</span><span class="font-semibold">TFXLNetForSequenceClassificationOutput</span></span></h3> <a id="transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForSequenceClassificationOutput" class="header-link invisible with-hover:group-hover:visible pr-2" 
href="#transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForSequenceClassificationOutput"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_tf_xlnet.py#L873" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">loss<span class="opacity-60">: tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">logits<span class="opacity-60">: tf.Tensor = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">mems<span class="opacity-60">: List[tf.Tensor] | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">hidden_states<span class="opacity-60">: Tuple[tf.Tensor] | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attentions<span class="opacity-60">: Tuple[tf.Tensor] | None = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 5 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForSequenceClassificationOutput.loss" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" 
href="#transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForSequenceClassificationOutput.loss"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>loss</strong> (<code>tf.Tensor</code> of shape <code>(1,)</code>, <em>optional</em>, returned when <code>label</code> is provided) — Classification (or regression if config.num_labels==1) loss.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForSequenceClassificationOutput.logits" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForSequenceClassificationOutput.logits"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>logits</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, config.num_labels)</code>) — Classification (or regression if config.num_labels==1) scores (before SoftMax).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForSequenceClassificationOutput.mems" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForSequenceClassificationOutput.mems"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 
0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>mems</strong> (<code>List[tf.Tensor]</code> of length <code>config.n_layers</code>) — Contains pre-computed hidden-states. Can be used (see <code>mems</code> input) to speed up sequential decoding. The token ids which have their past given to this model should not be passed as <code>input_ids</code> as they have already been computed.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForSequenceClassificationOutput.hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForSequenceClassificationOutput.hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>hidden_states</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>tf.Tensor</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.<p></p> <p>Hidden-states of the model at the output of each layer plus the initial embedding outputs.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForSequenceClassificationOutput.attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForSequenceClassificationOutput.attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" 
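For completeness, here is a minimal sketch of how `loss` and `logits` are populated, assuming the `xlnet-base-cased` checkpoint with a freshly initialized two-label classification head (so the actual numbers are meaningless):

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFXLNetForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = TFXLNetForSequenceClassification.from_pretrained("xlnet-base-cased", num_labels=2)

inputs = tokenizer("This movie was great!", return_tensors="tf")

# Passing labels populates `loss`; without labels only `logits` (plus mems, etc.) is returned.
outputs = model(**inputs, labels=tf.constant([1]))

print(outputs.loss)          # classification loss
print(outputs.logits.shape)  # (batch_size, config.num_labels)
```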
fill="currentColor"></path></svg></span></a> <span><strong>attentions</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>tf.Tensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.<p></p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p></span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1fjxwzg">Output type of <a href="/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.TFXLNetForSequenceClassification">TFXLNetForSequenceClassification</a>.</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForMultipleChoiceOutput"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.models.xlnet.modeling_tf_xlnet.</span><span class="font-semibold">TFXLNetForMultipleChoiceOutput</span></span></h3> <a id="transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForMultipleChoiceOutput" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForMultipleChoiceOutput"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_tf_xlnet.py#L941" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" 
data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">loss<span class="opacity-60">: tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">logits<span class="opacity-60">: tf.Tensor = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">mems<span class="opacity-60">: List[tf.Tensor] | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">hidden_states<span class="opacity-60">: Tuple[tf.Tensor] | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attentions<span class="opacity-60">: Tuple[tf.Tensor] | None = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 5 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForMultipleChoiceOutput.loss" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForMultipleChoiceOutput.loss"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>loss</strong> (<code>tf.Tensor</code> of shape <em>(1,)</em>, <em>optional</em>, returned when <code>labels</code> is provided) — Classification loss.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForMultipleChoiceOutput.logits" class="header-link 
block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForMultipleChoiceOutput.logits"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>logits</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, num_choices)</code>) — <em>num_choices</em> is the second dimension of the input tensors. (see <em>input_ids</em> above).<p></p> <p>Classification scores (before SoftMax).</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForMultipleChoiceOutput.mems" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForMultipleChoiceOutput.mems"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>mems</strong> (<code>List[tf.Tensor]</code> of length <code>config.n_layers</code>) — Contains pre-computed hidden-states. Can be used (see <code>mems</code> input) to speed up sequential decoding. 
The token ids which have their past given to this model should not be passed as <code>input_ids</code> as they have already been computed.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForMultipleChoiceOutput.hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForMultipleChoiceOutput.hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>hidden_states</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>tf.Tensor</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.<p></p> <p>Hidden-states of the model at the output of each layer plus the initial embedding outputs.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForMultipleChoiceOutput.attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForMultipleChoiceOutput.attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attentions</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>tf.Tensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.<p></p> <p>Attentions weights after the attention softmax, used to compute the weighted average in 
the self-attention heads.</p></span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1tbtz30">Output type of <a href="/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.TFXLNetForMultipleChoice">TFXLNetForMultipleChoice</a>.</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForTokenClassificationOutput"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.models.xlnet.modeling_tf_xlnet.</span><span class="font-semibold">TFXLNetForTokenClassificationOutput</span></span></h3> <a id="transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForTokenClassificationOutput" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForTokenClassificationOutput"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_tf_xlnet.py#L907" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">loss<span class="opacity-60">: tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded 
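To make the fields above concrete, here is a minimal, illustrative sketch (not part of the original reference) of producing and reading a `TFXLNetForMultipleChoiceOutput`; the `xlnet-base-cased` checkpoint and the toy prompt/choices are assumptions made for the example.

```python
# Minimal sketch: obtain a TFXLNetForMultipleChoiceOutput and inspect its fields.
# The "xlnet-base-cased" checkpoint and the toy inputs below are illustrative assumptions.
import tensorflow as tf
from transformers import AutoTokenizer, TFXLNetForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = TFXLNetForMultipleChoice.from_pretrained("xlnet-base-cased")

prompt = "The capital of France is"
choices = ["Paris.", "London."]

# Encode each (prompt, choice) pair, then add a batch dimension so the inputs have
# shape (batch_size, num_choices, sequence_length).
encoding = tokenizer([prompt, prompt], choices, return_tensors="tf", padding=True)
inputs = {name: tf.expand_dims(tensor, 0) for name, tensor in encoding.items()}

outputs = model(inputs)
print(outputs.logits.shape)  # (batch_size, num_choices): classification scores before SoftMax
```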
### class transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForTokenClassificationOutput

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_tf_xlnet.py#L907)

( loss: tf.Tensor | None = None, logits: tf.Tensor = None, mems: List[tf.Tensor] | None = None, hidden_states: Tuple[tf.Tensor] | None = None, attentions: Tuple[tf.Tensor] | None = None )

Parameters

- **loss** (`tf.Tensor` of shape `(1,)`, *optional*, returned when `labels` is provided) — Classification loss.
- **logits** (`tf.Tensor` of shape `(batch_size, sequence_length, config.num_labels)`) — Classification scores (before SoftMax).
- **mems** (`List[tf.Tensor]` of length `config.n_layers`) — Contains pre-computed hidden-states. Can be used (see `mems` input) to speed up sequential decoding. The token ids which have their past given to this model should not be passed as `input_ids` as they have already been computed.
- **hidden_states** (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- **attentions** (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Output type of `TFXLNetForTokenClassification`.
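The following is a minimal, illustrative sketch (not part of the original reference) of reading the `logits` of a `TFXLNetForTokenClassificationOutput`; the `xlnet-base-cased` checkpoint and the example sentence are assumptions, and the token-classification head of a base checkpoint is randomly initialized, so the predictions are only meaningful after fine-tuning.

```python
# Minimal sketch: per-token predictions from a TFXLNetForTokenClassificationOutput.
# "xlnet-base-cased" is an assumed checkpoint; its classification head is untrained.
import tensorflow as tf
from transformers import AutoTokenizer, TFXLNetForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = TFXLNetForTokenClassification.from_pretrained("xlnet-base-cased")

inputs = tokenizer("HuggingFace is based in New York City", return_tensors="tf")
outputs = model(inputs)

# logits has shape (batch_size, sequence_length, config.num_labels); the predicted
# label id for each token is the argmax over the last axis.
predicted_ids = tf.math.argmax(outputs.logits, axis=-1)
print(predicted_ids)
```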
### class transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForQuestionAnsweringSimpleOutput

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_tf_xlnet.py#L977)

( loss: tf.Tensor | None = None, start_logits: tf.Tensor = None, end_logits: tf.Tensor = None, mems: List[tf.Tensor] | None = None, hidden_states: Tuple[tf.Tensor] | None = None, attentions: Tuple[tf.Tensor] | None = None )

Parameters

- **loss** (`tf.Tensor` of shape `(1,)`, *optional*, returned when `labels` is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
- **start_logits** (`tf.Tensor` of shape `(batch_size, sequence_length)`) — Span-start scores (before SoftMax).
- **end_logits** (`tf.Tensor` of shape `(batch_size, sequence_length)`) — Span-end scores (before SoftMax).
- **mems** (`List[tf.Tensor]` of length `config.n_layers`) — Contains pre-computed hidden-states. Can be used (see `mems` input) to speed up sequential decoding. The token ids which have their past given to this model should not be passed as `input_ids` as they have already been computed.
- **hidden_states** (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- **attentions** (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Output type of [TFXLNetForQuestionAnsweringSimple](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.TFXLNetForQuestionAnsweringSimple).
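As an illustrative sketch (not part of the original reference), `start_logits` and `end_logits` can be turned into an answer span as below; the checkpoint name and the question/context are assumptions, and a fine-tuned QA head is needed for the extracted span to be meaningful.

```python
# Minimal sketch: extract a span from start_logits / end_logits of a
# TFXLNetForQuestionAnsweringSimpleOutput. "xlnet-base-cased" is an assumed checkpoint.
import tensorflow as tf
from transformers import AutoTokenizer, TFXLNetForQuestionAnsweringSimple

tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = TFXLNetForQuestionAnsweringSimple.from_pretrained("xlnet-base-cased")

question = "Where is HuggingFace based?"
context = "HuggingFace is based in New York City."
inputs = tokenizer(question, context, return_tensors="tf")
outputs = model(inputs)

# start_logits / end_logits have shape (batch_size, sequence_length); the most likely
# span is delimited by their argmax positions.
start = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
end = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
answer_ids = inputs["input_ids"][0, start : end + 1]
print(tokenizer.decode(answer_ids))
```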
## XLNetModel

### class transformers.XLNetModel

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_xlnet.py#L931)

( config )

Parameters

- **config** ([XLNetConfig](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

The bare XLNet Model transformer outputting raw hidden-states without any specific head on top.

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
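A minimal, illustrative sketch (not part of the original reference) of loading the bare model and reading its raw hidden-states is shown below; the `xlnet-base-cased` checkpoint and the example sentence are assumptions.

```python
# Minimal sketch: run the bare XLNetModel and inspect last_hidden_state.
# "xlnet-base-cased" is an assumed checkpoint used only for illustration.
import torch
from transformers import AutoTokenizer, XLNetModel

tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetModel.from_pretrained("xlnet-base-cased")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state has shape (batch_size, sequence_length, hidden_size)
print(outputs.last_hidden_state.shape)
```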
cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">mems<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">perm_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">target_mapping<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_type_ids<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">inputs_embeds<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">use_mems<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_xlnet.XLNetModelOutput">transformers.models.xlnet.modeling_xlnet.XLNetModelOutput</a> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 
overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 12 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetModel.forward.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetModel.forward.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>) — Indices of input sequence tokens in the vocabulary.<p></p> <p>Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. 
See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p> <p><a href="../glossary#input-ids">What are input IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetModel.forward.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetModel.forward.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetModel.forward.mems" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetModel.forward.mems"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>mems</strong> (<code>List[torch.FloatTensor]</code> of length <code>config.n_layers</code>) — Contains pre-computed hidden-states (see <code>mems</code> output below) . Can be used to speed up sequential decoding. 
The token ids which have their past given to this model should not be passed as <code>input_ids</code> as they have already been computed.<p></p> <p><code>use_mems</code> has to be set to <code>True</code> to make use of <code>mems</code>.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetModel.forward.perm_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetModel.forward.perm_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>perm_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, sequence_length)</code>, <em>optional</em>) — Mask to indicate the attention pattern for each input token with values selected in <code>[0, 1]</code>:<p></p> <ul> <li>if <code>perm_mask[k, i, j] = 0</code>, i attend to j in batch k;</li> <li>if <code>perm_mask[k, i, j] = 1</code>, i does not attend to j in batch k.</li> </ul> <p>If not set, each token attends to all the others (full bidirectional attention). Only used during pretraining (to define factorization order) or for sequential decoding (generation).</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetModel.forward.target_mapping" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetModel.forward.target_mapping"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>target_mapping</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, num_predict, sequence_length)</code>, <em>optional</em>) — Mask to indicate the output tokens to use. If <code>target_mapping[k, i, j] = 1</code>, the i-th predict in batch k is on the j-th token. 
Only used during pretraining for partial prediction or for sequential decoding (generation).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetModel.forward.token_type_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetModel.forward.token_type_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_type_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in <code>[0, 1]</code>:<p></p> <ul> <li>0 corresponds to a <em>sentence A</em> token,</li> <li>1 corresponds to a <em>sentence B</em> token.</li> </ul> <p><a href="../glossary#token-type-ids">What are token type IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetModel.forward.input_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetModel.forward.input_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_mask</strong> (<code>torch.FloatTensor</code> of shape <code>batch_size, sequence_length</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. Negative of <code>attention_mask</code>, i.e. 
with 0 for real tokens and 1 for padding which is kept for compatibility with the original code base.<p></p> <p>Mask values selected in <code>[0, 1]</code>:</p> <ul> <li>1 for tokens that are <strong>masked</strong>,</li> <li>0 for tokens that are <strong>not masked</strong>.</li> </ul> <p>You can only uses one of <code>input_mask</code> and <code>attention_mask</code>.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetModel.forward.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetModel.forward.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>head_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(num_heads,)</code> or <code>(num_layers, num_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the self-attention modules. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetModel.forward.inputs_embeds" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetModel.forward.inputs_embeds"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>inputs_embeds</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>, <em>optional</em>) — Optionally, instead of passing <code>input_ids</code> you can choose to directly pass an embedded representation. 
This is useful if you want more control over how to convert <code>input_ids</code> indices into associated vectors than the model’s internal embedding lookup matrix.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetModel.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetModel.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. See <code>attentions</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetModel.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetModel.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. 
See <code>hidden_states</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetModel.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetModel.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li></ul> <div id="transformers.XLNetModel.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_xlnet.XLNetModelOutput">transformers.models.xlnet.modeling_xlnet.XLNetModelOutput</a> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_xlnet.XLNetModelOutput">transformers.models.xlnet.modeling_xlnet.XLNetModelOutput</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetConfig">XLNetConfig</a>) and inputs.</p> <ul> <li> <p><strong>last_hidden_state</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, num_predict, hidden_size)</code>) — Sequence of hidden-states at the last layer of the model.</p> <p><code>num_predict</code> corresponds to <code>target_mapping.shape[1]</code>. If <code>target_mapping</code> is <code>None</code>, then <code>num_predict</code> corresponds to <code>sequence_length</code>.</p> </li> <li> <p><strong>mems</strong> (<code>List[torch.FloatTensor]</code> of length <code>config.n_layers</code>) — Contains pre-computed hidden-states. Can be used (see <code>mems</code> input) to speed up sequential decoding. 
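The sign convention of `perm_mask` (1 = blocked, 0 = allowed) is the opposite of a usual attention mask, so a small illustration may help. The snippet below is a sketch of the convention rather than part of the official documentation: it builds a left-to-right pattern in which every position is blocked from attending to later positions.

```python
import torch

seq_len = 6
# perm_mask[k, i, j] = 1 means position i may NOT attend to position j in batch k.
# A strictly upper-triangular matrix therefore blocks attention to future tokens.
perm_mask = torch.triu(torch.ones(seq_len, seq_len), diagonal=1).unsqueeze(0)  # (1, seq_len, seq_len)

print(perm_mask[0, 2])  # tensor([0., 0., 0., 1., 1., 1.]) -> position 2 only sees positions 0-2
```

Passed as `perm_mask`, such a tensor imposes a causal factorization order; the generation example for `XLNetLMHeadModel` further below uses the same idea for a single masked position.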
**Returns**

[transformers.models.xlnet.modeling_xlnet.XLNetModelOutput](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_xlnet.XLNetModelOutput) or `tuple(torch.FloatTensor)`

A [transformers.models.xlnet.modeling_xlnet.XLNetModelOutput](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_xlnet.XLNetModelOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([XLNetConfig](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetConfig)) and inputs.

- **last_hidden_state** (`torch.FloatTensor` of shape `(batch_size, num_predict, hidden_size)`) — Sequence of hidden-states at the last layer of the model.

  `num_predict` corresponds to `target_mapping.shape[1]`. If `target_mapping` is `None`, then `num_predict` corresponds to `sequence_length`.
- **mems** (`List[torch.FloatTensor]` of length `config.n_layers`) — Contains pre-computed hidden-states. Can be used (see `mems` input) to speed up sequential decoding. The token ids which have their past given to this model should not be passed as `input_ids` as they have already been computed.
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The [XLNetModel](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetModel) forward method overrides the `__call__` special method.

Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:

```python
>>> from transformers import AutoTokenizer, XLNetModel
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
>>> model = XLNetModel.from_pretrained("xlnet-base-cased")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)

>>> last_hidden_states = outputs.last_hidden_state
```
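The `mems` input described above has no dedicated example in this section, so the following is a minimal sketch (my own, assuming the same `xlnet-base-cased` checkpoint as the example above) of caching states from one segment and reusing them for the next:

```python
import torch
from transformers import AutoTokenizer, XLNetModel

tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetModel.from_pretrained("xlnet-base-cased")

# Run a first segment and ask the model to return its memory states.
first = tokenizer("Hello, my dog is cute", return_tensors="pt")
first_out = model(**first, use_mems=True)

# Feed the cached `mems` back in: the new tokens can attend to the previous
# segment without recomputing it, and the old token ids are not passed again.
second = tokenizer("and it likes to play fetch", return_tensors="pt")
second_out = model(**second, mems=first_out.mems, use_mems=True)
```

The number of past states kept between calls is governed by `config.mem_len`; if it is unset, all past hidden states are returned.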
## XLNetLMHeadModel

### class transformers.XLNetLMHeadModel

( config )

**Parameters**

- **config** ([XLNetConfig](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

XLNet Model with a language modeling head on top (linear layer with weights tied to the input embeddings).

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
#### forward

( input_ids: typing.Optional[torch.Tensor] = None, attention_mask: typing.Optional[torch.Tensor] = None, mems: typing.Optional[torch.Tensor] = None, perm_mask: typing.Optional[torch.Tensor] = None, target_mapping: typing.Optional[torch.Tensor] = None, token_type_ids: typing.Optional[torch.Tensor] = None, input_mask: typing.Optional[torch.Tensor] = None, head_mask: typing.Optional[torch.Tensor] = None, inputs_embeds: typing.Optional[torch.Tensor] = None, labels: typing.Optional[torch.Tensor] = None, use_mems: typing.Optional[bool] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None, **kwargs ) → [transformers.models.xlnet.modeling_xlnet.XLNetLMHeadModelOutput](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_xlnet.XLNetLMHeadModelOutput) or `tuple(torch.FloatTensor)`

**Parameters**

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary.

  Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.__call__()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details.

  [What are input IDs?](../glossary#input-ids)
- **attention_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:

  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.

  [What are attention masks?](../glossary#attention-mask)
- **mems** (`List[torch.FloatTensor]` of length `config.n_layers`) — Contains pre-computed hidden-states (see `mems` output below). Can be used to speed up sequential decoding. The token ids which have their past given to this model should not be passed as `input_ids` as they have already been computed.

  `use_mems` has to be set to `True` to make use of `mems`.
- **perm_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length, sequence_length)`, *optional*) — Mask to indicate the attention pattern for each input token with values selected in `[0, 1]`:

  - if `perm_mask[k, i, j] = 0`, i attends to j in batch k;
  - if `perm_mask[k, i, j] = 1`, i does not attend to j in batch k.

  If not set, each token attends to all the others (full bidirectional attention). Only used during pretraining (to define the factorization order) or for sequential decoding (generation).
- **target_mapping** (`torch.FloatTensor` of shape `(batch_size, num_predict, sequence_length)`, *optional*) — Mask to indicate the output tokens to use. If `target_mapping[k, i, j] = 1`, the i-th prediction in batch k is on the j-th token. Only used during pretraining for partial prediction or for sequential decoding (generation).
- **token_type_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:

  - 0 corresponds to a *sentence A* token,
  - 1 corresponds to a *sentence B* token.

  [What are token type IDs?](../glossary#token-type-ids)
- **input_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Negative of `attention_mask`, i.e. with 0 for real tokens and 1 for padding, kept for compatibility with the original code base.

  Mask values selected in `[0, 1]`:

  - 1 for tokens that are **masked**,
  - 0 for tokens that are **not masked**.

  You can only use one of `input_mask` and `attention_mask`.
- **head_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`:

  - 1 indicates the head is **not masked**,
  - 0 indicates the head is **masked**.
- **inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.
- **labels** (`torch.LongTensor` of shape `(batch_size, num_predict)`, *optional*) — Labels for masked language modeling. `num_predict` corresponds to `target_mapping.shape[1]`. If `target_mapping` is `None`, then `num_predict` corresponds to `sequence_length` (a shape sketch follows the return section below).

  The labels should correspond to the masked input words that should be predicted and depend on `target_mapping`. Note that in order to perform standard auto-regressive language modeling a `<mask>` token has to be added to the `input_ids` (see the `prepare_inputs_for_generation` function and the examples below).

  Indices are selected in `[-100, 0, ..., config.vocab_size]`. All labels set to `-100` are ignored; the loss is only computed for labels in `[0, ..., config.vocab_size]`.
**Returns**

[transformers.models.xlnet.modeling_xlnet.XLNetLMHeadModelOutput](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_xlnet.XLNetLMHeadModelOutput) or `tuple(torch.FloatTensor)`

A [transformers.models.xlnet.modeling_xlnet.XLNetLMHeadModelOutput](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_xlnet.XLNetLMHeadModelOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([XLNetConfig](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetConfig)) and inputs.

- **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) — Language modeling loss (for next-token prediction).
- **logits** (`torch.FloatTensor` of shape `(batch_size, num_predict, config.vocab_size)`) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).

  `num_predict` corresponds to `target_mapping.shape[1]`. If `target_mapping` is `None`, then `num_predict` corresponds to `sequence_length`.
- **mems** (`List[torch.FloatTensor]` of length `config.n_layers`) — Contains pre-computed hidden-states. Can be used (see `mems` input) to speed up sequential decoding. The token ids which have their past given to this model should not be passed as `input_ids` as they have already been computed.
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
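To make the coupling between `target_mapping`, `labels`, and the returned `logits` shapes concrete, here is a small illustrative snippet (a sketch of my own; the token ids are hypothetical):

```python
import torch

batch_size, seq_len, num_predict = 1, 8, 2

# Select the last two positions of the sequence as prediction targets.
target_mapping = torch.zeros(batch_size, num_predict, seq_len)
target_mapping[0, 0, -2] = 1.0
target_mapping[0, 1, -1] = 1.0

# `labels` must then have shape (batch_size, num_predict) = (1, 2).
labels = torch.tensor([[42, 7]])  # hypothetical token ids

# After `model(..., target_mapping=target_mapping, labels=labels)`, the returned
# `logits` have shape (batch_size, num_predict, config.vocab_size).
```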
The [XLNetLMHeadModel](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetLMHeadModel) forward method overrides the `__call__` special method.

Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, XLNetLMHeadModel <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"xlnet-large-cased"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = XLNetLMHeadModel.from_pretrained(<span class="hljs-string">"xlnet-large-cased"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># We show how to setup inputs to predict a next token using a bi-directional context.</span> <span class="hljs-meta">&gt;&gt;&gt; </span>input_ids = torch.tensor( <span class="hljs-meta">... </span> tokenizer.encode(<span class="hljs-string">"Hello, my dog is very &lt;mask&gt;"</span>, add_special_tokens=<span class="hljs-literal">False</span>) <span class="hljs-meta">... </span>).unsqueeze( <span class="hljs-meta">... </span> <span class="hljs-number">0</span> <span class="hljs-meta">... </span>) <span class="hljs-comment"># We will predict the masked token</span> <span class="hljs-meta">&gt;&gt;&gt; </span>perm_mask = torch.zeros((<span class="hljs-number">1</span>, input_ids.shape[<span class="hljs-number">1</span>], input_ids.shape[<span class="hljs-number">1</span>]), dtype=torch.<span class="hljs-built_in">float</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>perm_mask[:, :, -<span class="hljs-number">1</span>] = <span class="hljs-number">1.0</span> <span class="hljs-comment"># Previous tokens don't see last token</span> <span class="hljs-meta">&gt;&gt;&gt; </span>target_mapping = torch.zeros( <span class="hljs-meta">... </span> (<span class="hljs-number">1</span>, <span class="hljs-number">1</span>, input_ids.shape[<span class="hljs-number">1</span>]), dtype=torch.<span class="hljs-built_in">float</span> <span class="hljs-meta">... </span>) <span class="hljs-comment"># Shape [1, 1, seq_length] =&gt; let's predict one token</span> <span class="hljs-meta">&gt;&gt;&gt; </span>target_mapping[ <span class="hljs-meta">... </span> <span class="hljs-number">0</span>, <span class="hljs-number">0</span>, -<span class="hljs-number">1</span> <span class="hljs-meta">... </span>] = <span class="hljs-number">1.0</span> <span class="hljs-comment"># Our first (and only) prediction will be the last token of the sequence (the masked token)</span> <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model(input_ids, perm_mask=perm_mask, target_mapping=target_mapping) <span class="hljs-meta">&gt;&gt;&gt; </span>next_token_logits = outputs[ <span class="hljs-meta">... </span> <span class="hljs-number">0</span> <span class="hljs-meta">... 
</span>] <span class="hljs-comment"># Output has shape [target_mapping.size(0), target_mapping.size(1), config.vocab_size]</span> <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># The same way can the XLNetLMHeadModel be used to be trained by standard auto-regressive language modeling.</span> <span class="hljs-meta">&gt;&gt;&gt; </span>input_ids = torch.tensor( <span class="hljs-meta">... </span> tokenizer.encode(<span class="hljs-string">"Hello, my dog is very &lt;mask&gt;"</span>, add_special_tokens=<span class="hljs-literal">False</span>) <span class="hljs-meta">... </span>).unsqueeze( <span class="hljs-meta">... </span> <span class="hljs-number">0</span> <span class="hljs-meta">... </span>) <span class="hljs-comment"># We will predict the masked token</span> <span class="hljs-meta">&gt;&gt;&gt; </span>labels = torch.tensor(tokenizer.encode(<span class="hljs-string">"cute"</span>, add_special_tokens=<span class="hljs-literal">False</span>)).unsqueeze(<span class="hljs-number">0</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">assert</span> labels.shape[<span class="hljs-number">0</span>] == <span class="hljs-number">1</span>, <span class="hljs-string">"only one word will be predicted"</span> <span class="hljs-meta">&gt;&gt;&gt; </span>perm_mask = torch.zeros((<span class="hljs-number">1</span>, input_ids.shape[<span class="hljs-number">1</span>], input_ids.shape[<span class="hljs-number">1</span>]), dtype=torch.<span class="hljs-built_in">float</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>perm_mask[ <span class="hljs-meta">... </span> :, :, -<span class="hljs-number">1</span> <span class="hljs-meta">... </span>] = <span class="hljs-number">1.0</span> <span class="hljs-comment"># Previous tokens don't see last token as is done in standard auto-regressive lm training</span> <span class="hljs-meta">&gt;&gt;&gt; </span>target_mapping = torch.zeros( <span class="hljs-meta">... </span> (<span class="hljs-number">1</span>, <span class="hljs-number">1</span>, input_ids.shape[<span class="hljs-number">1</span>]), dtype=torch.<span class="hljs-built_in">float</span> <span class="hljs-meta">... </span>) <span class="hljs-comment"># Shape [1, 1, seq_length] =&gt; let's predict one token</span> <span class="hljs-meta">&gt;&gt;&gt; </span>target_mapping[ <span class="hljs-meta">... </span> <span class="hljs-number">0</span>, <span class="hljs-number">0</span>, -<span class="hljs-number">1</span> <span class="hljs-meta">... </span>] = <span class="hljs-number">1.0</span> <span class="hljs-comment"># Our first (and only) prediction will be the last token of the sequence (the masked token)</span> <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model(input_ids, perm_mask=perm_mask, target_mapping=target_mapping, labels=labels) <span class="hljs-meta">&gt;&gt;&gt; </span>loss = outputs.loss <span class="hljs-meta">&gt;&gt;&gt; </span>next_token_logits = ( <span class="hljs-meta">... </span> outputs.logits <span class="hljs-meta">... 
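To turn these logits into an actual token, one can take the argmax over the vocabulary dimension and decode it. The short follow-up below is a sketch that reuses the `tokenizer` and `next_token_logits` from the example above; it is not part of the original snippet.

```python
>>> # next_token_logits has shape [1, 1, vocab_size]: one prediction for the single target position.
>>> predicted_token_id = next_token_logits[0, 0].argmax(-1).item()
>>> predicted_token = tokenizer.decode([predicted_token_id])
```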
## XLNetForSequenceClassification

### class transformers.XLNetForSequenceClassification

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_xlnet.py#L1500)

( config )

Parameters

- **config** ([XLNetConfig](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

XLNet Model with a sequence classification/regression head on top (a linear layer on top of the pooled output), e.g. for GLUE tasks.

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

#### forward

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_xlnet.py#L1513)

( input_ids: typing.Optional[torch.Tensor] = None, attention_mask: typing.Optional[torch.Tensor] = None, mems: typing.Optional[torch.Tensor] = None, perm_mask: typing.Optional[torch.Tensor] = None, target_mapping: typing.Optional[torch.Tensor] = None, token_type_ids: typing.Optional[torch.Tensor] = None, input_mask: typing.Optional[torch.Tensor] = None, head_mask: typing.Optional[torch.Tensor] = None, inputs_embeds: typing.Optional[torch.Tensor] = None, labels: typing.Optional[torch.Tensor] = None, use_mems: typing.Optional[bool] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None, **kwargs ) → [transformers.models.xlnet.modeling_xlnet.XLNetForSequenceClassificationOutput](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_xlnet.XLNetForSequenceClassificationOutput) or `tuple(torch.FloatTensor)`
Parameters

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary.

  Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and `PreTrainedTokenizer.__call__()` for details.

  [What are input IDs?](../glossary#input-ids)

- **attention_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:

  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.

  [What are attention masks?](../glossary#attention-mask)

- **mems** (`List[torch.FloatTensor]` of length `config.n_layers`) — Contains pre-computed hidden-states (see `mems` output below). Can be used to speed up sequential decoding. The token ids which have their past given to this model should not be passed as `input_ids` as they have already been computed.

  `use_mems` has to be set to `True` to make use of `mems`.

- **perm_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length, sequence_length)`, *optional*) — Mask to indicate the attention pattern for each input token with values selected in `[0, 1]`:

  - if `perm_mask[k, i, j] = 0`, i attends to j in batch k;
  - if `perm_mask[k, i, j] = 1`, i does not attend to j in batch k.

  If not set, each token attends to all the others (full bidirectional attention). Only used during pretraining (to define factorization order) or for sequential decoding (generation).

- **target_mapping** (`torch.FloatTensor` of shape `(batch_size, num_predict, sequence_length)`, *optional*) — Mask to indicate the output tokens to use. If `target_mapping[k, i, j] = 1`, the i-th prediction in batch k is on the j-th token. Only used during pretraining for partial prediction or for sequential decoding (generation).

- **token_type_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:

  - 0 corresponds to a *sentence A* token,
  - 1 corresponds to a *sentence B* token.

  [What are token type IDs?](../glossary#token-type-ids)

- **input_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Negative of `attention_mask`, i.e. with 0 for real tokens and 1 for padding, which is kept for compatibility with the original code base.

  Mask values selected in `[0, 1]`:

  - 1 for tokens that are **masked**,
  - 0 for tokens that are **not masked**.

  You can only use one of `input_mask` and `attention_mask`.

- **head_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`:

  - 1 indicates the head is **not masked**,
  - 0 indicates the head is **masked**.

- **inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.

- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.

- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.

- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.

- **labels** (`torch.LongTensor` of shape `(batch_size,)`, *optional*) — Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss); if `config.num_labels > 1` a classification loss is computed (Cross-Entropy).

Returns

[transformers.models.xlnet.modeling_xlnet.XLNetForSequenceClassificationOutput](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_xlnet.XLNetForSequenceClassificationOutput) or `tuple(torch.FloatTensor)`

A [transformers.models.xlnet.modeling_xlnet.XLNetForSequenceClassificationOutput](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_xlnet.XLNetForSequenceClassificationOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([XLNetConfig](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetConfig)) and inputs.

- **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) — Classification (or regression if `config.num_labels==1`) loss.
- **logits** (`torch.FloatTensor` of shape `(batch_size, config.num_labels)`) — Classification (or regression if `config.num_labels==1`) scores (before SoftMax).
- **mems** (`List[torch.FloatTensor]` of length `config.n_layers`) — Contains pre-computed hidden-states. Can be used (see `mems` input) to speed up sequential decoding.
  The token ids which have their past given to this model should not be passed as `input_ids` as they have already been computed.

- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The [XLNetForSequenceClassification](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetForSequenceClassification) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example of single-label classification:

```python
>>> import torch
>>> from transformers import AutoTokenizer, XLNetForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
>>> model = XLNetForSequenceClassification.from_pretrained("xlnet-base-cased")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_class_id = logits.argmax().item()

>>> # To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
>>> num_labels = len(model.config.id2label)
>>> model = XLNetForSequenceClassification.from_pretrained("xlnet-base-cased", num_labels=num_labels)

>>> labels = torch.tensor([1])
>>> loss = model(**inputs, labels=labels).loss
```

Example of multi-label classification:

```python
>>> import torch
>>> from transformers import AutoTokenizer, XLNetForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
>>> model = XLNetForSequenceClassification.from_pretrained("xlnet-base-cased", problem_type="multi_label_classification")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]

>>> # To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
>>> num_labels = len(model.config.id2label)
>>> model = XLNetForSequenceClassification.from_pretrained(
...     "xlnet-base-cased", num_labels=num_labels, problem_type="multi_label_classification"
... )

>>> labels = torch.sum(
...     torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)
>>> loss = model(**inputs, labels=labels).loss
```
```python
).to(torch.float)
>>> loss = model(**inputs, labels=labels).loss
```

## XLNetForMultipleChoice

### class transformers.XLNetForMultipleChoice

`( config )` [[source]](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_xlnet.py#L1696)

**Parameters**

- **config** ([XLNetConfig](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

XLNet Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax), e.g. for RACE/SWAG tasks.

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.).

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
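As a quick illustration of the note above on the `config` parameter, a minimal sketch of the difference between initializing from a configuration and loading pretrained weights (the checkpoint name is taken from the example further below; a config-initialized model has randomly initialized weights):

```python
>>> from transformers import XLNetConfig, XLNetForMultipleChoice

>>> # Initializing from a config builds the architecture only; no pretrained weights are loaded
>>> configuration = XLNetConfig()
>>> model = XLNetForMultipleChoice(configuration)

>>> # To load pretrained weights, use from_pretrained() instead
>>> model = XLNetForMultipleChoice.from_pretrained("xlnet-base-cased")
```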
dark:hover:text-black">token_type_ids<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">mems<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">perm_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">target_mapping<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">inputs_embeds<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">labels<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">use_mems<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_xlnet.XLNetForMultipleChoiceOutput">transformers.models.xlnet.modeling_xlnet.XLNetForMultipleChoiceOutput</a> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white 
**Parameters**

- **input_ids** (`torch.LongTensor` of shape `(batch_size, num_choices, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See `PreTrainedTokenizer.encode()` and `PreTrainedTokenizer.__call__()` for details. [What are input IDs?](../glossary#input-ids)
- **attention_mask** (`torch.FloatTensor` of shape `(batch_size, num_choices, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.

  [What are attention masks?](../glossary#attention-mask)
- **mems** (`List[torch.FloatTensor]` of length `config.n_layers`) — Contains pre-computed hidden states (see `mems` output below). Can be used to speed up sequential decoding. The token ids which have their past given to this model should not be passed as `input_ids` as they have already been computed. `use_mems` has to be set to `True` to make use of `mems`.
- **perm_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length, sequence_length)`, *optional*) — Mask to indicate the attention pattern for each input token with values selected in `[0, 1]`:
  - if `perm_mask[k, i, j] = 0`, i attends to j in batch k;
  - if `perm_mask[k, i, j] = 1`, i does not attend to j in batch k.

  If not set, each token attends to all the others (full bidirectional attention). Only used during pretraining (to define factorization order) or for sequential decoding (generation).
- **target_mapping** (`torch.FloatTensor` of shape `(batch_size, num_predict, sequence_length)`, *optional*) — Mask to indicate the output tokens to use. If `target_mapping[k, i, j] = 1`, the i-th prediction in batch k is on the j-th token. Only used during pretraining for partial prediction or for sequential decoding (generation).
- **token_type_ids** (`torch.LongTensor` of shape `(batch_size, num_choices, sequence_length)`, *optional*) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:
  - 0 corresponds to a *sentence A* token,
  - 1 corresponds to a *sentence B* token.

  [What are token type IDs?](../glossary#token-type-ids)
- **input_mask** (`torch.FloatTensor` of shape `(batch_size, num_choices, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Negative of `attention_mask`, i.e. with 0 for real tokens and 1 for padding, which is kept for compatibility with the original code base. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **masked**,
  - 0 for tokens that are **not masked**.

  You can only use one of `input_mask` and `attention_mask`.
- **head_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`:
  - 1 indicates the head is **not masked**,
  - 0 indicates the head is **masked**.
- **inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, num_choices, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.
- **labels** (`torch.LongTensor` of shape `(batch_size,)`, *optional*) — Labels for computing the multiple choice classification loss. Indices should be in `[0, ..., num_choices-1]` where `num_choices` is the size of the second dimension of the input tensors (see `input_ids` above).

**Returns**

[transformers.models.xlnet.modeling_xlnet.XLNetForMultipleChoiceOutput](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_xlnet.XLNetForMultipleChoiceOutput) or `tuple(torch.FloatTensor)`

A [transformers.models.xlnet.modeling_xlnet.XLNetForMultipleChoiceOutput](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_xlnet.XLNetForMultipleChoiceOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([XLNetConfig](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetConfig)) and inputs.

- **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) — Classification loss.
- **logits** (`torch.FloatTensor` of shape `(batch_size, num_choices)`) — Classification scores (before SoftMax). *num_choices* is the second dimension of the input tensors (see *input_ids* above).
- **mems** (`List[torch.FloatTensor]` of length `config.n_layers`) — Contains pre-computed hidden states. Can be used (see `mems` input) to speed up sequential decoding. The token ids which have their past given to this model should not be passed as `input_ids` as they have already been computed.
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden states of the model at the output of each layer plus the initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The [XLNetForMultipleChoice](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetForMultipleChoice) forward method overrides the `__call__` special method.

> Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, XLNetForMultipleChoice <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"xlnet-base-cased"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = XLNetForMultipleChoice.from_pretrained(<span class="hljs-string">"xlnet-base-cased"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>prompt = <span class="hljs-string">"In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."</span> <span class="hljs-meta">&gt;&gt;&gt; </span>choice0 = <span class="hljs-string">"It is eaten with a fork and a knife."</span> <span class="hljs-meta">&gt;&gt;&gt; </span>choice1 = <span class="hljs-string">"It is eaten while held in the hand."</span> <span class="hljs-meta">&gt;&gt;&gt; </span>labels = torch.tensor(<span class="hljs-number">0</span>).unsqueeze(<span class="hljs-number">0</span>) <span class="hljs-comment"># choice0 is correct (according to Wikipedia ;)), batch size 1</span> <span class="hljs-meta">&gt;&gt;&gt; </span>encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors=<span class="hljs-string">"pt"</span>, padding=<span class="hljs-literal">True</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model(**{k: v.unsqueeze(<span class="hljs-number">0</span>) <span class="hljs-keyword">for</span> k, v <span class="hljs-keyword">in</span> encoding.items()}, labels=labels) <span class="hljs-comment"># batch size is 1</span> <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># the linear classifier still needs to be trained</span> <span class="hljs-meta">&gt;&gt;&gt; </span>loss = outputs.loss <span class="hljs-meta">&gt;&gt;&gt; </span>logits = outputs.logits</pre></div></div></div></div> <h2 class="relative group"><a id="transformers.XLNetForTokenClassification" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetForTokenClassification"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 
## XLNetForTokenClassification

### class transformers.XLNetForTokenClassification

`( config )` [[source]](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_xlnet.py#L1609)

**Parameters**

- **config** ([XLNetConfig](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

XLNet Model with a token classification head on top (a linear layer on top of the hidden-states output), e.g. for Named-Entity-Recognition (NER) tasks.

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.).

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
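A minimal usage sketch for this head, assuming the `xlnet-base-cased` checkpoint (its token classification layer is newly initialized, so predictions are only meaningful after fine-tuning, e.g. on an NER dataset):

```python
>>> from transformers import AutoTokenizer, XLNetForTokenClassification
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
>>> model = XLNetForTokenClassification.from_pretrained("xlnet-base-cased")

>>> inputs = tokenizer("HuggingFace is based in New York City", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits  # shape (batch_size, sequence_length, num_labels)

>>> predicted_token_class_ids = logits.argmax(dim=-1)  # one class id per token
```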
dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">mems<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">perm_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">target_mapping<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_type_ids<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">inputs_embeds<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">labels<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">use_mems<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_xlnet.XLNetForTokenClassificationOutput">transformers.models.xlnet.modeling_xlnet.XLNetForTokenClassificationOutput</a> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t 
**Parameters**

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See `PreTrainedTokenizer.encode()` and `PreTrainedTokenizer.__call__()` for details. [What are input IDs?](../glossary#input-ids)
- **attention_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.

  [What are attention masks?](../glossary#attention-mask)
- **mems** (`List[torch.FloatTensor]` of length `config.n_layers`) — Contains pre-computed hidden states (see `mems` output below). Can be used to speed up sequential decoding. The token ids which have their past given to this model should not be passed as `input_ids` as they have already been computed. `use_mems` has to be set to `True` to make use of `mems`.
- **perm_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length, sequence_length)`, *optional*) — Mask to indicate the attention pattern for each input token with values selected in `[0, 1]`:
  - if `perm_mask[k, i, j] = 0`, i attends to j in batch k;
  - if `perm_mask[k, i, j] = 1`, i does not attend to j in batch k.

  If not set, each token attends to all the others (full bidirectional attention). Only used during pretraining (to define factorization order) or for sequential decoding (generation).
- **target_mapping** (`torch.FloatTensor` of shape `(batch_size, num_predict, sequence_length)`, *optional*) — Mask to indicate the output tokens to use. If `target_mapping[k, i, j] = 1`, the i-th prediction in batch k is on the j-th token. Only used during pretraining for partial prediction or for sequential decoding (generation).
- **token_type_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:
  - 0 corresponds to a *sentence A* token,
  - 1 corresponds to a *sentence B* token.

  [What are token type IDs?](../glossary#token-type-ids)
- **input_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Negative of `attention_mask`, i.e. with 0 for real tokens and 1 for padding, which is kept for compatibility with the original code base. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **masked**,
  - 0 for tokens that are **not masked**.

  You can only use one of `input_mask` and `attention_mask`.
- **head_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`:
  - 1 indicates the head is **not masked**,
  - 0 indicates the head is **masked**.
- **inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation.
This is useful if you want more control over how to convert <code>input_ids</code> indices into associated vectors than the model’s internal embedding lookup matrix.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetForTokenClassification.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetForTokenClassification.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. See <code>attentions</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetForTokenClassification.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetForTokenClassification.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. 
See <code>hidden_states</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetForTokenClassification.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetForTokenClassification.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetForTokenClassification.forward.labels" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetForTokenClassification.forward.labels"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>labels</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size,)</code>, <em>optional</em>) — Labels for computing the multiple choice classification loss. Indices should be in <code>[0, ..., num_choices]</code> where <em>num_choices</em> is the size of the second dimension of the input tensors. 
(see <em>input_ids</em> above)</span></span> </li></ul> <div id="transformers.XLNetForTokenClassification.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_xlnet.XLNetForTokenClassificationOutput">transformers.models.xlnet.modeling_xlnet.XLNetForTokenClassificationOutput</a> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_xlnet.XLNetForTokenClassificationOutput">transformers.models.xlnet.modeling_xlnet.XLNetForTokenClassificationOutput</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetConfig">XLNetConfig</a>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<code>torch.FloatTensor</code> of shape <code>(1,)</code>, <em>optional</em>, returned when <code>labels</code> is provided) — Classification loss.</p> </li> <li> <p><strong>logits</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, config.num_labels)</code>) — Classification scores (before SoftMax).</p> </li> <li> <p><strong>mems</strong> (<code>List[torch.FloatTensor]</code> of length <code>config.n_layers</code>) — Contains pre-computed hidden-states. Can be used (see <code>mems</code> input) to speed up sequential decoding. The token ids which have their past given to this model should not be passed as <code>input_ids</code> as they have already been computed.</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-1fep6xc">The <a href="/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetForTokenClassification">XLNetForTokenClassification</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards 
instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.XLNetForTokenClassification.forward.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetForTokenClassification.forward.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, XLNetForTokenClassification <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"xlnet-base-cased"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = XLNetForTokenClassification.from_pretrained(<span class="hljs-string">"xlnet-base-cased"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer( <span class="hljs-meta">... </span> <span class="hljs-string">"HuggingFace is a company based in Paris and New York"</span>, add_special_tokens=<span class="hljs-literal">False</span>, return_tensors=<span class="hljs-string">"pt"</span> <span class="hljs-meta">... </span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">with</span> torch.no_grad(): <span class="hljs-meta">... 
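Of the arguments documented above, `perm_mask` and `target_mapping` are the least self-explanatory. The snippet below is a minimal sketch, not part of the library's reference example, that reuses the checkpoint and sentence from the example above; it forbids every token from attending to the last position and requests an output only for that position.

```python
import torch
from transformers import AutoTokenizer, XLNetForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetForTokenClassification.from_pretrained("xlnet-base-cased")

inputs = tokenizer("HuggingFace is a company based in Paris and New York", return_tensors="pt")
seq_len = inputs["input_ids"].shape[1]

# perm_mask[k, i, j] = 1 means token i may NOT attend to token j in batch k.
# Here no token is allowed to attend to the last position.
perm_mask = torch.zeros((1, seq_len, seq_len), dtype=torch.float)
perm_mask[:, :, -1] = 1.0

# target_mapping[k, i, j] = 1 places the i-th output of batch k on the j-th token.
# Here a single output is requested, aligned with the last position.
target_mapping = torch.zeros((1, 1, seq_len), dtype=torch.float)
target_mapping[0, 0, -1] = 1.0

with torch.no_grad():
    outputs = model(**inputs, perm_mask=perm_mask, target_mapping=target_mapping)

# With target_mapping set, the sequence dimension of the logits covers only the
# requested target positions, so the expected shape is (1, 1, config.num_labels).
print(outputs.logits.shape)
```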
## XLNetForQuestionAnsweringSimple

### class transformers.XLNetForQuestionAnsweringSimple

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_xlnet.py#L1799)

`( config )`

Parameters:

- **config** ([XLNetConfig](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

XLNet Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute `span start logits` and `span end logits`).

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
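Before the full signature of `forward()` below, here is a minimal usage sketch (not part of the library's reference example). The question and context strings are illustrative, and `xlnet-base-cased` has no fine-tuned QA head, so the decoded span is only meaningful after fine-tuning; the point is how the span start/end logits are typically turned into an answer string.

```python
import torch
from transformers import AutoTokenizer, XLNetForQuestionAnsweringSimple

tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetForQuestionAnsweringSimple.from_pretrained("xlnet-base-cased")

question = "Where is HuggingFace based?"
context = "HuggingFace is a company based in Paris and New York."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Greedy span decoding: take the most likely start and end token positions
# and decode everything in between.
start_index = outputs.start_logits.argmax(dim=-1).item()
end_index = outputs.end_logits.argmax(dim=-1).item()
answer_ids = inputs["input_ids"][0, start_index : end_index + 1]
print(tokenizer.decode(answer_ids))
```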
#### forward

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_xlnet.py#L1810)

`( input_ids: typing.Optional[torch.Tensor] = None, attention_mask: typing.Optional[torch.Tensor] = None, mems: typing.Optional[torch.Tensor] = None, perm_mask: typing.Optional[torch.Tensor] = None, target_mapping: typing.Optional[torch.Tensor] = None, token_type_ids: typing.Optional[torch.Tensor] = None, input_mask: typing.Optional[torch.Tensor] = None, head_mask: typing.Optional[torch.Tensor] = None, inputs_embeds: typing.Optional[torch.Tensor] = None, start_positions: typing.Optional[torch.Tensor] = None, end_positions: typing.Optional[torch.Tensor] = None, use_mems: typing.Optional[bool] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None, **kwargs )` → [transformers.models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringSimpleOutput](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringSimpleOutput) or `tuple(torch.FloatTensor)`

Parameters:

The arguments `input_ids`, `attention_mask`, `mems`, `perm_mask`, `target_mapping`, `token_type_ids`, `input_mask`, `head_mask`, `inputs_embeds`, `output_attentions`, `output_hidden_states` and `return_dict` have the same meaning as for `XLNetForTokenClassification.forward()` above. The arguments specific to question answering are:

- **start_positions** (`torch.LongTensor` of shape `(batch_size,)`, *optional*) — Labels for the position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (`sequence_length`). Positions outside of the sequence are not taken into account for computing the loss.

- **end_positions** (`torch.LongTensor` of shape `(batch_size,)`, *optional*) — Labels for the position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (`sequence_length`). Positions outside of the sequence are not taken into account for computing the loss.

**Returns:** [transformers.models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringSimpleOutput](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringSimpleOutput) or `tuple(torch.FloatTensor)`

A [transformers.models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringSimpleOutput](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringSimpleOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([XLNetConfig](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetConfig)) and inputs.

- **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `start_positions` and `end_positions` are provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
- **start_logits** (`torch.FloatTensor` of shape `(batch_size, sequence_length,)`) — Span-start scores (before SoftMax).
- **end_logits** (`torch.FloatTensor` of shape `(batch_size, sequence_length,)`) — Span-end scores (before SoftMax).
- **mems** (`List[torch.FloatTensor]` of length `config.n_layers`) — Contains pre-computed hidden-states. Can be used (see `mems` input) to speed up sequential decoding. The token ids which have their past given to this model should not be passed as `input_ids` as they have already been computed.
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The [XLNetForQuestionAnsweringSimple](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetForQuestionAnsweringSimple) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, XLNetForQuestionAnsweringSimple <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"xlnet-base-cased"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = XLNetForQuestionAnsweringSimple.from_pretrained(<span class="hljs-string">"xlnet-base-cased"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>question, text = <span class="hljs-string">"Who was Jim Henson?"</span>, <span class="hljs-string">"Jim Henson was a nice puppet"</span> <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(question, text, return_tensors=<span class="hljs-string">"pt"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">with</span> torch.no_grad(): <span class="hljs-meta">... 
## XLNetForQuestionAnswering

### class transformers.XLNetForQuestionAnswering
id="transformers.XLNetForQuestionAnswering" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XLNetForQuestionAnswering"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_xlnet.py#L1909" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetForQuestionAnswering.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetForQuestionAnswering.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetConfig">XLNetConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. 
Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1gmn8ay">XLNet Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute <code>span start logits</code> and <code>span end logits</code>).</p> <p data-svelte-h="svelte-hmtw9k">This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel">PreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)</p> <p data-svelte-h="svelte-hswkmf">This model is also a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.XLNetForQuestionAnswering.forward"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>forward</span></h4> <a id="transformers.XLNetForQuestionAnswering.forward" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.XLNetForQuestionAnswering.forward"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 
0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_xlnet.py#L1923" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">mems<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">perm_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">target_mapping<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_type_ids<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">inputs_embeds<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">start_positions<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">end_positions<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">is_impossible<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">cls_index<span class="opacity-60">: 
typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">p_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">use_mems<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringOutput">transformers.models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringOutput</a> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 17 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetForQuestionAnswering.forward.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetForQuestionAnswering.forward.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" 
fill="currentColor"></path></svg></span></a> <span><strong>input_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>) — Indices of input sequence tokens in the vocabulary.<p></p> <p>Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p> <p><a href="../glossary#input-ids">What are input IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetForQuestionAnswering.forward.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetForQuestionAnswering.forward.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetForQuestionAnswering.forward.mems" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetForQuestionAnswering.forward.mems"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>mems</strong> (<code>List[torch.FloatTensor]</code> of length <code>config.n_layers</code>) — Contains pre-computed hidden-states (see <code>mems</code> output below) . Can be used to speed up sequential decoding. The token ids which have their past given to this model should not be passed as <code>input_ids</code> as they have already been computed.<p></p> <p><code>use_mems</code> has to be set to <code>True</code> to make use of <code>mems</code>.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetForQuestionAnswering.forward.perm_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetForQuestionAnswering.forward.perm_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>perm_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, sequence_length)</code>, <em>optional</em>) — Mask to indicate the attention pattern for each input token with values selected in <code>[0, 1]</code>:<p></p> <ul> <li>if <code>perm_mask[k, i, j] = 0</code>, i attend to j in batch k;</li> <li>if <code>perm_mask[k, i, j] = 1</code>, i does not attend to j in batch k.</li> </ul> <p>If not set, each token 
attends to all the others (full bidirectional attention). Only used during pretraining (to define factorization order) or for sequential decoding (generation).</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetForQuestionAnswering.forward.target_mapping" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetForQuestionAnswering.forward.target_mapping"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>target_mapping</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, num_predict, sequence_length)</code>, <em>optional</em>) — Mask to indicate the output tokens to use. If <code>target_mapping[k, i, j] = 1</code>, the i-th predict in batch k is on the j-th token. Only used during pretraining for partial prediction or for sequential decoding (generation).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetForQuestionAnswering.forward.token_type_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetForQuestionAnswering.forward.token_type_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_type_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Segment token indices to indicate first and second portions of the inputs. 
Indices are selected in <code>[0, 1]</code>:<p></p> <ul> <li>0 corresponds to a <em>sentence A</em> token,</li> <li>1 corresponds to a <em>sentence B</em> token.</li> </ul> <p><a href="../glossary#token-type-ids">What are token type IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetForQuestionAnswering.forward.input_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetForQuestionAnswering.forward.input_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_mask</strong> (<code>torch.FloatTensor</code> of shape <code>batch_size, sequence_length</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. Negative of <code>attention_mask</code>, i.e. with 0 for real tokens and 1 for padding which is kept for compatibility with the original code base.<p></p> <p>Mask values selected in <code>[0, 1]</code>:</p> <ul> <li>1 for tokens that are <strong>masked</strong>,</li> <li>0 for tokens that are <strong>not masked</strong>.</li> </ul> <p>You can only uses one of <code>input_mask</code> and <code>attention_mask</code>.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetForQuestionAnswering.forward.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetForQuestionAnswering.forward.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>head_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(num_heads,)</code> or <code>(num_layers, num_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the self-attention modules. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetForQuestionAnswering.forward.inputs_embeds" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetForQuestionAnswering.forward.inputs_embeds"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>inputs_embeds</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>, <em>optional</em>) — Optionally, instead of passing <code>input_ids</code> you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert <code>input_ids</code> indices into associated vectors than the model’s internal embedding lookup matrix.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetForQuestionAnswering.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetForQuestionAnswering.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. 
See <code>attentions</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetForQuestionAnswering.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetForQuestionAnswering.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. See <code>hidden_states</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetForQuestionAnswering.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetForQuestionAnswering.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetForQuestionAnswering.forward.start_positions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetForQuestionAnswering.forward.start_positions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 
256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>start_positions</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size,)</code>, <em>optional</em>) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (<code>sequence_length</code>). Position outside of the sequence are not taken into account for computing the loss.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetForQuestionAnswering.forward.end_positions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetForQuestionAnswering.forward.end_positions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>end_positions</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size,)</code>, <em>optional</em>) — Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (<code>sequence_length</code>). 
Position outside of the sequence are not taken into account for computing the loss.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetForQuestionAnswering.forward.is_impossible" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetForQuestionAnswering.forward.is_impossible"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>is_impossible</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size,)</code>, <em>optional</em>) — Labels whether a question has an answer or no answer (SQuAD 2.0)</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetForQuestionAnswering.forward.cls_index" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetForQuestionAnswering.forward.cls_index"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>cls_index</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size,)</code>, <em>optional</em>) — Labels for position (index) of the classification token to use as input for computing plausibility of the answer.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.XLNetForQuestionAnswering.forward.p_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetForQuestionAnswering.forward.p_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 
8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>p_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Optional mask of tokens which can’t be in answers (e.g. [CLS], [PAD], …). 1.0 means token should be masked. 0.0 mean token is not masked.</span></span> </li></ul> <div id="transformers.XLNetForQuestionAnswering.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringOutput">transformers.models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringOutput</a> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringOutput">transformers.models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringOutput</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetConfig">XLNetConfig</a>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<code>torch.FloatTensor</code> of shape <code>(1,)</code>, <em>optional</em>, returned if both <code>start_positions</code> and <code>end_positions</code> are provided) — Classification loss as the sum of start token, end token (and is_impossible if provided) classification losses.</p> </li> <li> <p><strong>start_top_log_probs</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, config.start_n_top)</code>, <em>optional</em>, returned if <code>start_positions</code> or <code>end_positions</code> is not provided) — Log probabilities for the top config.start_n_top start token possibilities (beam-search).</p> </li> <li> <p><strong>start_top_index</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, config.start_n_top)</code>, <em>optional</em>, returned if <code>start_positions</code> or <code>end_positions</code> is not provided) — Indices for the top config.start_n_top start token possibilities (beam-search).</p> </li> <li> <p><strong>end_top_log_probs</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, config.start_n_top * config.end_n_top)</code>, <em>optional</em>, returned if <code>start_positions</code> or <code>end_positions</code> is not provided) — Log probabilities for the top <code>config.start_n_top * config.end_n_top</code> end token possibilities (beam-search).</p> </li> <li> <p><strong>end_top_index</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, config.start_n_top * config.end_n_top)</code>, <em>optional</em>, returned if <code>start_positions</code> or <code>end_positions</code> is not provided) — Indices for 
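As a hedged aside on the `mems`/`use_mems` parameters documented above (this sketch is not part of the original reference): the cache returned by one forward pass can be fed back into the next call so that the earlier segment is not recomputed. The sentences used here are arbitrary placeholders.

```python
>>> # Illustrative only: reuse the cached hidden states ("mems") across two segments.
>>> from transformers import AutoTokenizer, XLNetForQuestionAnswering
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
>>> model = XLNetForQuestionAnswering.from_pretrained("xlnet-base-cased")

>>> first = tokenizer("Jim Henson was a puppeteer.", return_tensors="pt")
>>> second = tokenizer("He created the Muppets.", return_tensors="pt")

>>> with torch.no_grad():
...     out_first = model(**first, use_mems=True)  # ask the model to return its cache
...     out_second = model(**second, mems=out_first.mems, use_mems=True)  # reuse the cache
```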
the top <code>config.start_n_top * config.end_n_top</code> end token possibilities (beam-search).</p> </li> <li> <p><strong>cls_logits</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size,)</code>, <em>optional</em>, returned if <code>start_positions</code> or <code>end_positions</code> is not provided) — Log probabilities for the <code>is_impossible</code> label of the answers.</p> </li> <li> <p><strong>mems</strong> (<code>List[torch.FloatTensor]</code> of length <code>config.n_layers</code>) — Contains pre-computed hidden-states. Can be used (see <code>mems</code> input) to speed up sequential decoding. The token ids which have their past given to this model should not be passed as <code>input_ids</code> as they have already been computed.</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-wny6qk">The <a href="/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetForQuestionAnswering">XLNetForQuestionAnswering</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.XLNetForQuestionAnswering.forward.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.XLNetForQuestionAnswering.forward.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 
56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, XLNetForQuestionAnswering <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"xlnet-base-cased"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = XLNetForQuestionAnswering.from_pretrained(<span class="hljs-string">"xlnet-base-cased"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>input_ids = torch.tensor(tokenizer.encode(<span class="hljs-string">"Hello, my dog is cute"</span>, add_special_tokens=<span class="hljs-literal">True</span>)).unsqueeze( <span class="hljs-meta">... </span> <span class="hljs-number">0</span> <span class="hljs-meta">... 
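When `start_positions` and `end_positions` are omitted, the same call returns the beam-search style fields listed above instead of a loss. Below is a minimal sketch of inspecting them, reusing the `model` and `input_ids` objects from the example:

```python
>>> outputs = model(input_ids)

>>> # Top start-token candidates: shape (batch_size, config.start_n_top)
>>> outputs.start_top_log_probs.shape, outputs.start_top_index.shape

>>> # Top end-token candidates: shape (batch_size, config.start_n_top * config.end_n_top)
>>> outputs.end_top_log_probs.shape, outputs.end_top_index.shape

>>> # Score for the "is_impossible" (unanswerable) label: shape (batch_size,)
>>> outputs.cls_logits.shape
```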
## TFXLNetModel

### class transformers.TFXLNetModel

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_tf_xlnet.py#L1132)

`( *args, **kwargs )`

Parameters:

- **config** ([XLNetConfig](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

The bare XLNet Model transformer outputting raw hidden-states without any specific head on top.

This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

TensorFlow models and layers in `transformers` accept two formats as input:

- having all inputs as keyword arguments (like PyTorch models), or
- having all inputs as a list, tuple or dict in the first positional argument.

The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first positional argument (a short sketch follows below):

- a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
- a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
- a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`

Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) you don't need to worry about any of this, as you can just pass inputs like you would to any other Python function!
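For concreteness, here is a minimal sketch of those three call styles. It assumes the `xlnet-base-cased` checkpoint used elsewhere on this page; any XLNet checkpoint behaves the same way.

```python
>>> from transformers import AutoTokenizer, TFXLNetModel

>>> tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
>>> model = TFXLNetModel.from_pretrained("xlnet-base-cased")
>>> encoded = tokenizer("Hello, my dog is cute", return_tensors="tf")

>>> # 1. everything as keyword arguments (PyTorch-style)
>>> out = model(input_ids=encoded["input_ids"], attention_mask=encoded["attention_mask"])

>>> # 2. a list of tensors, in the order given in the docstring
>>> out = model([encoded["input_ids"], encoded["attention_mask"]])

>>> # 3. a dictionary mapping input names to tensors
>>> out = model({"input_ids": encoded["input_ids"], "attention_mask": encoded["attention_mask"]})
```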
d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>call</span></h4> <a id="transformers.TFXLNetModel.call" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.TFXLNetModel.call"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_tf_xlnet.py#L1137" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60">: TFModelInputType | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">mems<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">perm_mask<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">target_mapping<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_type_ids<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white 
dark:hover:text-black">input_mask<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_mask<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">inputs_embeds<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">use_mems<span class="opacity-60">: Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: Optional[bool] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">training<span class="opacity-60">: bool = False</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_tf_xlnet.TFXLNetModelOutput">transformers.models.xlnet.modeling_tf_xlnet.TFXLNetModelOutput</a> or <code>tuple(tf.Tensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 12 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLNetModel.call.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLNetModel.call.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 
0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>) — Indices of input sequence tokens in the vocabulary.<p></p> <p>Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p> <p><a href="../glossary#input-ids">What are input IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLNetModel.call.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLNetModel.call.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLNetModel.call.mems" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLNetModel.call.mems"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>mems</strong> (<code>List[torch.FloatTensor]</code> of length <code>config.n_layers</code>) — Contains pre-computed hidden-states (see <code>mems</code> output below) . Can be used to speed up sequential decoding. The token ids which have their past given to this model should not be passed as <code>input_ids</code> as they have already been computed.<p></p> <p><code>use_mems</code> has to be set to <code>True</code> to make use of <code>mems</code>.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLNetModel.call.perm_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLNetModel.call.perm_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>perm_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, sequence_length)</code>, <em>optional</em>) — Mask to indicate the attention pattern for each input token with values selected in <code>[0, 1]</code>:<p></p> <ul> <li>if <code>perm_mask[k, i, j] = 0</code>, i attend to j in batch k;</li> <li>if <code>perm_mask[k, i, j] = 1</code>, i does not attend to j in batch k.</li> </ul> <p>If not set, each token attends to all the others (full bidirectional attention). 
Only used during pretraining (to define factorization order) or for sequential decoding (generation).</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLNetModel.call.target_mapping" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLNetModel.call.target_mapping"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>target_mapping</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, num_predict, sequence_length)</code>, <em>optional</em>) — Mask to indicate the output tokens to use. If <code>target_mapping[k, i, j] = 1</code>, the i-th predict in batch k is on the j-th token. Only used during pretraining for partial prediction or for sequential decoding (generation).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLNetModel.call.token_type_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLNetModel.call.token_type_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_type_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Segment token indices to indicate first and second portions of the inputs. 
Indices are selected in <code>[0, 1]</code>:<p></p> <ul> <li>0 corresponds to a <em>sentence A</em> token,</li> <li>1 corresponds to a <em>sentence B</em> token.</li> </ul> <p><a href="../glossary#token-type-ids">What are token type IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLNetModel.call.input_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLNetModel.call.input_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_mask</strong> (<code>torch.FloatTensor</code> of shape <code>batch_size, sequence_length</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. Negative of <code>attention_mask</code>, i.e. with 0 for real tokens and 1 for padding which is kept for compatibility with the original code base.<p></p> <p>Mask values selected in <code>[0, 1]</code>:</p> <ul> <li>1 for tokens that are <strong>masked</strong>,</li> <li>0 for tokens that are <strong>not masked</strong>.</li> </ul> <p>You can only uses one of <code>input_mask</code> and <code>attention_mask</code>.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLNetModel.call.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLNetModel.call.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>head_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(num_heads,)</code> or <code>(num_layers, num_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the self-attention modules. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLNetModel.call.inputs_embeds" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLNetModel.call.inputs_embeds"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>inputs_embeds</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>, <em>optional</em>) — Optionally, instead of passing <code>input_ids</code> you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert <code>input_ids</code> indices into associated vectors than the model’s internal embedding lookup matrix.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLNetModel.call.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLNetModel.call.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. 
See <code>attentions</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLNetModel.call.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLNetModel.call.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. See <code>hidden_states</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLNetModel.call.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLNetModel.call.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li></ul> <div id="transformers.TFXLNetModel.call.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_tf_xlnet.TFXLNetModelOutput">transformers.models.xlnet.modeling_tf_xlnet.TFXLNetModelOutput</a> or <code>tuple(tf.Tensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a 
href="/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_tf_xlnet.TFXLNetModelOutput">transformers.models.xlnet.modeling_tf_xlnet.TFXLNetModelOutput</a> or a tuple of <code>tf.Tensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetConfig">XLNetConfig</a>) and inputs.</p> <ul> <li> <p><strong>last_hidden_state</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, num_predict, hidden_size)</code>) — Sequence of hidden-states at the last layer of the model.</p> <p><code>num_predict</code> corresponds to <code>target_mapping.shape[1]</code>. If <code>target_mapping</code> is <code>None</code>, then <code>num_predict</code> corresponds to <code>sequence_length</code>.</p> </li> <li> <p><strong>mems</strong> (<code>List[tf.Tensor]</code> of length <code>config.n_layers</code>) — Contains pre-computed hidden-states. Can be used (see <code>mems</code> input) to speed up sequential decoding. The token ids which have their past given to this model should not be passed as <code>input_ids</code> as they have already been computed.</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>tf.Tensor</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>tf.Tensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-1ej5qjk">The <a href="/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.TFXLNetModel">TFXLNetModel</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.TFXLNetModel.call.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLNetModel.call.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 
8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, TFXLNetModel <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> tensorflow <span class="hljs-keyword">as</span> tf <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"xlnet-base-cased"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = TFXLNetModel.from_pretrained(<span class="hljs-string">"xlnet-base-cased"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(<span class="hljs-string">"Hello, my dog is cute"</span>, return_tensors=<span class="hljs-string">"tf"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model(inputs) <span class="hljs-meta">&gt;&gt;&gt; </span>last_hidden_states = outputs.last_hidden_state</pre></div></div></div></div> <h2 class="relative group"><a id="transformers.TFXLNetLMHeadModel" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLNetLMHeadModel"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 
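Building on the example above, the `mems` returned by one call can be fed back into the next call to reuse cached hidden states for a follow-up segment. This is a hedged sketch rather than an official recipe: it assumes the same `tokenizer`, `model`, and `inputs` objects, and the exact caching behaviour also depends on the checkpoint's `mem_len` configuration. Note that `use_mems=True` must be passed for the cache to be used.

```python
>>> # First segment: ask the model to return its memory cache
>>> first = model(**inputs, use_mems=True)

>>> # Second segment: reuse the cached hidden states from the first call
>>> next_inputs = tokenizer("It likes to play fetch.", return_tensors="tf")
>>> second = model(**next_inputs, use_mems=True, mems=first.mems)

>>> last_hidden_states = second.last_hidden_state
```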
## TFXLNetLMHeadModel

### class transformers.TFXLNetLMHeadModel

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_tf_xlnet.py#L1187)

`( *args, **kwargs )`

Parameters:

- **config** ([XLNetConfig](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

XLNet Model with a language modeling head on top (linear layer with weights tied to the input embeddings). A short usage sketch follows the notes below.

This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

TensorFlow models and layers in `transformers` accept two formats as input:

- having all inputs as keyword arguments (like PyTorch models), or
- having all inputs as a list, tuple or dict in the first positional argument.

The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

- a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
- a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
- a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`

Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) you don't need to worry about any of this, as you can just pass inputs like you would to any other Python function!
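As a quick orientation before the `call()` reference below, here is a hedged sketch of next-token prediction with the language-modeling head. It follows the `perm_mask`/`target_mapping` semantics documented for `call()`; the `<mask>` placeholder string and the `xlnet-base-cased` checkpoint are illustrative choices rather than the only valid ones.

```python
>>> import numpy as np
>>> import tensorflow as tf
>>> from transformers import AutoTokenizer, TFXLNetLMHeadModel

>>> tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
>>> model = TFXLNetLMHeadModel.from_pretrained("xlnet-base-cased")

>>> # Predict the final (masked) token, letting it use bidirectional context over the rest of the sequence
>>> input_ids = tf.constant(tokenizer.encode("Hello, my dog is very <mask>", add_special_tokens=True))[None, :]

>>> # perm_mask[k, i, j] = 1 means token i may not attend to token j: hide the last token from every position
>>> perm_mask = np.zeros((1, input_ids.shape[1], input_ids.shape[1]), dtype=np.float32)
>>> perm_mask[:, :, -1] = 1.0

>>> # target_mapping selects the positions to predict: here, only the last token
>>> target_mapping = np.zeros((1, 1, input_ids.shape[1]), dtype=np.float32)
>>> target_mapping[0, 0, -1] = 1.0

>>> outputs = model(input_ids, perm_mask=tf.constant(perm_mask), target_mapping=tf.constant(target_mapping))
>>> next_token_logits = outputs.logits  # shape (1, 1, vocab_size)
```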
#### call

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_tf_xlnet.py#L1241)

( input_ids: TFModelInputType | None = None, attention_mask: np.ndarray | tf.Tensor | None = None, mems: np.ndarray | tf.Tensor | None = None, perm_mask: np.ndarray | tf.Tensor | None = None, target_mapping: np.ndarray | tf.Tensor | None = None, token_type_ids: np.ndarray | tf.Tensor | None = None, input_mask: np.ndarray | tf.Tensor | None = None, head_mask: np.ndarray | tf.Tensor | None = None, inputs_embeds: np.ndarray | tf.Tensor | None = None, use_mems: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, labels: np.ndarray | tf.Tensor | None = None, training: bool = False ) → [transformers.models.xlnet.modeling_tf_xlnet.TFXLNetLMHeadModelOutput](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_tf_xlnet.TFXLNetLMHeadModelOutput) or `tuple(tf.Tensor)`

**Parameters**

- **input_ids** (`tf.Tensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.\_\_call\_\_()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details. [What are input IDs?](../glossary#input-ids)
- **attention_mask** (`tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask)
- **mems** (`List[tf.Tensor]` of length `config.n_layers`) — Contains pre-computed hidden-states (see `mems` output below). Can be used to speed up sequential decoding. The token ids which have their past given to this model should not be passed as `input_ids` as they have already been computed. `use_mems` has to be set to `True` to make use of `mems`.
- **perm_mask** (`tf.Tensor` of shape `(batch_size, sequence_length, sequence_length)`, *optional*) — Mask to indicate the attention pattern for each input token with values selected in `[0, 1]`: if `perm_mask[k, i, j] = 0`, i attends to j in batch k; if `perm_mask[k, i, j] = 1`, i does not attend to j in batch k. If not set, each token attends to all the others (full bidirectional attention). Only used during pretraining (to define factorization order) or for sequential decoding (generation).
- **target_mapping** (`tf.Tensor` of shape `(batch_size, num_predict, sequence_length)`, *optional*) — Mask to indicate the output tokens to use. If `target_mapping[k, i, j] = 1`, the i-th prediction in batch k is on the j-th token. Only used during pretraining for partial prediction or for sequential decoding (generation).
- **token_type_ids** (`tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: 0 corresponds to a *sentence A* token, 1 corresponds to a *sentence B* token. [What are token type IDs?](../glossary#token-type-ids)
- **input_mask** (`tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Negative of `attention_mask`, i.e. with 0 for real tokens and 1 for padding, kept for compatibility with the original code base. Mask values selected in `[0, 1]`: 1 for tokens that are **masked**, 0 for tokens that are **not masked**. You can only use one of `input_mask` and `attention_mask`.
- **head_mask** (`tf.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: 1 indicates the head is **not masked**, 0 indicates the head is **masked**.
- **inputs_embeds** (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.
- **labels** (`tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Labels for computing the cross entropy classification loss. Indices should be in `[0, ..., config.vocab_size - 1]`.

**Returns**

[transformers.models.xlnet.modeling_tf_xlnet.TFXLNetLMHeadModelOutput](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_tf_xlnet.TFXLNetLMHeadModelOutput) or `tuple(tf.Tensor)`

A [transformers.models.xlnet.modeling_tf_xlnet.TFXLNetLMHeadModelOutput](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_tf_xlnet.TFXLNetLMHeadModelOutput) or a tuple of `tf.Tensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([XLNetConfig](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetConfig)) and inputs.

- **loss** (`tf.Tensor` of shape `(1,)`, *optional*, returned when `labels` is provided) — Language modeling loss (for next-token prediction).
- **logits** (`tf.Tensor` of shape `(batch_size, num_predict, config.vocab_size)`) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). `num_predict` corresponds to `target_mapping.shape[1]`. If `target_mapping` is `None`, then `num_predict` corresponds to `sequence_length`.
- **mems** (`List[tf.Tensor]` of length `config.n_layers`) — Contains pre-computed hidden-states. Can be used (see `mems` input) to speed up sequential decoding. The token ids which have their past given to this model should not be passed as `input_ids` as they have already been computed.
- **hidden_states** (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- **attentions** (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The [TFXLNetLMHeadModel](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.TFXLNetLMHeadModel) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Examples:
```python
>>> import tensorflow as tf
>>> import numpy as np
>>> from transformers import AutoTokenizer, TFXLNetLMHeadModel

>>> tokenizer = AutoTokenizer.from_pretrained("xlnet-large-cased")
>>> model = TFXLNetLMHeadModel.from_pretrained("xlnet-large-cased")

>>> # We show how to set up inputs to predict a next token using a bi-directional context.
>>> input_ids = tf.constant(tokenizer.encode("Hello, my dog is very <mask>", add_special_tokens=True))[
...     None, :
... ]  # We will predict the masked token

>>> perm_mask = np.zeros((1, input_ids.shape[1], input_ids.shape[1]))
>>> perm_mask[:, :, -1] = 1.0  # Previous tokens don't see the last token

>>> target_mapping = np.zeros(
...     (1, 1, input_ids.shape[1])
... )  # Shape [1, 1, seq_length] => let's predict one token
>>> target_mapping[
...     0, 0, -1
... ] = 1.0  # Our first (and only) prediction will be the last token of the sequence (the masked token)

>>> outputs = model(
...     input_ids,
...     perm_mask=tf.constant(perm_mask, dtype=tf.float32),
...     target_mapping=tf.constant(target_mapping, dtype=tf.float32),
... )

>>> next_token_logits = outputs[
...     0
... ]  # Output has shape [target_mapping.shape[0], target_mapping.shape[1], config.vocab_size]
```
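As a small follow-up (not part of the library's example), the predicted token can be read directly off these logits; the variable names continue from the snippet above:

```python
>>> # Pick the most likely token for the single prediction slot defined by `target_mapping`
>>> # and map it back to text.
>>> predicted_token_id = int(tf.argmax(next_token_logits[0, 0], axis=-1))
>>> predicted_token = tokenizer.decode([predicted_token_id])
```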
## TFXLNetForSequenceClassification

### class transformers.TFXLNetForSequenceClassification

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_tf_xlnet.py#L1347)

`( *args, **kwargs )`

**Parameters**

- **config** ([XLNetConfig](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

XLNet Model with a sequence classification/regression head on top (a linear layer on top of the pooled output), e.g. for GLUE tasks.

This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.).

This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

TensorFlow models and layers in `transformers` accept two formats as input:

- having all inputs as keyword arguments (like PyTorch models), or
- having all inputs as a list, tuple or dict in the first positional argument.

The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

- a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
- a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
- a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`

Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) you don't need to worry about any of this, as you can just pass inputs like you would to any other Python function!
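Before the full signature of `call()` below, a minimal usage sketch (not from the original docs): the `xlnet-base-cased` checkpoint is assumed purely to illustrate the API, so its classification head is freshly initialized and a fine-tuned checkpoint would normally be used instead:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFXLNetForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")  # assumed checkpoint
model = TFXLNetForSequenceClassification.from_pretrained("xlnet-base-cased")

inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
labels = tf.constant([1])  # hypothetical class label, only to show how the loss is computed

outputs = model(**inputs, labels=labels)
loss, logits = outputs.loss, outputs.logits  # logits shape: (batch_size, num_labels)
predicted_class_id = int(tf.argmax(logits, axis=-1)[0])
```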
#### call

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_tf_xlnet.py#L1360)

( input_ids: TFModelInputType | None = None, attention_mask: np.ndarray | tf.Tensor | None = None, mems: np.ndarray | tf.Tensor | None = None, perm_mask: np.ndarray | tf.Tensor | None = None, target_mapping: np.ndarray | tf.Tensor | None = None, token_type_ids: np.ndarray | tf.Tensor | None = None, input_mask: np.ndarray | tf.Tensor | None = None, head_mask: np.ndarray | tf.Tensor | None = None, inputs_embeds: np.ndarray | tf.Tensor | None = None, use_mems: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, labels: np.ndarray | tf.Tensor | None = None, training: bool = False ) → [transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForSequenceClassificationOutput](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForSequenceClassificationOutput) or `tuple(tf.Tensor)`

**Parameters**

- **input_ids** (`tf.Tensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.\_\_call\_\_()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details. [What are input IDs?](../glossary#input-ids)
- **attention_mask** (`tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask)
- **mems** (`List[tf.Tensor]` of length `config.n_layers`) — Contains pre-computed hidden-states (see `mems` output below). Can be used to speed up sequential decoding. The token ids which have their past given to this model should not be passed as `input_ids` as they have already been computed. `use_mems` has to be set to `True` to make use of `mems`.
- **perm_mask** (`tf.Tensor` of shape `(batch_size, sequence_length, sequence_length)`, *optional*) — Mask to indicate the attention pattern for each input token with values selected in `[0, 1]`: if `perm_mask[k, i, j] = 0`, i attends to j in batch k; if `perm_mask[k, i, j] = 1`, i does not attend to j in batch k. If not set, each token attends to all the others (full bidirectional attention). Only used during pretraining (to define factorization order) or for sequential decoding (generation).
- **target_mapping** (`tf.Tensor` of shape `(batch_size, num_predict, sequence_length)`, *optional*) — Mask to indicate the output tokens to use. If `target_mapping[k, i, j] = 1`, the i-th prediction in batch k is on the j-th token. Only used during pretraining for partial prediction or for sequential decoding (generation).
- **token_type_ids** (`tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: 0 corresponds to a *sentence A* token, 1 corresponds to a *sentence B* token. [What are token type IDs?](../glossary#token-type-ids)
- **input_mask** (`tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Negative of `attention_mask`, i.e. with 0 for real tokens and 1 for padding, kept for compatibility with the original code base. Mask values selected in `[0, 1]`: 1 for tokens that are **masked**, 0 for tokens that are **not masked**. You can only use one of `input_mask` and `attention_mask`.
- **head_mask** (`tf.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: 1 indicates the head is **not masked**, 0 indicates the head is **masked**.
- **inputs_embeds** (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers.
See <code>attentions</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLNetForSequenceClassification.call.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLNetForSequenceClassification.call.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. See <code>hidden_states</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLNetForSequenceClassification.call.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLNetForSequenceClassification.call.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLNetForSequenceClassification.call.labels" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLNetForSequenceClassification.call.labels"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" 
viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>labels</strong> (<code>tf.Tensor</code> of shape <code>(batch_size,)</code>, <em>optional</em>) — Labels for computing the sequence classification/regression loss. Indices should be in <code>[0, ..., config.num_labels - 1]</code>. If <code>config.num_labels == 1</code> a regression loss is computed (Mean-Square loss), If <code>config.num_labels &gt; 1</code> a classification loss is computed (Cross-Entropy).</span></span> </li></ul> <div id="transformers.TFXLNetForSequenceClassification.call.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForSequenceClassificationOutput">transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForSequenceClassificationOutput</a> or <code>tuple(tf.Tensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForSequenceClassificationOutput">transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForSequenceClassificationOutput</a> or a tuple of <code>tf.Tensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetConfig">XLNetConfig</a>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<code>tf.Tensor</code> of shape <code>(1,)</code>, <em>optional</em>, returned when <code>label</code> is provided) — Classification (or regression if config.num_labels==1) loss.</p> </li> <li> <p><strong>logits</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, config.num_labels)</code>) — Classification (or regression if config.num_labels==1) scores (before SoftMax).</p> </li> <li> <p><strong>mems</strong> (<code>List[tf.Tensor]</code> of length <code>config.n_layers</code>) — Contains pre-computed hidden-states. Can be used (see <code>mems</code> input) to speed up sequential decoding. 
The token ids which have their past given to this model should not be passed as <code>input_ids</code> as they have already been computed.</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>tf.Tensor</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>tf.Tensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-1bh4n0u">The <a href="/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.TFXLNetForSequenceClassification">TFXLNetForSequenceClassification</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.TFXLNetForSequenceClassification.call.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLNetForSequenceClassification.call.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" 
viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, TFXLNetForSequenceClassification <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> tensorflow <span class="hljs-keyword">as</span> tf <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"xlnet-base-cased"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = TFXLNetForSequenceClassification.from_pretrained(<span class="hljs-string">"xlnet-base-cased"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(<span class="hljs-string">"Hello, my dog is cute"</span>, return_tensors=<span class="hljs-string">"tf"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>logits = model(**inputs).logits <span class="hljs-meta">&gt;&gt;&gt; </span>predicted_class_id = <span class="hljs-built_in">int</span>(tf.math.argmax(logits, axis=-<span class="hljs-number">1</span>)[<span class="hljs-number">0</span>])</pre></div></div> <div class="relative group rounded-md"><a id="transformers.TFXLNetForSequenceClassification.call.example-2" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLNetForSequenceClassification.call.example-2"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" 
transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`</span> <span class="hljs-meta">&gt;&gt;&gt; </span>num_labels = <span class="hljs-built_in">len</span>(model.config.id2label) <span class="hljs-meta">&gt;&gt;&gt; </span>model = TFXLNetForSequenceClassification.from_pretrained(<span class="hljs-string">"xlnet-base-cased"</span>, num_labels=num_labels) <span class="hljs-meta">&gt;&gt;&gt; </span>labels = tf.constant(<span class="hljs-number">1</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>loss = model(**inputs, labels=labels).loss</pre></div></div></div></div> <h2 class="relative group"><a id="transformers.TFXLNetForMultipleChoice" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLNetForMultipleChoice"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-bd8mo">TFLNetForMultipleChoice</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.TFXLNetForMultipleChoice"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" 
d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">TFXLNetForMultipleChoice</span></span></h3> <a id="transformers.TFXLNetForMultipleChoice" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.TFXLNetForMultipleChoice"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_tf_xlnet.py#L1434" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">*args<span class="opacity-60"></span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLNetForMultipleChoice.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLNetForMultipleChoice.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" 
fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetConfig">XLNetConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-11xkuel">XLNET Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks.</p> <p data-svelte-h="svelte-1i0vt4o">This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel">TFPreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)</p> <p data-svelte-h="svelte-1ivrf8m">This model is also a <a href="https://www.tensorflow.org/api_docs/python/tf/keras/Model" rel="nofollow">tf.keras.Model</a> subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-1ajbfxg">TensorFlow models and layers in <code>transformers</code> accept two formats as input:</p> <ul data-svelte-h="svelte-qm1t26"><li>having all inputs as keyword arguments (like PyTorch models), or</li> <li>having all inputs as a list, tuple or dict in the first positional argument.</li></ul> <p data-svelte-h="svelte-1v9qsc5">The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like <code>model.fit()</code> things should “just work” for you - just pass your inputs and labels in any format that <code>model.fit()</code> supports! 
TensorFlow models and layers in `transformers` accept two formats as input:

- having all inputs as keyword arguments (like PyTorch models), or
- having all inputs as a list, tuple or dict in the first positional argument.

The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

- a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
- a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
- a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`

Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry about any of this, as you can just pass inputs like you would to any other Python function!
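To make the calling conventions above concrete, here is a brief sketch that is not part of the original documentation; it assumes the `tokenizer` and sequence-classification `model` from the earlier example, but the same patterns apply to any TensorFlow model in the library.

```python
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")

>>> # 1. All inputs as keyword arguments.
>>> outputs = model(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"])

>>> # 2. A single tensor (input_ids only) as the first positional argument.
>>> outputs = model(inputs["input_ids"])

>>> # 3. A dictionary of named inputs as the first positional argument.
>>> outputs = model({"input_ids": inputs["input_ids"], "attention_mask": inputs["attention_mask"]})
```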
#### call

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_tf_xlnet.py#L1446)

`( input_ids: TFModelInputType | None = None, token_type_ids: np.ndarray | tf.Tensor | None = None, input_mask: np.ndarray | tf.Tensor | None = None, attention_mask: np.ndarray | tf.Tensor | None = None, mems: np.ndarray | tf.Tensor | None = None, perm_mask: np.ndarray | tf.Tensor | None = None, target_mapping: np.ndarray | tf.Tensor | None = None, head_mask: np.ndarray | tf.Tensor | None = None, inputs_embeds: np.ndarray | tf.Tensor | None = None, use_mems: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, labels: np.ndarray | tf.Tensor | None = None, training: bool = False )` → [transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForMultipleChoiceOutput](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForMultipleChoiceOutput) or `tuple(tf.Tensor)`

**Parameters:**

- **input_ids** (`tf.Tensor` of shape `(batch_size, num_choices, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer).
  See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [`PreTrainedTokenizer.__call__()`](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details.

  [What are input IDs?](../glossary#input-ids)

- **attention_mask** (`tf.Tensor` of shape `(batch_size, num_choices, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:

  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.

  [What are attention masks?](../glossary#attention-mask)

- **mems** (`List[tf.Tensor]` of length `config.n_layers`) — Contains pre-computed hidden-states (see `mems` output below). Can be used to speed up sequential decoding. The token ids which have their past given to this model should not be passed as `input_ids` as they have already been computed.

  `use_mems` has to be set to `True` to make use of `mems`.

- **perm_mask** (`tf.Tensor` of shape `(batch_size, sequence_length, sequence_length)`, *optional*) — Mask to indicate the attention pattern for each input token with values selected in `[0, 1]`:

  - if `perm_mask[k, i, j] = 0`, i attends to j in batch k;
  - if `perm_mask[k, i, j] = 1`, i does not attend to j in batch k.

  If not set, each token attends to all the others (full bidirectional attention). Only used during pretraining (to define factorization order) or for sequential decoding (generation).

- **target_mapping** (`tf.Tensor` of shape `(batch_size, num_predict, sequence_length)`, *optional*) — Mask to indicate the output tokens to use. If `target_mapping[k, i, j] = 1`, the i-th prediction in batch k is on the j-th token. Only used during pretraining for partial prediction or for sequential decoding (generation).

- **token_type_ids** (`tf.Tensor` of shape `(batch_size, num_choices, sequence_length)`, *optional*) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:

  - 0 corresponds to a *sentence A* token,
  - 1 corresponds to a *sentence B* token.

  [What are token type IDs?](../glossary#token-type-ids)

- **input_mask** (`tf.Tensor` of shape `(batch_size, num_choices, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Negative of `attention_mask`, i.e. with 0 for real tokens and 1 for padding, which is kept for compatibility with the original code base.

  Mask values selected in `[0, 1]`:

  - 1 for tokens that are **masked**,
  - 0 for tokens that are **not masked**.

  You can only use one of `input_mask` and `attention_mask`.

- **head_mask** (`tf.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`:

  - 1 indicates the head is **not masked**,
  - 0 indicates the head is **masked**.

- **inputs_embeds** (`tf.Tensor` of shape `(batch_size, num_choices, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.

- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.

- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.

- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.

- **labels** (`tf.Tensor` of shape `(batch_size,)`, *optional*) — Labels for computing the multiple choice classification loss. Indices should be in `[0, ..., num_choices - 1]` where `num_choices` is the size of the second dimension of the input tensors.
(See <code>input_ids</code> above)</span></span> </li></ul> <div id="transformers.TFXLNetForMultipleChoice.call.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForMultipleChoiceOutput">transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForMultipleChoiceOutput</a> or <code>tuple(tf.Tensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForMultipleChoiceOutput">transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForMultipleChoiceOutput</a> or a tuple of <code>tf.Tensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetConfig">XLNetConfig</a>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<code>tf.Tensor</code> of shape <em>(1,)</em>, <em>optional</em>, returned when <code>labels</code> is provided) — Classification loss.</p> </li> <li> <p><strong>logits</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, num_choices)</code>) — <em>num_choices</em> is the second dimension of the input tensors. (see <em>input_ids</em> above).</p> <p>Classification scores (before SoftMax).</p> </li> <li> <p><strong>mems</strong> (<code>List[tf.Tensor]</code> of length <code>config.n_layers</code>) — Contains pre-computed hidden-states. Can be used (see <code>mems</code> input) to speed up sequential decoding. 
The token ids which have their past given to this model should not be passed as <code>input_ids</code> as they have already been computed.</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>tf.Tensor</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>tf.Tensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-1we2lfm">The <a href="/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.TFXLNetForMultipleChoice">TFXLNetForMultipleChoice</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.TFXLNetForMultipleChoice.call.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLNetForMultipleChoice.call.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path 
d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, TFXLNetForMultipleChoice <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> tensorflow <span class="hljs-keyword">as</span> tf <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"xlnet-base-cased"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = TFXLNetForMultipleChoice.from_pretrained(<span class="hljs-string">"xlnet-base-cased"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>prompt = <span class="hljs-string">"In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."</span> <span class="hljs-meta">&gt;&gt;&gt; </span>choice0 = <span class="hljs-string">"It is eaten with a fork and a knife."</span> <span class="hljs-meta">&gt;&gt;&gt; </span>choice1 = <span class="hljs-string">"It is eaten while held in the hand."</span> <span class="hljs-meta">&gt;&gt;&gt; </span>encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors=<span class="hljs-string">"tf"</span>, padding=<span class="hljs-literal">True</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = {k: tf.expand_dims(v, <span class="hljs-number">0</span>) <span class="hljs-keyword">for</span> k, v <span class="hljs-keyword">in</span> encoding.items()} <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model(inputs) <span class="hljs-comment"># batch size is 1</span> <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># the linear classifier still needs to be trained</span> <span class="hljs-meta">&gt;&gt;&gt; </span>logits = outputs.logits</pre></div></div></div></div> <h2 class="relative group"><a id="transformers.TFXLNetForTokenClassification" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLNetForTokenClassification"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span 
data-svelte-h="svelte-117stje">TFXLNetForTokenClassification</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.TFXLNetForTokenClassification"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">TFXLNetForTokenClassification</span></span></h3> <a id="transformers.TFXLNetForTokenClassification" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.TFXLNetForTokenClassification"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_tf_xlnet.py#L1535" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">*args<span class="opacity-60"></span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span 
class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLNetForTokenClassification.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLNetForTokenClassification.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetConfig">XLNetConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-jh923c">XLNet Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks.</p> <p data-svelte-h="svelte-1i0vt4o">This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel">TFPreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)</p> <p data-svelte-h="svelte-1ivrf8m">This model is also a <a href="https://www.tensorflow.org/api_docs/python/tf/keras/Model" rel="nofollow">tf.keras.Model</a> subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-1ajbfxg">TensorFlow models and layers in <code>transformers</code> accept two formats as input:</p> <ul data-svelte-h="svelte-qm1t26"><li>having all inputs as keyword arguments (like PyTorch models), or</li> <li>having all inputs as a list, tuple or dict in the first positional argument.</li></ul> <p data-svelte-h="svelte-1v9qsc5">The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like <code>model.fit()</code> things should “just work” for you - just pass your inputs and labels in any format that <code>model.fit()</code> supports! 
If, however, you want to use the second format outside of Keras methods like <code>fit()</code> and <code>predict()</code>, such as when creating your own layers or models with the Keras <code>Functional</code> API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:</p> <ul data-svelte-h="svelte-15scerc"><li>a single Tensor with <code>input_ids</code> only and nothing else: <code>model(input_ids)</code></li> <li>a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: <code>model([input_ids, attention_mask])</code> or <code>model([input_ids, attention_mask, token_type_ids])</code></li> <li>a dictionary with one or several input Tensors associated to the input names given in the docstring: <code>model({"input_ids": input_ids, "token_type_ids": token_type_ids})</code></li></ul> <p data-svelte-h="svelte-1an3odd">Note that when creating models and layers with <a href="https://keras.io/guides/making_new_layers_and_models_via_subclassing/" rel="nofollow">subclassing</a> then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function!</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.TFXLNetForTokenClassification.call"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>call</span></h4> <a id="transformers.TFXLNetForTokenClassification.call" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.TFXLNetForTokenClassification.call"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 
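The following is a minimal sketch (not from the original reference) of the three calling conventions listed above, applied to the `xlnet-base-cased` checkpoint used in the other examples on this page; the variable names are illustrative only.

```python
>>> from transformers import AutoTokenizer, TFXLNetForTokenClassification
>>> import tensorflow as tf

>>> tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
>>> model = TFXLNetForTokenClassification.from_pretrained("xlnet-base-cased")
>>> enc = tokenizer("HuggingFace is a company based in Paris and New York", return_tensors="tf")

>>> # 1. a single tensor with input_ids only
>>> out_ids = model(enc["input_ids"])
>>> # 2. a list of tensors, in the order given in the docstring
>>> out_list = model([enc["input_ids"], enc["attention_mask"]])
>>> # 3. a dictionary mapping input names to tensors
>>> out_dict = model({"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"]})
```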
#### call

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_tf_xlnet.py#L1545)

( input_ids: TFModelInputType | None = None, attention_mask: np.ndarray | tf.Tensor | None = None, mems: np.ndarray | tf.Tensor | None = None, perm_mask: np.ndarray | tf.Tensor | None = None, target_mapping: np.ndarray | tf.Tensor | None = None, token_type_ids: np.ndarray | tf.Tensor | None = None, input_mask: np.ndarray | tf.Tensor | None = None, head_mask: np.ndarray | tf.Tensor | None = None, inputs_embeds: np.ndarray | tf.Tensor | None = None, use_mems: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, labels: np.ndarray | tf.Tensor | None = None, training: bool = False ) → [transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForTokenClassificationOutput](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForTokenClassificationOutput) or `tuple(tf.Tensor)`

Parameters:

- **input_ids** (`tf.Tensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and `PreTrainedTokenizer.__call__()` for details. [What are input IDs?](../glossary#input-ids)
- **attention_mask** (`tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.
  [What are attention masks?](../glossary#attention-mask)
- **mems** (`List[tf.Tensor]` of length `config.n_layers`) — Contains pre-computed hidden-states (see `mems` output below). Can be used to speed up sequential decoding. The token ids which have their past given to this model should not be passed as `input_ids` as they have already been computed. `use_mems` has to be set to `True` to make use of `mems`.
- **perm_mask** (`tf.Tensor` of shape `(batch_size, sequence_length, sequence_length)`, *optional*) — Mask to indicate the attention pattern for each input token with values selected in `[0, 1]`:
  - if `perm_mask[k, i, j] = 0`, i attends to j in batch k;
  - if `perm_mask[k, i, j] = 1`, i does not attend to j in batch k.
  If not set, each token attends to all the others (full bidirectional attention). Only used during pretraining (to define factorization order) or for sequential decoding (generation).
- **target_mapping** (`tf.Tensor` of shape `(batch_size, num_predict, sequence_length)`, *optional*) — Mask to indicate the output tokens to use. If `target_mapping[k, i, j] = 1`, the i-th prediction in batch k is on the j-th token. Only used during pretraining for partial prediction or for sequential decoding (generation).
- **token_type_ids** (`tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:
  - 0 corresponds to a *sentence A* token,
  - 1 corresponds to a *sentence B* token.
  [What are token type IDs?](../glossary#token-type-ids)
- **input_mask** (`tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Negative of `attention_mask`, i.e. with 0 for real tokens and 1 for padding, which is kept for compatibility with the original code base. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **masked**,
  - 0 for tokens that are **not masked**.
  You can only use one of `input_mask` and `attention_mask`.
- **head_mask** (`tf.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`:
  - 1 indicates the head is **not masked**,
  - 0 indicates the head is **masked**.
- **inputs_embeds** (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.
- **labels** (`tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Labels for computing the token classification loss. Indices should be in `[0, ..., config.num_labels - 1]`.

Returns: [transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForTokenClassificationOutput](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForTokenClassificationOutput) or `tuple(tf.Tensor)`

A [TFXLNetForTokenClassificationOutput](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForTokenClassificationOutput) or a tuple of `tf.Tensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([XLNetConfig](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetConfig)) and inputs.

- **loss** (`tf.Tensor` of shape `(1,)`, *optional*, returned when `labels` is provided) — Classification loss.
- **logits** (`tf.Tensor` of shape `(batch_size, sequence_length, config.num_labels)`) — Classification scores (before SoftMax).
- **mems** (`List[tf.Tensor]` of length `config.n_layers`) — Contains pre-computed hidden-states. Can be used (see `mems` input) to speed up sequential decoding. The token ids which have their past given to this model should not be passed as `input_ids` as they have already been computed.
- **hidden_states** (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- **attentions** (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The [TFXLNetForTokenClassification](/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.TFXLNetForTokenClassification) forward method overrides the `__call__` special method.

Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example:

```python
>>> from transformers import AutoTokenizer, TFXLNetForTokenClassification
>>> import tensorflow as tf

>>> tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
>>> model = TFXLNetForTokenClassification.from_pretrained("xlnet-base-cased")

>>> inputs = tokenizer(
...     "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="tf"
... )

>>> logits = model(**inputs).logits
>>> predicted_token_class_ids = tf.math.argmax(logits, axis=-1)

>>> # Note that tokens are classified rather than input words, which means that
>>> # there might be more predicted token classes than words.
>>> # Multiple token classes might account for the same word
>>> predicted_tokens_classes = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()]
```

```python
>>> labels = predicted_token_class_ids
>>> loss = tf.math.reduce_mean(model(**inputs, labels=labels).loss)
```
text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLNetForQuestionAnsweringSimple"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1e1w6yf">TFXLNetForQuestionAnsweringSimple</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.TFXLNetForQuestionAnsweringSimple"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">TFXLNetForQuestionAnsweringSimple</span></span></h3> <a id="transformers.TFXLNetForQuestionAnsweringSimple" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.TFXLNetForQuestionAnsweringSimple"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" 
href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_tf_xlnet.py#L1615" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">*args<span class="opacity-60"></span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLNetForQuestionAnsweringSimple.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLNetForQuestionAnsweringSimple.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetConfig">XLNetConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1gmn8ay">XLNet Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute <code>span start logits</code> and <code>span end logits</code>).</p> <p data-svelte-h="svelte-1i0vt4o">This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel">TFPreTrainedModel</a>. 
Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)</p> <p data-svelte-h="svelte-1ivrf8m">This model is also a <a href="https://www.tensorflow.org/api_docs/python/tf/keras/Model" rel="nofollow">tf.keras.Model</a> subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-1ajbfxg">TensorFlow models and layers in <code>transformers</code> accept two formats as input:</p> <ul data-svelte-h="svelte-qm1t26"><li>having all inputs as keyword arguments (like PyTorch models), or</li> <li>having all inputs as a list, tuple or dict in the first positional argument.</li></ul> <p data-svelte-h="svelte-1v9qsc5">The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like <code>model.fit()</code> things should “just work” for you - just pass your inputs and labels in any format that <code>model.fit()</code> supports! If, however, you want to use the second format outside of Keras methods like <code>fit()</code> and <code>predict()</code>, such as when creating your own layers or models with the Keras <code>Functional</code> API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:</p> <ul data-svelte-h="svelte-15scerc"><li>a single Tensor with <code>input_ids</code> only and nothing else: <code>model(input_ids)</code></li> <li>a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: <code>model([input_ids, attention_mask])</code> or <code>model([input_ids, attention_mask, token_type_ids])</code></li> <li>a dictionary with one or several input Tensors associated to the input names given in the docstring: <code>model({"input_ids": input_ids, "token_type_ids": token_type_ids})</code></li></ul> <p data-svelte-h="svelte-1an3odd">Note that when creating models and layers with <a href="https://keras.io/guides/making_new_layers_and_models_via_subclassing/" rel="nofollow">subclassing</a> then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function!</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.TFXLNetForQuestionAnsweringSimple.call"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path 
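As a brief illustration of the note above (a sketch only; `model`, `input_ids` and `attention_mask` are assumed to have been created with a tokenizer and a `from_pretrained()` call as in the example further down), the same forward pass can be written with either input format:

```python
# keyword arguments, as with PyTorch models
outputs = model(input_ids=input_ids, attention_mask=attention_mask)

# all inputs gathered into the first positional argument as a list (in docstring order)
outputs = model([input_ids, attention_mask])

# or as a dictionary keyed by the input names from the docstring
outputs = model({"input_ids": input_ids, "attention_mask": attention_mask})
```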
fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>call</span></h4> <a id="transformers.TFXLNetForQuestionAnsweringSimple.call" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.TFXLNetForQuestionAnsweringSimple.call"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/xlnet/modeling_tf_xlnet.py#L1623" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60">: TFModelInputType | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">mems<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">perm_mask<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">target_mapping<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_type_ids<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span 
class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_mask<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_mask<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">inputs_embeds<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">use_mems<span class="opacity-60">: Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">start_positions<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">end_positions<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">training<span class="opacity-60">: bool = False</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForQuestionAnsweringSimpleOutput">transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForQuestionAnsweringSimpleOutput</a> or <code>tuple(tf.Tensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 14 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLNetForQuestionAnsweringSimple.call.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 
with-hover:right-full" href="#transformers.TFXLNetForQuestionAnsweringSimple.call.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>) — Indices of input sequence tokens in the vocabulary.<p></p> <p>Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p> <p><a href="../glossary#input-ids">What are input IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLNetForQuestionAnsweringSimple.call.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLNetForQuestionAnsweringSimple.call.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLNetForQuestionAnsweringSimple.call.mems" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLNetForQuestionAnsweringSimple.call.mems"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>mems</strong> (<code>List[torch.FloatTensor]</code> of length <code>config.n_layers</code>) — Contains pre-computed hidden-states (see <code>mems</code> output below) . Can be used to speed up sequential decoding. The token ids which have their past given to this model should not be passed as <code>input_ids</code> as they have already been computed.<p></p> <p><code>use_mems</code> has to be set to <code>True</code> to make use of <code>mems</code>.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLNetForQuestionAnsweringSimple.call.perm_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLNetForQuestionAnsweringSimple.call.perm_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>perm_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, sequence_length)</code>, <em>optional</em>) — Mask to indicate the attention pattern for each input token with values selected in <code>[0, 1]</code>:<p></p> <ul> <li>if <code>perm_mask[k, i, j] = 0</code>, i attend to j in batch k;</li> <li>if <code>perm_mask[k, i, j] = 1</code>, i does not attend to j in batch k.</li> </ul> <p>If 
not set, each token attends to all the others (full bidirectional attention). Only used during pretraining (to define factorization order) or for sequential decoding (generation).</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLNetForQuestionAnsweringSimple.call.target_mapping" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLNetForQuestionAnsweringSimple.call.target_mapping"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>target_mapping</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, num_predict, sequence_length)</code>, <em>optional</em>) — Mask to indicate the output tokens to use. If <code>target_mapping[k, i, j] = 1</code>, the i-th predict in batch k is on the j-th token. Only used during pretraining for partial prediction or for sequential decoding (generation).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLNetForQuestionAnsweringSimple.call.token_type_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLNetForQuestionAnsweringSimple.call.token_type_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_type_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Segment token indices to indicate first and second portions of the inputs. 
Indices are selected in <code>[0, 1]</code>:<p></p> <ul> <li>0 corresponds to a <em>sentence A</em> token,</li> <li>1 corresponds to a <em>sentence B</em> token.</li> </ul> <p><a href="../glossary#token-type-ids">What are token type IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLNetForQuestionAnsweringSimple.call.input_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLNetForQuestionAnsweringSimple.call.input_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_mask</strong> (<code>torch.FloatTensor</code> of shape <code>batch_size, sequence_length</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. Negative of <code>attention_mask</code>, i.e. with 0 for real tokens and 1 for padding which is kept for compatibility with the original code base.<p></p> <p>Mask values selected in <code>[0, 1]</code>:</p> <ul> <li>1 for tokens that are <strong>masked</strong>,</li> <li>0 for tokens that are <strong>not masked</strong>.</li> </ul> <p>You can only uses one of <code>input_mask</code> and <code>attention_mask</code>.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLNetForQuestionAnsweringSimple.call.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLNetForQuestionAnsweringSimple.call.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>head_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(num_heads,)</code> or <code>(num_layers, num_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the self-attention modules. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLNetForQuestionAnsweringSimple.call.inputs_embeds" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLNetForQuestionAnsweringSimple.call.inputs_embeds"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>inputs_embeds</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>, <em>optional</em>) — Optionally, instead of passing <code>input_ids</code> you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert <code>input_ids</code> indices into associated vectors than the model’s internal embedding lookup matrix.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLNetForQuestionAnsweringSimple.call.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLNetForQuestionAnsweringSimple.call.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. 
See <code>attentions</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLNetForQuestionAnsweringSimple.call.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLNetForQuestionAnsweringSimple.call.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. See <code>hidden_states</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLNetForQuestionAnsweringSimple.call.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLNetForQuestionAnsweringSimple.call.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLNetForQuestionAnsweringSimple.call.start_positions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLNetForQuestionAnsweringSimple.call.start_positions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" 
preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>start_positions</strong> (<code>tf.Tensor</code> of shape <code>(batch_size,)</code>, <em>optional</em>) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (<code>sequence_length</code>). Position outside of the sequence are not taken into account for computing the loss.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFXLNetForQuestionAnsweringSimple.call.end_positions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLNetForQuestionAnsweringSimple.call.end_positions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>end_positions</strong> (<code>tf.Tensor</code> of shape <code>(batch_size,)</code>, <em>optional</em>) — Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (<code>sequence_length</code>). 
Position outside of the sequence are not taken into account for computing the loss.</span></span> </li></ul> <div id="transformers.TFXLNetForQuestionAnsweringSimple.call.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForQuestionAnsweringSimpleOutput">transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForQuestionAnsweringSimpleOutput</a> or <code>tuple(tf.Tensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForQuestionAnsweringSimpleOutput">transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForQuestionAnsweringSimpleOutput</a> or a tuple of <code>tf.Tensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.XLNetConfig">XLNetConfig</a>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<code>tf.Tensor</code> of shape <code>(1,)</code>, <em>optional</em>, returned when <code>labels</code> is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.</p> </li> <li> <p><strong>start_logits</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, sequence_length,)</code>) — Span-start scores (before SoftMax).</p> </li> <li> <p><strong>end_logits</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, sequence_length,)</code>) — Span-end scores (before SoftMax).</p> </li> <li> <p><strong>mems</strong> (<code>List[tf.Tensor]</code> of length <code>config.n_layers</code>) — Contains pre-computed hidden-states. Can be used (see <code>mems</code> input) to speed up sequential decoding. 
The token ids which have their past given to this model should not be passed as <code>input_ids</code> as they have already been computed.</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>tf.Tensor</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>tf.Tensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-1gxampo">The <a href="/docs/transformers/v4.34.0/en/model_doc/xlnet#transformers.TFXLNetForQuestionAnsweringSimple">TFXLNetForQuestionAnsweringSimple</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.TFXLNetForQuestionAnsweringSimple.call.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLNetForQuestionAnsweringSimple.call.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid 
meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, TFXLNetForQuestionAnsweringSimple <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> tensorflow <span class="hljs-keyword">as</span> tf <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"xlnet-base-cased"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = TFXLNetForQuestionAnsweringSimple.from_pretrained(<span class="hljs-string">"xlnet-base-cased"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>question, text = <span class="hljs-string">"Who was Jim Henson?"</span>, <span class="hljs-string">"Jim Henson was a nice puppet"</span> <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(question, text, return_tensors=<span class="hljs-string">"tf"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model(**inputs) <span class="hljs-meta">&gt;&gt;&gt; </span>answer_start_index = <span class="hljs-built_in">int</span>(tf.math.argmax(outputs.start_logits, axis=-<span class="hljs-number">1</span>)[<span class="hljs-number">0</span>]) <span class="hljs-meta">&gt;&gt;&gt; </span>answer_end_index = <span class="hljs-built_in">int</span>(tf.math.argmax(outputs.end_logits, axis=-<span class="hljs-number">1</span>)[<span class="hljs-number">0</span>]) <span class="hljs-meta">&gt;&gt;&gt; </span>predict_answer_tokens = inputs.input_ids[<span class="hljs-number">0</span>, answer_start_index : answer_end_index + <span class="hljs-number">1</span>]</pre></div></div> <div class="relative group rounded-md"><a id="transformers.TFXLNetForQuestionAnsweringSimple.call.example-2" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFXLNetForQuestionAnsweringSimple.call.example-2"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <div class="code-block relative"><div 
class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># target is "nice puppet"</span> <span class="hljs-meta">&gt;&gt;&gt; </span>target_start_index = tf.constant([<span class="hljs-number">14</span>]) <span class="hljs-meta">&gt;&gt;&gt; </span>target_end_index = tf.constant([<span class="hljs-number">15</span>]) <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index) <span class="hljs-meta">&gt;&gt;&gt; </span>loss = tf.math.reduce_mean(outputs.loss)</pre></div></div></div></div> <p></p> <div id="svelte-announcer" aria-live="assertive" aria-atomic="true" style="position: absolute; left: 0px; top: 0px; clip: rect(0px, 0px, 0px, 0px); clip-path: inset(50%); overflow: hidden; white-space: nowrap; width: 1px; height: 1px;"></div></div> <div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/xlm-v" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>XLM-V</a> <a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/yoso" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">YOSO<span class="ml-2 translate-y-px">→</span></a></div></div></div> <div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" data-props="{&quot;chapter&quot;:{&quot;title&quot;:&quot;XLNet&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;xlnet&quot;,&quot;url&quot;:&quot;#xlnet&quot;,&quot;sections&quot;:[{&quot;title&quot;:&quot;Overview&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;overview&quot;,&quot;url&quot;:&quot;#overview&quot;},{&quot;title&quot;:&quot;Documentation 
2023-10-05T13:33:39.548Z
YOSO
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/yoso
# YOSO ## Overview The YOSO model was proposed in [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh. YOSO approximates standard softmax self-attention via a Bernoulli sampling scheme based on Locality Sensitive Hashing (LSH). In principle, all the Bernoulli random variables can be sampled with a single hash. The abstract from the paper is the following: _Transformer-based models are widely used in natural language processing (NLP). Central to the transformer model is the self-attention mechanism, which captures the interactions of token pairs in the input sequences and depends quadratically on the sequence length. Training such models on longer sequences is expensive. In this paper, we show that a Bernoulli sampling attention mechanism based on Locality Sensitive Hashing (LSH), decreases the quadratic complexity of such models to linear. We bypass the quadratic cost by considering self-attention as a sum of individual tokens associated with Bernoulli random variables that can, in principle, be sampled at once by a single hash (although in practice, this number may be a small constant). This leads to an efficient sampling scheme to estimate self-attention which relies on specific modifications of LSH (to enable deployment on GPU architectures). We evaluate our algorithm on the GLUE benchmark with standard 512 sequence length where we see favorable performance relative to a standard pretrained Transformer. On the Long Range Arena (LRA) benchmark, for evaluating performance on long sequences, our method achieves results consistent with softmax self-attention but with sizable speed-ups and memory savings and often outperforms other efficient self-attention methods. Our code is available at this https URL_ Tips: - The YOSO attention algorithm is implemented through custom CUDA kernels, functions written in CUDA C++ that can be executed multiple times in parallel on a GPU. - The kernels provide a `fast_hash` function, which approximates the random projections of the queries and keys using the Fast Hadamard Transform. Using these hash codes, the `lsh_cumulation` function approximates self-attention via LSH-based Bernoulli sampling. - To use the custom kernels, the user should set `config.use_expectation = False`. To ensure that the kernels are compiled successfully, the user must install the correct version of PyTorch and cudatoolkit. By default, `config.use_expectation = True`, which uses YOSO-E and does not require compiling CUDA kernels. ![drawing](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/yoso_architecture.jpg) YOSO Attention Algorithm. Taken from the [original paper](https://arxiv.org/abs/2111.09714). This model was contributed by [novice03](https://huggingface.co/novice03). The original code can be found [here](https://github.com/mlpen/YOSO). 
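The kernel switch described in the tips above is just a configuration flag. Below is a minimal sketch of how it might be toggled, assuming a CUDA-capable machine with a matching PyTorch/cudatoolkit installation (only needed for the `use_expectation = False` path):

```
>>> from transformers import YosoConfig, YosoModel

>>> # Default configuration: use_expectation=True (YOSO-E), no custom CUDA kernels required
>>> config = YosoConfig()
>>> config.use_expectation
True

>>> # Switch to the LSH-based sampling kernels; the custom CUDA kernels must compile for this to work
>>> model = YosoModel.from_pretrained("uw-madison/yoso-4096", use_expectation=False)
```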
## Documentation resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice) ## YosoConfig ### class transformers.YosoConfig [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yoso/configuration_yoso.py#L29) ( vocab\_size = 50265, hidden\_size = 768, num\_hidden\_layers = 12, num\_attention\_heads = 12, intermediate\_size = 3072, hidden\_act = 'gelu', hidden\_dropout\_prob = 0.1, attention\_probs\_dropout\_prob = 0.1, max\_position\_embeddings = 4096, type\_vocab\_size = 1, initializer\_range = 0.02, layer\_norm\_eps = 1e-12, position\_embedding\_type = 'absolute', use\_expectation = True, hash\_code\_len = 9, num\_hash = 64, conv\_window = None, use\_fast\_hash = True, lsh\_backward = True, pad\_token\_id = 1, bos\_token\_id = 0, eos\_token\_id = 2, \*\*kwargs ) This is the configuration class to store the configuration of a [YosoModel](/docs/transformers/v4.34.0/en/model_doc/yoso#transformers.YosoModel). It is used to instantiate a YOSO model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the YOSO [uw-madison/yoso-4096](https://huggingface.co/uw-madison/yoso-4096) architecture. Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information. Example:

```
>>> from transformers import YosoConfig, YosoModel

>>> # Initializing a YOSO uw-madison/yoso-4096 style configuration
>>> configuration = YosoConfig()

>>> # Initializing a model (with random weights) from the configuration
>>> model = YosoModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```

## YosoModel ### class transformers.YosoModel [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yoso/modeling_yoso.py#L741) ( config ) Parameters - **config** ([YosoConfig](/docs/transformers/v4.34.0/en/model_doc/yoso#transformers.YosoConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. The bare YOSO Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
#### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yoso/modeling_yoso.py#L766) ( input\_ids: typing.Optional\[torch.Tensor\] = None, attention\_mask: typing.Optional\[torch.Tensor\] = None, token\_type\_ids: typing.Optional\[torch.Tensor\] = None, position\_ids: typing.Optional\[torch.Tensor\] = None, head\_mask: typing.Optional\[torch.Tensor\] = None, inputs\_embeds: typing.Optional\[torch.Tensor\] = None, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.BaseModelOutputWithCrossAttentions](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutputWithCrossAttentions) or `tuple(torch.FloatTensor)` The [YosoModel](/docs/transformers/v4.34.0/en/model_doc/yoso#transformers.YosoModel) forward method, overrides the `__call__` special method. Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them. Example:

```
>>> from transformers import AutoTokenizer, YosoModel
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("uw-madison/yoso-4096")
>>> model = YosoModel.from_pretrained("uw-madison/yoso-4096")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)

>>> last_hidden_states = outputs.last_hidden_state
```

## YosoForMaskedLM ### class transformers.YosoForMaskedLM [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yoso/modeling_yoso.py#L853) ( config ) Parameters - **config** ([YosoConfig](/docs/transformers/v4.34.0/en/model_doc/yoso#transformers.YosoConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. YOSO Model with a `language modeling` head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yoso/modeling_yoso.py#L871) ( input\_ids: typing.Optional\[torch.Tensor\] = None, attention\_mask: typing.Optional\[torch.Tensor\] = None, token\_type\_ids: typing.Optional\[torch.Tensor\] = None, position\_ids: typing.Optional\[torch.Tensor\] = None, head\_mask: typing.Optional\[torch.Tensor\] = None, inputs\_embeds: typing.Optional\[torch.Tensor\] = None, labels: typing.Optional\[torch.Tensor\] = None, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.MaskedLMOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.MaskedLMOutput) or `tuple(torch.FloatTensor)` The [YosoForMaskedLM](/docs/transformers/v4.34.0/en/model_doc/yoso#transformers.YosoForMaskedLM) forward method, overrides the `__call__` special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them. Example:

```
>>> from transformers import AutoTokenizer, YosoForMaskedLM
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("uw-madison/yoso-4096")
>>> model = YosoForMaskedLM.from_pretrained("uw-madison/yoso-4096")

>>> inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> # retrieve index of [MASK]
>>> mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
>>> predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)

>>> labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]

>>> # mask labels of non-[MASK] tokens
>>> labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)

>>> outputs = model(**inputs, labels=labels)
```

## YosoForSequenceClassification ### class transformers.YosoForSequenceClassification [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yoso/modeling_yoso.py#L956) ( config ) Parameters - **config** ([YosoConfig](/docs/transformers/v4.34.0/en/model_doc/yoso#transformers.YosoConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. YOSO Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yoso/modeling_yoso.py#L966) ( input\_ids: typing.Optional\[torch.Tensor\] = None, attention\_mask: typing.Optional\[torch.Tensor\] = None, token\_type\_ids: typing.Optional\[torch.Tensor\] = None, position\_ids: typing.Optional\[torch.Tensor\] = None, head\_mask: typing.Optional\[torch.Tensor\] = None, inputs\_embeds: typing.Optional\[torch.Tensor\] = None, labels: typing.Optional\[torch.Tensor\] = None, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.SequenceClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput) or `tuple(torch.FloatTensor)` The [YosoForSequenceClassification](/docs/transformers/v4.34.0/en/model_doc/yoso#transformers.YosoForSequenceClassification) forward method, overrides the `__call__` special method. Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example of single-label classification:

```
>>> import torch
>>> from transformers import AutoTokenizer, YosoForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("uw-madison/yoso-4096")
>>> model = YosoForSequenceClassification.from_pretrained("uw-madison/yoso-4096")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_class_id = logits.argmax().item()

>>> # To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
>>> num_labels = len(model.config.id2label)
>>> model = YosoForSequenceClassification.from_pretrained("uw-madison/yoso-4096", num_labels=num_labels)

>>> labels = torch.tensor([1])
>>> loss = model(**inputs, labels=labels).loss
```

Example of multi-label classification:

```
>>> import torch
>>> from transformers import AutoTokenizer, YosoForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("uw-madison/yoso-4096")
>>> model = YosoForSequenceClassification.from_pretrained("uw-madison/yoso-4096", problem_type="multi_label_classification")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]

>>> # To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
>>> num_labels = len(model.config.id2label)
>>> model = YosoForSequenceClassification.from_pretrained(
...     "uw-madison/yoso-4096", num_labels=num_labels, problem_type="multi_label_classification"
... )

>>> labels = torch.sum(
...     torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)
>>> loss = model(**inputs, labels=labels).loss
```

## YosoForMultipleChoice ### class transformers.YosoForMultipleChoice [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yoso/modeling_yoso.py#L1047) ( config ) Parameters - **config** ([YosoConfig](/docs/transformers/v4.34.0/en/model_doc/yoso#transformers.YosoConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. YOSO Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
#### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yoso/modeling_yoso.py#L1058) ( input\_ids: typing.Optional\[torch.Tensor\] = None, attention\_mask: typing.Optional\[torch.Tensor\] = None, token\_type\_ids: typing.Optional\[torch.Tensor\] = None, position\_ids: typing.Optional\[torch.Tensor\] = None, head\_mask: typing.Optional\[torch.Tensor\] = None, inputs\_embeds: typing.Optional\[torch.Tensor\] = None, labels: typing.Optional\[torch.Tensor\] = None, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.MultipleChoiceModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.MultipleChoiceModelOutput) or `tuple(torch.FloatTensor)` The [YosoForMultipleChoice](/docs/transformers/v4.34.0/en/model_doc/yoso#transformers.YosoForMultipleChoice) forward method, overrides the `__call__` special method. Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them. Example:

```
>>> from transformers import AutoTokenizer, YosoForMultipleChoice
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("uw-madison/yoso-4096")
>>> model = YosoForMultipleChoice.from_pretrained("uw-madison/yoso-4096")

>>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
>>> choice0 = "It is eaten with a fork and a knife."
>>> choice1 = "It is eaten while held in the hand."
>>> labels = torch.tensor(0).unsqueeze(0)  # label index 0 (choice0), with a batch dimension of 1

>>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
>>> outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels)  # batch size is 1

>>> # the linear classifier still needs to be trained
>>> loss = outputs.loss
>>> logits = outputs.logits
```

## YosoForTokenClassification ### class transformers.YosoForTokenClassification [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yoso/modeling_yoso.py#L1138) ( config ) Parameters - **config** ([YosoConfig](/docs/transformers/v4.34.0/en/model_doc/yoso#transformers.YosoConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. YOSO Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
#### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yoso/modeling_yoso.py#L1150) ( input\_ids: typing.Optional\[torch.Tensor\] = None, attention\_mask: typing.Optional\[torch.Tensor\] = None, token\_type\_ids: typing.Optional\[torch.Tensor\] = None, position\_ids: typing.Optional\[torch.Tensor\] = None, head\_mask: typing.Optional\[torch.Tensor\] = None, inputs\_embeds: typing.Optional\[torch.Tensor\] = None, labels: typing.Optional\[torch.Tensor\] = None, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.TokenClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.TokenClassifierOutput) or `tuple(torch.FloatTensor)` The [YosoForTokenClassification](/docs/transformers/v4.34.0/en/model_doc/yoso#transformers.YosoForTokenClassification) forward method, overrides the `__call__` special method. Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them. Example:

```
>>> from transformers import AutoTokenizer, YosoForTokenClassification
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("uw-madison/yoso-4096")
>>> model = YosoForTokenClassification.from_pretrained("uw-madison/yoso-4096")

>>> inputs = tokenizer(
...     "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
... )

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_token_class_ids = logits.argmax(-1)

>>> # Note that tokens are classified rather than input words, which means that
>>> # there might be more predicted token classes than words.
>>> # Multiple token classes might account for the same word
>>> predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]

>>> labels = predicted_token_class_ids
>>> loss = model(**inputs, labels=labels).loss
```

## YosoForQuestionAnswering ### class transformers.YosoForQuestionAnswering [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yoso/modeling_yoso.py#L1223) ( config ) Parameters - **config** ([YosoConfig](/docs/transformers/v4.34.0/en/model_doc/yoso#transformers.YosoConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. YOSO Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers on top of the hidden-states output to compute `span start logits` and `span end logits`). This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
#### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yoso/modeling_yoso.py#L1236) ( input\_ids: typing.Optional\[torch.Tensor\] = None, attention\_mask: typing.Optional\[torch.Tensor\] = None, token\_type\_ids: typing.Optional\[torch.Tensor\] = None, position\_ids: typing.Optional\[torch.Tensor\] = None, head\_mask: typing.Optional\[torch.Tensor\] = None, inputs\_embeds: typing.Optional\[torch.Tensor\] = None, start\_positions: typing.Optional\[torch.Tensor\] = None, end\_positions: typing.Optional\[torch.Tensor\] = None, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.QuestionAnsweringModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.QuestionAnsweringModelOutput) or `tuple(torch.FloatTensor)` The [YosoForQuestionAnswering](/docs/transformers/v4.34.0/en/model_doc/yoso#transformers.YosoForQuestionAnswering) forward method, overrides the `__call__` special method. Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them. Example:

```
>>> from transformers import AutoTokenizer, YosoForQuestionAnswering
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("uw-madison/yoso-4096")
>>> model = YosoForQuestionAnswering.from_pretrained("uw-madison/yoso-4096")

>>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"

>>> inputs = tokenizer(question, text, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> answer_start_index = outputs.start_logits.argmax()
>>> answer_end_index = outputs.end_logits.argmax()

>>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]

>>> # target is "nice puppet"
>>> target_start_index = torch.tensor([14])
>>> target_end_index = torch.tensor([15])

>>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
>>> loss = outputs.loss
```
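As a small follow-up to the example above (not part of the original snippet), the predicted span can be decoded back to a string with the tokenizer; note that this base checkpoint has no fine-tuned question-answering head, so the decoded span is not expected to be a meaningful answer:

```
>>> # decode the predicted token span back to text
>>> tokenizer.decode(predict_answer_tokens, skip_special_tokens=True)
```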
Small&quot;,&quot;id&quot;:&quot;model_doc/blenderbot-small&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot-small&quot;},{&quot;title&quot;:&quot;BLOOM&quot;,&quot;id&quot;:&quot;model_doc/bloom&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bloom&quot;},{&quot;title&quot;:&quot;BORT&quot;,&quot;id&quot;:&quot;model_doc/bort&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bort&quot;},{&quot;title&quot;:&quot;ByT5&quot;,&quot;id&quot;:&quot;model_doc/byt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/byt5&quot;},{&quot;title&quot;:&quot;CamemBERT&quot;,&quot;id&quot;:&quot;model_doc/camembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/camembert&quot;},{&quot;title&quot;:&quot;CANINE&quot;,&quot;id&quot;:&quot;model_doc/canine&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/canine&quot;},{&quot;title&quot;:&quot;CodeGen&quot;,&quot;id&quot;:&quot;model_doc/codegen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/codegen&quot;},{&quot;title&quot;:&quot;CodeLlama&quot;,&quot;id&quot;:&quot;model_doc/code_llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/code_llama&quot;},{&quot;title&quot;:&quot;ConvBERT&quot;,&quot;id&quot;:&quot;model_doc/convbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convbert&quot;},{&quot;title&quot;:&quot;CPM&quot;,&quot;id&quot;:&quot;model_doc/cpm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpm&quot;},{&quot;title&quot;:&quot;CPMANT&quot;,&quot;id&quot;:&quot;model_doc/cpmant&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpmant&quot;},{&quot;title&quot;:&quot;CTRL&quot;,&quot;id&quot;:&quot;model_doc/ctrl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ctrl&quot;},{&quot;title&quot;:&quot;DeBERTa&quot;,&quot;id&quot;:&quot;model_doc/deberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta&quot;},{&quot;title&quot;:&quot;DeBERTa-v2&quot;,&quot;id&quot;:&quot;model_doc/deberta-v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta-v2&quot;},{&quot;title&quot;:&quot;DialoGPT&quot;,&quot;id&quot;:&quot;model_doc/dialogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dialogpt&quot;},{&quot;title&quot;:&quot;DistilBERT&quot;,&quot;id&quot;:&quot;model_doc/distilbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/distilbert&quot;},{&quot;title&quot;:&quot;DPR&quot;,&quot;id&quot;:&quot;model_doc/dpr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpr&quot;},{&quot;title&quot;:&quot;ELECTRA&quot;,&quot;id&quot;:&quot;model_doc/electra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/electra&quot;},{&quot;title&quot;:&quot;Encoder Decoder 
Models&quot;,&quot;id&quot;:&quot;model_doc/encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encoder-decoder&quot;},{&quot;title&quot;:&quot;ERNIE&quot;,&quot;id&quot;:&quot;model_doc/ernie&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie&quot;},{&quot;title&quot;:&quot;ErnieM&quot;,&quot;id&quot;:&quot;model_doc/ernie_m&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie_m&quot;},{&quot;title&quot;:&quot;ESM&quot;,&quot;id&quot;:&quot;model_doc/esm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/esm&quot;},{&quot;title&quot;:&quot;Falcon&quot;,&quot;id&quot;:&quot;model_doc/falcon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/falcon&quot;},{&quot;title&quot;:&quot;FLAN-T5&quot;,&quot;id&quot;:&quot;model_doc/flan-t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-t5&quot;},{&quot;title&quot;:&quot;FLAN-UL2&quot;,&quot;id&quot;:&quot;model_doc/flan-ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-ul2&quot;},{&quot;title&quot;:&quot;FlauBERT&quot;,&quot;id&quot;:&quot;model_doc/flaubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flaubert&quot;},{&quot;title&quot;:&quot;FNet&quot;,&quot;id&quot;:&quot;model_doc/fnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fnet&quot;},{&quot;title&quot;:&quot;FSMT&quot;,&quot;id&quot;:&quot;model_doc/fsmt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fsmt&quot;},{&quot;title&quot;:&quot;Funnel Transformer&quot;,&quot;id&quot;:&quot;model_doc/funnel&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/funnel&quot;},{&quot;title&quot;:&quot;GPT&quot;,&quot;id&quot;:&quot;model_doc/openai-gpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/openai-gpt&quot;},{&quot;title&quot;:&quot;GPT Neo&quot;,&quot;id&quot;:&quot;model_doc/gpt_neo&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neo&quot;},{&quot;title&quot;:&quot;GPT NeoX&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox&quot;},{&quot;title&quot;:&quot;GPT NeoX Japanese&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox_japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese&quot;},{&quot;title&quot;:&quot;GPT-J&quot;,&quot;id&quot;:&quot;model_doc/gptj&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptj&quot;},{&quot;title&quot;:&quot;GPT2&quot;,&quot;id&quot;:&quot;model_doc/gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt2&quot;},{&quot;title&quot;:&quot;GPTBigCode&quot;,&quot;id&quot;:&quot;model_doc/gpt_bigcode&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode&quot;},{&quot;title&quot;:&quot;GPTSAN 
Japanese&quot;,&quot;id&quot;:&quot;model_doc/gptsan-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese&quot;},{&quot;title&quot;:&quot;GPTSw3&quot;,&quot;id&quot;:&quot;model_doc/gpt-sw3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt-sw3&quot;},{&quot;title&quot;:&quot;HerBERT&quot;,&quot;id&quot;:&quot;model_doc/herbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/herbert&quot;},{&quot;title&quot;:&quot;I-BERT&quot;,&quot;id&quot;:&quot;model_doc/ibert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ibert&quot;},{&quot;title&quot;:&quot;Jukebox&quot;,&quot;id&quot;:&quot;model_doc/jukebox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/jukebox&quot;},{&quot;title&quot;:&quot;LED&quot;,&quot;id&quot;:&quot;model_doc/led&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/led&quot;},{&quot;title&quot;:&quot;LLaMA&quot;,&quot;id&quot;:&quot;model_doc/llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama&quot;},{&quot;title&quot;:&quot;Llama2&quot;,&quot;id&quot;:&quot;model_doc/llama2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama2&quot;},{&quot;title&quot;:&quot;Longformer&quot;,&quot;id&quot;:&quot;model_doc/longformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longformer&quot;},{&quot;title&quot;:&quot;LongT5&quot;,&quot;id&quot;:&quot;model_doc/longt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longt5&quot;},{&quot;title&quot;:&quot;LUKE&quot;,&quot;id&quot;:&quot;model_doc/luke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/luke&quot;},{&quot;title&quot;:&quot;M2M100&quot;,&quot;id&quot;:&quot;model_doc/m2m_100&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/m2m_100&quot;},{&quot;title&quot;:&quot;MarianMT&quot;,&quot;id&quot;:&quot;model_doc/marian&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/marian&quot;},{&quot;title&quot;:&quot;MarkupLM&quot;,&quot;id&quot;:&quot;model_doc/markuplm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/markuplm&quot;},{&quot;title&quot;:&quot;MBart and 
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot;PLBart&quot;,&quot;id&quot;
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/
model_doc/wavlm&quot;},{&quot;title&quot;:&quot;Whisper&quot;,&quot;id&quot;:&quot;model_doc/whisper&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/whisper&quot;},{&quot;title&quot;:&quot;XLS-R&quot;,&quot;id&quot;:&quot;model_doc/xls_r&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xls_r&quot;},{&quot;title&quot;:&quot;XLSR-Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/xlsr_wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2&quot;}]},{&quot;title&quot;:&quot;Multimodal models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALIGN&quot;,&quot;id&quot;:&quot;model_doc/align&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/align&quot;},{&quot;title&quot;:&quot;AltCLIP&quot;,&quot;id&quot;:&quot;model_doc/altclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/altclip&quot;},{&quot;title&quot;:&quot;BLIP&quot;,&quot;id&quot;:&quot;model_doc/blip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip&quot;},{&quot;title&quot;:&quot;BLIP-2&quot;,&quot;id&quot;:&quot;model_doc/blip-2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip-2&quot;},{&quot;title&quot;:&quot;BridgeTower&quot;,&quot;id&quot;:&quot;model_doc/bridgetower&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bridgetower&quot;},{&quot;title&quot;:&quot;BROS&quot;,&quot;id&quot;:&quot;model_doc/bros&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bros&quot;},{&quot;title&quot;:&quot;Chinese-CLIP&quot;,&quot;id&quot;:&quot;model_doc/chinese_clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/chinese_clip&quot;},{&quot;title&quot;:&quot;CLIP&quot;,&quot;id&quot;:&quot;model_doc/clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clip&quot;},{&quot;title&quot;:&quot;CLIPSeg&quot;,&quot;id&quot;:&quot;model_doc/clipseg&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clipseg&quot;},{&quot;title&quot;:&quot;Data2Vec&quot;,&quot;id&quot;:&quot;model_doc/data2vec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/data2vec&quot;},{&quot;title&quot;:&quot;DePlot&quot;,&quot;id&quot;:&quot;model_doc/deplot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deplot&quot;},{&quot;title&quot;:&quot;Donut&quot;,&quot;id&quot;:&quot;model_doc/donut&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/donut&quot;},{&quot;title&quot;:&quot;FLAVA&quot;,&quot;id&quot;:&quot;model_doc/flava&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flava&quot;},{&quot;title&quot;:&quot;GIT&quot;,&quot;id&quot;:&quot;model_doc/git&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/git&quot;},{&quot;title&quot;:&quot;GroupViT&quot;,&quot;id&quot;:&quot;model_doc/groupvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/groupvit&quot;},{&quot;title&quot;:&quot;IDEFICS&quot;,&quot;id&quot;:&quot;model_doc/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/idefics&quot;},{&quot;title&quot;:&quot;InstructBLIP&quot;,&quot;id&quot;:&quot;model_doc/instructblip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/instructblip&quot;},{&quot;title&quot;:&quot;LayoutLM&quot;,&quot;id&quot;:&quot;model_doc/layoutlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlm&quot;},{&quot;title&quot;:&quot;LayoutLMV2&quot;,&quot;i
d&quot;:&quot;model_doc/layoutlmv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv2&quot;},{&quot;title&quot;:&quot;LayoutLMV3&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv3&quot;},{&quot;title&quot;:&quot;LayoutXLM&quot;,&quot;id&quot;:&quot;model_doc/layoutxlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutxlm&quot;},{&quot;title&quot;:&quot;LiLT&quot;,&quot;id&quot;:&quot;model_doc/lilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lilt&quot;},{&quot;title&quot;:&quot;LXMERT&quot;,&quot;id&quot;:&quot;model_doc/lxmert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lxmert&quot;},{&quot;title&quot;:&quot;MatCha&quot;,&quot;id&quot;:&quot;model_doc/matcha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/matcha&quot;},{&quot;title&quot;:&quot;MGP-STR&quot;,&quot;id&quot;:&quot;model_doc/mgp-str&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mgp-str&quot;},{&quot;title&quot;:&quot;Nougat&quot;,&quot;id&quot;:&quot;model_doc/nougat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nougat&quot;},{&quot;title&quot;:&quot;OneFormer&quot;,&quot;id&quot;:&quot;model_doc/oneformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/oneformer&quot;},{&quot;title&quot;:&quot;OWL-ViT&quot;,&quot;id&quot;:&quot;model_doc/owlvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/owlvit&quot;},{&quot;title&quot;:&quot;Perceiver&quot;,&quot;id&quot;:&quot;model_doc/perceiver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/perceiver&quot;},{&quot;title&quot;:&quot;Pix2Struct&quot;,&quot;id&quot;:&quot;model_doc/pix2struct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pix2struct&quot;},{&quot;title&quot;:&quot;Segment Anything&quot;,&quot;id&quot;:&quot;model_doc/sam&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sam&quot;},{&quot;title&quot;:&quot;Speech Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/speech-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder&quot;},{&quot;title&quot;:&quot;TAPAS&quot;,&quot;id&quot;:&quot;model_doc/tapas&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapas&quot;},{&quot;title&quot;:&quot;TrOCR&quot;,&quot;id&quot;:&quot;model_doc/trocr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trocr&quot;},{&quot;title&quot;:&quot;TVLT&quot;,&quot;id&quot;:&quot;model_doc/tvlt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tvlt&quot;},{&quot;title&quot;:&quot;ViLT&quot;,&quot;id&quot;:&quot;model_doc/vilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vilt&quot;},{&quot;title&quot;:&quot;Vision Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/vision-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder&quot;},{&quot;title&quot;:&quot;Vision Text Dual 
Encoder&quot;,&quot;id&quot;:&quot;model_doc/vision-text-dual-encoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder&quot;},{&quot;title&quot;:&quot;VisualBERT&quot;,&quot;id&quot;:&quot;model_doc/visual_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/visual_bert&quot;},{&quot;title&quot;:&quot;X-CLIP&quot;,&quot;id&quot;:&quot;model_doc/xclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xclip&quot;}]},{&quot;title&quot;:&quot;Reinforcement learning models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Decision Transformer&quot;,&quot;id&quot;:&quot;model_doc/decision_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/decision_transformer&quot;},{&quot;title&quot;:&quot;Trajectory Transformer&quot;,&quot;id&quot;:&quot;model_doc/trajectory_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer&quot;}]},{&quot;title&quot;:&quot;Time series models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Autoformer&quot;,&quot;id&quot;:&quot;model_doc/autoformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/autoformer&quot;},{&quot;title&quot;:&quot;Informer&quot;,&quot;id&quot;:&quot;model_doc/informer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/informer&quot;},{&quot;title&quot;:&quot;Time Series Transformer&quot;,&quot;id&quot;:&quot;model_doc/time_series_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/time_series_transformer&quot;}]},{&quot;title&quot;:&quot;Graph models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Graphormer&quot;,&quot;id&quot;:&quot;model_doc/graphormer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/graphormer&quot;}]}]},{&quot;title&quot;:&quot;Internal Helpers&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Custom Layers and Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/modeling_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/modeling_utils&quot;},{&quot;title&quot;:&quot;Utilities for pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/pipelines_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/pipelines_utils&quot;},{&quot;title&quot;:&quot;Utilities for Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/tokenization_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/tokenization_utils&quot;},{&quot;title&quot;:&quot;Utilities for Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/trainer_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/trainer_utils&quot;},{&quot;title&quot;:&quot;Utilities for Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/generation_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/generation_utils&quot;},{&quot;title&quot;:&quot;Utilities for Image Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/image_processing_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/image_processing_utils&quot;},{&quot;title&quot;:&quot;Utilities for Audio 
processing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/audio_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/audio_utils&quot;},{&quot;title&quot;:&quot;General Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/file_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/file_utils&quot;},{&quot;title&quot;:&quot;Utilities for Time Series&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/time_series_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/time_series_utils&quot;}]}]}],&quot;chapterId&quot;:&quot;model_doc/yoso&quot;,&quot;docType&quot;:&quot;docs&quot;,&quot;isLoggedIn&quot;:false,&quot;lang&quot;:&quot;en&quot;,&quot;langs&quot;:[&quot;de&quot;,&quot;en&quot;,&quot;es&quot;,&quot;fr&quot;,&quot;it&quot;,&quot;ko&quot;,&quot;pt&quot;,&quot;zh&quot;],&quot;library&quot;:&quot;transformers&quot;,&quot;theme&quot;:&quot;light&quot;,&quot;version&quot;:&quot;v4.34.0&quot;,&quot;versions&quot;:[{&quot;version&quot;:&quot;main&quot;},{&quot;version&quot;:&quot;v4.34.0&quot;},{&quot;version&quot;:&quot;v4.33.3&quot;},{&quot;version&quot;:&quot;v4.33.2&quot;},{&quot;version&quot;:&quot;v4.33.0&quot;},{&quot;version&quot;:&quot;v4.32.1&quot;},{&quot;version&quot;:&quot;v4.32.0&quot;},{&quot;version&quot;:&quot;v4.31.0&quot;},{&quot;version&quot;:&quot;v4.30.0&quot;},{&quot;version&quot;:&quot;v4.29.1&quot;},{&quot;version&quot;:&quot;v4.29.0&quot;},{&quot;version&quot;:&quot;v4.28.1&quot;},{&quot;version&quot;:&quot;v4.28.0&quot;},{&quot;version&quot;:&quot;v4.27.2&quot;},{&quot;version&quot;:&quot;v4.27.1&quot;},{&quot;version&quot;:&quot;v4.27.0&quot;},{&quot;version&quot;:&quot;v4.26.1&quot;},{&quot;version&quot;:&quot;v4.26.0&quot;},{&quot;version&quot;:&quot;v4.25.1&quot;},{&quot;version&quot;:&quot;v4.24.0&quot;},{&quot;version&quot;:&quot;v4.23.1&quot;},{&quot;version&quot;:&quot;v4.23.0&quot;},{&quot;version&quot;:&quot;v4.22.2&quot;},{&quot;version&quot;:&quot;v4.22.1&quot;},{&quot;version&quot;:&quot;v4.22.0&quot;},{&quot;version&quot;:&quot;v4.21.3&quot;},{&quot;version&quot;:&quot;v4.21.2&quot;},{&quot;version&quot;:&quot;v4.21.1&quot;},{&quot;version&quot;:&quot;v4.21.0&quot;},{&quot;version&quot;:&quot;v4.20.1&quot;},{&quot;version&quot;:&quot;v4.20.0&quot;},{&quot;version&quot;:&quot;v4.19.4&quot;},{&quot;version&quot;:&quot;v4.19.3&quot;},{&quot;version&quot;:&quot;v4.19.2&quot;},{&quot;version&quot;:&quot;v4.19.0&quot;},{&quot;version&quot;:&quot;v4.18.0&quot;},{&quot;version&quot;:&quot;v4.17.0&quot;},{&quot;version&quot;:&quot;v4.16.2&quot;},{&quot;version&quot;:&quot;v4.16.1&quot;},{&quot;version&quot;:&quot;v4.16.0&quot;},{&quot;version&quot;:&quot;v4.15.0&quot;},{&quot;version&quot;:&quot;v4.14.1&quot;},{&quot;version&quot;:&quot;v4.13.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.5&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.4&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.10.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.10
.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.10.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.0.0&quot;},{&quot;version&quot;:&quot;doc-bu
# YOSO

## Overview

The YOSO model was proposed in [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh. YOSO approximates standard softmax self-attention via a Bernoulli sampling scheme based on Locality Sensitive Hashing (LSH). In principle, all the Bernoulli random variables can be sampled with a single hash.

The abstract from the paper is the following:

_Transformer-based models are widely used in natural language processing (NLP). Central to the transformer model is the self-attention mechanism, which captures the interactions of token pairs in the input sequences and depends quadratically on the sequence length. Training such models on longer sequences is expensive. In this paper, we show that a Bernoulli sampling attention mechanism based on Locality Sensitive Hashing (LSH) decreases the quadratic complexity of such models to linear. We bypass the quadratic cost by considering self-attention as a sum of individual tokens associated with Bernoulli random variables that can, in principle, be sampled at once by a single hash (although in practice, this number may be a small constant). This leads to an efficient sampling scheme to estimate self-attention which relies on specific modifications of LSH (to enable deployment on GPU architectures). We evaluate our algorithm on the GLUE benchmark with standard 512 sequence length where we see favorable performance relative to a standard pretrained Transformer. On the Long Range Arena (LRA) benchmark, for evaluating performance on long sequences, our method achieves results consistent with softmax self-attention but with sizable speed-ups and memory savings and often outperforms other efficient self-attention methods. Our code is available at this https URL_

Tips:

- The YOSO attention algorithm is implemented through custom CUDA kernels, functions written in CUDA C++ that can be executed multiple times in parallel on a GPU.
- The kernels provide a `fast_hash` function, which approximates the random projections of the queries and keys using the Fast Hadamard Transform. Using these hash codes, the `lsh_cumulation` function approximates self-attention via LSH-based Bernoulli sampling.
- To use the custom kernels, set `config.use_expectation = False`. To ensure that the kernels compile successfully, install the correct versions of PyTorch and cudatoolkit. By default, `config.use_expectation = True`, which uses YOSO-E and does not require compiling CUDA kernels; a usage sketch follows below.

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/yoso_architecture.jpg" alt="drawing" width="600">

<small>YOSO Attention Algorithm. Taken from the <a href="https://arxiv.org/abs/2111.09714">original paper</a>.</small>

This model was contributed by [novice03](https://huggingface.co/novice03). The original code can be found [here](https://github.com/mlpen/YOSO).
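As a concrete illustration of the last tip, here is a minimal sketch (not part of the original model card) of loading the publicly available `uw-madison/yoso-4096` checkpoint with the default YOSO-E setting and, alternatively, with the LSH-sampling kernels enabled. The CUDA requirement only applies when `use_expectation=False`.

```python
from transformers import YosoConfig, YosoModel

# Default behaviour: use_expectation=True selects YOSO-E and needs no custom CUDA kernels.
model = YosoModel.from_pretrained("uw-madison/yoso-4096")

# Illustrative override: enable the LSH-based sampling kernels instead.
# This path needs a matching PyTorch/cudatoolkit install so the kernels can compile.
config = YosoConfig.from_pretrained("uw-madison/yoso-4096", use_expectation=False)
kernel_model = YosoModel.from_pretrained("uw-madison/yoso-4096", config=config).to("cuda")
```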
## Documentation resources

- [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)
- [Masked language modeling task guide](../tasks/masked_language_modeling)
- [Multiple choice task guide](../tasks/multiple_choice)

## YosoConfig

`class transformers.YosoConfig` ([source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yoso/configuration_yoso.py#L29))

`( vocab_size = 50265, hidden_size = 768, num_hidden_layers = 12, num_attention_heads = 12, intermediate_size = 3072, hidden_act = 'gelu', hidden_dropout_prob = 0.1, attention_probs_dropout_prob = 0.1, max_position_embeddings = 4096, type_vocab_size = 1, initializer_range = 0.02, layer_norm_eps = 1e-12, position_embedding_type = 'absolute', use_expectation = True, hash_code_len = 9, num_hash = 64, conv_window = None, use_fast_hash = True, lsh_backward = True, pad_token_id = 1, bos_token_id = 0, eos_token_id = 2, **kwargs )`
font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoConfig.vocab_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoConfig.vocab_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>vocab_size</strong> (<code>int</code>, <em>optional</em>, defaults to 50265) — Vocabulary size of the YOSO model. Defines the number of different tokens that can be represented by the <code>inputs_ids</code> passed when calling <a href="/docs/transformers/v4.34.0/en/model_doc/yoso#transformers.YosoModel">YosoModel</a>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoConfig.hidden_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoConfig.hidden_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>hidden_size</strong> (<code>int</code>, <em>optional</em>, defaults to 768) — Dimension of the encoder layers and the pooler layer.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoConfig.num_hidden_layers" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoConfig.num_hidden_layers"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path 
d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>num_hidden_layers</strong> (<code>int</code>, <em>optional</em>, defaults to 12) — Number of hidden layers in the Transformer encoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoConfig.num_attention_heads" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoConfig.num_attention_heads"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>num_attention_heads</strong> (<code>int</code>, <em>optional</em>, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoConfig.intermediate_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoConfig.intermediate_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>intermediate_size</strong> (<code>int</code>, <em>optional</em>, defaults to 3072) — Dimension of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoConfig.hidden_act" 
class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoConfig.hidden_act"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>hidden_act</strong> (<code>str</code> or <code>function</code>, <em>optional</em>, defaults to <code>"gelu"</code>) — The non-linear activation function (function or string) in the encoder and pooler. If string, <code>"gelu"</code>, <code>"relu"</code>, <code>"selu"</code> and <code>"gelu_new"</code> are supported.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoConfig.hidden_dropout_prob" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoConfig.hidden_dropout_prob"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>hidden_dropout_prob</strong> (<code>float</code>, <em>optional</em>, defaults to 0.1) — The dropout probabilitiy for all fully connected layers in the embeddings, encoder, and pooler.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoConfig.attention_probs_dropout_prob" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoConfig.attention_probs_dropout_prob"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 
0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_probs_dropout_prob</strong> (<code>float</code>, <em>optional</em>, defaults to 0.1) — The dropout ratio for the attention probabilities.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoConfig.max_position_embeddings" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoConfig.max_position_embeddings"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>max_position_embeddings</strong> (<code>int</code>, <em>optional</em>, defaults to 512) — The maximum sequence length that this model might ever be used with. 
Typically set this to something large just in case (e.g., 512 or 1024 or 2048).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoConfig.type_vocab_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoConfig.type_vocab_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>type_vocab_size</strong> (<code>int</code>, <em>optional</em>, defaults to 2) — The vocabulary size of the <code>token_type_ids</code> passed when calling <a href="/docs/transformers/v4.34.0/en/model_doc/yoso#transformers.YosoModel">YosoModel</a>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoConfig.initializer_range" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoConfig.initializer_range"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>initializer_range</strong> (<code>float</code>, <em>optional</em>, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoConfig.layer_norm_eps" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoConfig.layer_norm_eps"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 
1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>layer_norm_eps</strong> (<code>float</code>, <em>optional</em>, defaults to 1e-12) — The epsilon used by the layer normalization layers.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoConfig.position_embedding_type" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoConfig.position_embedding_type"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>position_embedding_type</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"absolute"</code>) — Type of position embedding. Choose one of <code>"absolute"</code>, <code>"relative_key"</code>, <code>"relative_key_query"</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoConfig.use_expectation" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoConfig.use_expectation"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>use_expectation</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether or not to use YOSO Expectation. 
Overrides any effect of num_hash.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoConfig.hash_code_len" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoConfig.hash_code_len"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>hash_code_len</strong> (<code>int</code>, <em>optional</em>, defaults to 9) — The length of hashes generated by the hash functions.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoConfig.num_hash" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoConfig.num_hash"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>num_hash</strong> (<code>int</code>, <em>optional</em>, defaults to 64) — Number of hash functions used in <code>YosoSelfAttention</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoConfig.conv_window" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoConfig.conv_window"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 
0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>conv_window</strong> (<code>int</code>, <em>optional</em>) — Kernel size of depth-wise convolution.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoConfig.use_fast_hash" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoConfig.use_fast_hash"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>use_fast_hash</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether or not to use custom cuda kernels which perform fast random projection via hadamard transform.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoConfig.lsh_backward" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoConfig.lsh_backward"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>lsh_backward</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether or not to perform backpropagation using Locality Sensitive Hashing.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1cen7al">This is the configuration class to store the configuration of a <a href="/docs/transformers/v4.34.0/en/model_doc/yoso#transformers.YosoModel">YosoModel</a>. It is used to instantiate an YOSO model according to the specified arguments, defining the model architecture. 
Instantiating a configuration with the defaults will yield a similar configuration to that of the YOSO [uw-madison/yoso-4096](https://huggingface.co/uw-madison/yoso-4096) architecture.

Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information.

Example:

```python
>>> from transformers import YosoConfig, YosoModel

>>> # Initializing a YOSO uw-madison/yoso-4096 style configuration
>>> configuration = YosoConfig()

>>> # Initializing a model (with random weights) from the uw-madison/yoso-4096 style configuration
>>> model = YosoModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yoso/modeling_yoso.py#L741" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoModel.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoModel.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/yoso#transformers.YosoConfig">YosoConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-j4s9hu">The bare YOSO Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> sub-class. 
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.YosoModel.forward"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>forward</span></h4> <a id="transformers.YosoModel.forward" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.YosoModel.forward"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yoso/modeling_yoso.py#L766" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: 
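As a minimal sketch of the distinction above (assuming the default `YosoConfig` hyperparameters and the `uw-madison/yoso-4096` checkpoint used in the examples below), instantiating the model directly from a configuration yields randomly initialized weights, while `from_pretrained()` loads the trained ones:

```python
>>> from transformers import YosoConfig, YosoModel

>>> # Building from a configuration gives a randomly initialized model -- no weights are loaded
>>> configuration = YosoConfig()
>>> model = YosoModel(configuration)

>>> # Loading pretrained weights instead
>>> model = YosoModel.from_pretrained("uw-madison/yoso-4096")
```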
#### forward

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yoso/modeling_yoso.py#L766)

( input_ids: typing.Optional[torch.Tensor] = None, attention_mask: typing.Optional[torch.Tensor] = None, token_type_ids: typing.Optional[torch.Tensor] = None, position_ids: typing.Optional[torch.Tensor] = None, head_mask: typing.Optional[torch.Tensor] = None, inputs_embeds: typing.Optional[torch.Tensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → [transformers.modeling_outputs.BaseModelOutputWithCrossAttentions](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutputWithCrossAttentions) or `tuple(torch.FloatTensor)`

Parameters:

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.__call__()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details. [What are input IDs?](../glossary#input-ids)
- **attention_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.

  [What are attention masks?](../glossary#attention-mask)
- **token_type_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:
  - 0 corresponds to a *sentence A* token,
  - 1 corresponds to a *sentence B* token.

  [What are token type IDs?](../glossary#token-type-ids)
- **position_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids)
- **head_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`:
  - 1 indicates the head is **not masked**,
  - 0 indicates the head is **masked**.
- **inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.

Returns: [transformers.modeling_outputs.BaseModelOutputWithCrossAttentions](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutputWithCrossAttentions) or `tuple(torch.FloatTensor)`

A [transformers.modeling_outputs.BaseModelOutputWithCrossAttentions](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutputWithCrossAttentions) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([YosoConfig](/docs/transformers/v4.34.0/en/model_doc/yoso#transformers.YosoConfig)) and inputs.

- **last_hidden_state** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`) — Sequence of hidden-states at the output of the last layer of the model.
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
- **cross_attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` and `config.add_cross_attention=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.

The [YosoModel](/docs/transformers/v4.34.0/en/model_doc/yoso#transformers.YosoModel) forward method, overrides the `__call__` special method.

Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example:

```python
>>> from transformers import AutoTokenizer, YosoModel
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("uw-madison/yoso-4096")
>>> model = YosoModel.from_pretrained("uw-madison/yoso-4096")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)

>>> last_hidden_states = outputs.last_hidden_state
```
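As a follow-up sketch of the optional return fields described above (same checkpoint as the example, but the variable names here are illustrative), the per-layer hidden states can be requested explicitly with `output_hidden_states=True`:

```python
>>> import torch
>>> from transformers import AutoTokenizer, YosoModel

>>> tokenizer = AutoTokenizer.from_pretrained("uw-madison/yoso-4096")
>>> model = YosoModel.from_pretrained("uw-madison/yoso-4096")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs, output_hidden_states=True)

>>> # one entry for the embedding output plus one per encoder layer
>>> num_states = len(outputs.hidden_states)
>>> # each entry has shape (batch_size, sequence_length, hidden_size)
>>> first_layer_states = outputs.hidden_states[0]
```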
## YosoForMaskedLM

### class transformers.YosoForMaskedLM

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yoso/modeling_yoso.py#L853)

( config )

Parameters:

- **config** ([YosoConfig](/docs/transformers/v4.34.0/en/model_doc/yoso#transformers.YosoConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

YOSO Model with a `language modeling` head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
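For a quick, hedged sketch of using this head through the high-level `pipeline` API (assuming the `uw-madison/yoso-4096` checkpoint's tokenizer defines a `[MASK]` token, as in the doctest example further below):

```python
>>> from transformers import pipeline

>>> # fill-mask pipeline backed by YosoForMaskedLM; predictions is a list of candidate dicts
>>> fill_mask = pipeline("fill-mask", model="uw-madison/yoso-4096")
>>> predictions = fill_mask("The capital of France is [MASK].")
>>> top_token = predictions[0]["token_str"]
```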
data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_type_ids<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">position_ids<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">inputs_embeds<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">labels<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.MaskedLMOutput">transformers.modeling_outputs.MaskedLMOutput</a> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 10 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 
items-start"><a id="transformers.YosoForMaskedLM.forward.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoForMaskedLM.forward.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>) — Indices of input sequence tokens in the vocabulary.<p></p> <p>Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p> <p><a href="../glossary#input-ids">What are input IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoForMaskedLM.forward.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoForMaskedLM.forward.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoForMaskedLM.forward.token_type_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoForMaskedLM.forward.token_type_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_type_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in <code>[0, 1]</code>:<p></p> <ul> <li>0 corresponds to a <em>sentence A</em> token,</li> <li>1 corresponds to a <em>sentence B</em> token.</li> </ul> <p><a href="../glossary#token-type-ids">What are token type IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoForMaskedLM.forward.position_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoForMaskedLM.forward.position_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>position_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Indices of positions of each input sequence tokens in the position embeddings. 
Selected in the range <code>[0, config.max_position_embeddings - 1]</code>.<p></p> <p><a href="../glossary#position-ids">What are position IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoForMaskedLM.forward.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoForMaskedLM.forward.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>head_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(num_heads,)</code> or <code>(num_layers, num_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the self-attention modules. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoForMaskedLM.forward.inputs_embeds" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoForMaskedLM.forward.inputs_embeds"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>inputs_embeds</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>, <em>optional</em>) — Optionally, instead of passing <code>input_ids</code> you can choose to directly pass an embedded representation. 
This is useful if you want more control over how to convert <em>input_ids</em> indices into associated vectors than the model’s internal embedding lookup matrix.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoForMaskedLM.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoForMaskedLM.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. See <code>attentions</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoForMaskedLM.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoForMaskedLM.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. 
See <code>hidden_states</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoForMaskedLM.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoForMaskedLM.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoForMaskedLM.forward.labels" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoForMaskedLM.forward.labels"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>labels</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Labels for computing the masked language modeling loss. 
Indices should be in <code>[-100, 0, ..., config.vocab_size]</code> (see <code>input_ids</code> docstring) Tokens with indices set to <code>-100</code> are ignored (masked), the loss is only computed for the tokens with labels in <code>[0, ..., config.vocab_size]</code>.</span></span> </li></ul> <div id="transformers.YosoForMaskedLM.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.MaskedLMOutput">transformers.modeling_outputs.MaskedLMOutput</a> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.MaskedLMOutput">transformers.modeling_outputs.MaskedLMOutput</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/yoso#transformers.YosoConfig">YosoConfig</a>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<code>torch.FloatTensor</code> of shape <code>(1,)</code>, <em>optional</em>, returned when <code>labels</code> is provided) — Masked language modeling (MLM) loss.</p> </li> <li> <p><strong>logits</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, config.vocab_size)</code>) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-1umva8r">The <a href="/docs/transformers/v4.34.0/en/model_doc/yoso#transformers.YosoForMaskedLM">YosoForMaskedLM</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative 
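Before the full doctest example below, here is a minimal sketch (with purely illustrative tensors and a hypothetical vocabulary size) of how the `labels`/`-100` convention described above maps onto the loss reported in `MaskedLMOutput.loss`:

```python
>>> import torch
>>> import torch.nn.functional as F

>>> # Illustrative tensors only; shape is (batch_size, sequence_length, vocab_size)
>>> logits = torch.randn(1, 4, 100)
>>> labels = torch.tensor([[-100, 42, -100, -100]])  # only the second position carries a label

>>> # Positions set to -100 are ignored, mirroring the masked language modeling loss
>>> loss = F.cross_entropy(logits.view(-1, logits.size(-1)), labels.view(-1), ignore_index=-100)
```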
Example:

```python
>>> from transformers import AutoTokenizer, YosoForMaskedLM
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("uw-madison/yoso-4096")
>>> model = YosoForMaskedLM.from_pretrained("uw-madison/yoso-4096")

>>> inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> # retrieve index of [MASK]
>>> mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]

>>> predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)

>>> labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
>>> # mask labels of non-[MASK] tokens
>>> labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)

>>> outputs = model(**inputs, labels=labels)
```
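If you want to inspect the result, a small follow-up sketch (not part of the example above; it simply reuses the variables the example defines) could decode the predicted token and read the MLM loss:

```python
>>> # follow-up sketch: decode the predicted [MASK] token and inspect the loss
>>> tokenizer.decode(predicted_token_id)
>>> outputs.loss.item()
```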
## YosoForSequenceClassification

### class transformers.YosoForSequenceClassification

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yoso/modeling_yoso.py#L956)

( config )

Parameters

- **config** ([YosoConfig](/docs/transformers/v4.34.0/en/model_doc/yoso#transformers.YosoConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

YOSO Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output), e.g. for GLUE tasks. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

#### forward

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yoso/modeling_yoso.py#L966)
( input_ids: typing.Optional[torch.Tensor] = None, attention_mask: typing.Optional[torch.Tensor] = None, token_type_ids: typing.Optional[torch.Tensor] = None, position_ids: typing.Optional[torch.Tensor] = None, head_mask: typing.Optional[torch.Tensor] = None, inputs_embeds: typing.Optional[torch.Tensor] = None, labels: typing.Optional[torch.Tensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → [transformers.modeling_outputs.SequenceClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput) or `tuple(torch.FloatTensor)`

Parameters

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.__call__()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details. [What are input IDs?](../glossary#input-ids)
- **attention_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask)
- **token_type_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: 0 corresponds to a *sentence A* token, 1 corresponds to a *sentence B* token. [What are token type IDs?](../glossary#token-type-ids)
- **position_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids)
- **head_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: 1 indicates the head is **not masked**, 0 indicates the head is **masked**.
- **inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.
- **labels** (`torch.LongTensor` of shape `(batch_size,)`, *optional*) — Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss); if `config.num_labels > 1` a classification loss is computed (Cross-Entropy).

Returns

[transformers.modeling_outputs.SequenceClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput) or `tuple(torch.FloatTensor)`

A [transformers.modeling_outputs.SequenceClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([YosoConfig](/docs/transformers/v4.34.0/en/model_doc/yoso#transformers.YosoConfig)) and inputs.

- **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) — Classification (or regression if `config.num_labels==1`) loss.
- **logits** (`torch.FloatTensor` of shape `(batch_size, config.num_labels)`) — Classification (or regression if `config.num_labels==1`) scores (before SoftMax).
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The [YosoForSequenceClassification](/docs/transformers/v4.34.0/en/model_doc/yoso#transformers.YosoForSequenceClassification) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example of single-label classification:

```python
>>> import torch
>>> from transformers import AutoTokenizer, YosoForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("uw-madison/yoso-4096")
>>> model = YosoForSequenceClassification.from_pretrained("uw-madison/yoso-4096")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_class_id = logits.argmax().item()

>>> # To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
>>> num_labels = len(model.config.id2label)
>>> model = YosoForSequenceClassification.from_pretrained("uw-madison/yoso-4096", num_labels=num_labels)

>>> labels = torch.tensor([1])
>>> loss = model(**inputs, labels=labels).loss
```
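To turn the predicted class id from the snippet above into a human-readable label, one option (assuming the checkpoint's config defines meaningful label names; the base `uw-madison/yoso-4096` checkpoint only carries generic `LABEL_0`/`LABEL_1` entries) is to look it up in `model.config.id2label`:

```python
>>> # follow-up sketch: map the predicted class id to its label name
>>> model.config.id2label[predicted_class_id]
```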
Example of multi-label classification:

```python
>>> import torch
>>> from transformers import AutoTokenizer, YosoForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("uw-madison/yoso-4096")
>>> model = YosoForSequenceClassification.from_pretrained("uw-madison/yoso-4096", problem_type="multi_label_classification")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]

>>> # To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
>>> num_labels = len(model.config.id2label)
>>> model = YosoForSequenceClassification.from_pretrained(
...     "uw-madison/yoso-4096", num_labels=num_labels, problem_type="multi_label_classification"
... )

>>> labels = torch.sum(
...     torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)
>>> loss = model(**inputs, labels=labels).loss
```
## YosoForMultipleChoice

### class transformers.YosoForMultipleChoice

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yoso/modeling_yoso.py#L1047)

( config )

Parameters

- **config** ([YosoConfig](/docs/transformers/v4.34.0/en/model_doc/yoso#transformers.YosoConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

YOSO Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax), e.g. for RocStories/SWAG tasks. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
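The forward example for this head is not included in this excerpt, so here is a minimal usage sketch, assuming the standard multiple-choice layout in which every input tensor has shape `(batch_size, num_choices, sequence_length)`; the prompt and choice strings below are made up for illustration:

```python
>>> import torch
>>> from transformers import AutoTokenizer, YosoForMultipleChoice

>>> tokenizer = AutoTokenizer.from_pretrained("uw-madison/yoso-4096")
>>> model = YosoForMultipleChoice.from_pretrained("uw-madison/yoso-4096")

>>> prompt = "The weather today is sunny, so we decided to"
>>> choice0 = "go for a walk in the park."
>>> choice1 = "stay inside and close the curtains."

>>> # encode the prompt against each choice, then add a batch dimension -> (1, num_choices, seq_len)
>>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
>>> inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}

>>> with torch.no_grad():
...     logits = model(**inputs).logits  # shape (1, num_choices)

>>> predicted_choice = logits.argmax(dim=-1).item()
```

The full reference for the `forward` method and its parameters follows below.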
#### forward

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yoso/modeling_yoso.py#L1058)

( input_ids: typing.Optional[torch.Tensor] = None, attention_mask: typing.Optional[torch.Tensor] = None, token_type_ids: typing.Optional[torch.Tensor] = None, position_ids: typing.Optional[torch.Tensor] = None, head_mask: typing.Optional[torch.Tensor] = None, inputs_embeds: typing.Optional[torch.Tensor] = None, labels: typing.Optional[torch.Tensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → [transformers.modeling_outputs.MultipleChoiceModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.MultipleChoiceModelOutput) or `tuple(torch.FloatTensor)`

Parameters

- **input_ids** (`torch.LongTensor` of shape `(batch_size, num_choices, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.__call__()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details. [What are input IDs?](../glossary#input-ids)
- **attention_mask** (`torch.FloatTensor` of shape `(batch_size, num_choices, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask)
- **token_type_ids** (`torch.LongTensor` of shape `(batch_size, num_choices, sequence_length)`, *optional*) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: 0 corresponds to a *sentence A* token, 1 corresponds to a *sentence B* token. [What are token type IDs?](../glossary#token-type-ids)
- **position_ids** (`torch.LongTensor` of shape `(batch_size, num_choices, sequence_length)`, *optional*) — Indices of positions of each input sequence token in the position embeddings.
Selected in the range <code>[0, config.max_position_embeddings - 1]</code>.<p></p> <p><a href="../glossary#position-ids">What are position IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoForMultipleChoice.forward.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoForMultipleChoice.forward.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>head_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(num_heads,)</code> or <code>(num_layers, num_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the self-attention modules. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoForMultipleChoice.forward.inputs_embeds" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoForMultipleChoice.forward.inputs_embeds"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>inputs_embeds</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, num_choices, sequence_length, hidden_size)</code>, <em>optional</em>) — Optionally, instead of passing <code>input_ids</code> you can choose to directly pass an embedded representation. 
This is useful if you want more control over how to convert <em>input_ids</em> indices into associated vectors than the model’s internal embedding lookup matrix.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoForMultipleChoice.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoForMultipleChoice.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. See <code>attentions</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoForMultipleChoice.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoForMultipleChoice.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. 
See <code>hidden_states</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoForMultipleChoice.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoForMultipleChoice.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoForMultipleChoice.forward.labels" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoForMultipleChoice.forward.labels"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>labels</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size,)</code>, <em>optional</em>) — Labels for computing the multiple choice classification loss. Indices should be in <code>[0, ..., num_choices-1]</code> where <code>num_choices</code> is the size of the second dimension of the input tensors. 
(See <code>input_ids</code> above)</span></span> </li></ul> <div id="transformers.YosoForMultipleChoice.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.MultipleChoiceModelOutput">transformers.modeling_outputs.MultipleChoiceModelOutput</a> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.MultipleChoiceModelOutput">transformers.modeling_outputs.MultipleChoiceModelOutput</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/yoso#transformers.YosoConfig">YosoConfig</a>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<code>torch.FloatTensor</code> of shape <em>(1,)</em>, <em>optional</em>, returned when <code>labels</code> is provided) — Classification loss.</p> </li> <li> <p><strong>logits</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, num_choices)</code>) — <em>num_choices</em> is the second dimension of the input tensors. (see <em>input_ids</em> above).</p> <p>Classification scores (before SoftMax).</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-1hpmfcj">The <a href="/docs/transformers/v4.34.0/en/model_doc/yoso#transformers.YosoForMultipleChoice">YosoForMultipleChoice</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.YosoForMultipleChoice.forward.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 
Example:

```python
>>> from transformers import AutoTokenizer, YosoForMultipleChoice
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("uw-madison/yoso-4096")
>>> model = YosoForMultipleChoice.from_pretrained("uw-madison/yoso-4096")

>>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
>>> choice0 = "It is eaten with a fork and a knife."
>>> choice1 = "It is eaten while held in the hand."
>>> labels = torch.tensor(0).unsqueeze(0)  # choice0 is correct (according to Wikipedia ;)), batch size 1

>>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
>>> outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels)  # batch size is 1

>>> # the linear classifier still needs to be trained
>>> loss = outputs.loss
>>> logits = outputs.logits
```
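Since `logits` has shape `(batch_size, num_choices)`, the predicted answer at inference time is simply the choice with the highest score. The following is a minimal sketch of that pattern, reusing the checkpoint and choices from the example above; note that the classification head of this checkpoint is untrained, so the prediction is not meaningful until the model has been fine-tuned:

```python
>>> import torch
>>> from transformers import AutoTokenizer, YosoForMultipleChoice

>>> tokenizer = AutoTokenizer.from_pretrained("uw-madison/yoso-4096")
>>> model = YosoForMultipleChoice.from_pretrained("uw-madison/yoso-4096")

>>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
>>> choices = ["It is eaten with a fork and a knife.", "It is eaten while held in the hand."]

>>> # Pair the prompt with every choice, then add a leading batch dimension so each
>>> # tensor has shape (batch_size=1, num_choices, sequence_length).
>>> encoding = tokenizer([prompt] * len(choices), choices, return_tensors="pt", padding=True)
>>> inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}

>>> with torch.no_grad():
...     logits = model(**inputs).logits  # shape (1, num_choices)

>>> predicted_choice = choices[logits.argmax(dim=-1).item()]
```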
class="hljs-literal">True</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model(**{k: v.unsqueeze(<span class="hljs-number">0</span>) <span class="hljs-keyword">for</span> k, v <span class="hljs-keyword">in</span> encoding.items()}, labels=labels) <span class="hljs-comment"># batch size is 1</span> <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># the linear classifier still needs to be trained</span> <span class="hljs-meta">&gt;&gt;&gt; </span>loss = outputs.loss <span class="hljs-meta">&gt;&gt;&gt; </span>logits = outputs.logits</pre></div></div></div></div> <h2 class="relative group"><a id="transformers.YosoForTokenClassification" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoForTokenClassification"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-14p7cyz">YosoForTokenClassification</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.YosoForTokenClassification"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">YosoForTokenClassification</span></span></h3> <a id="transformers.YosoForTokenClassification" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.YosoForTokenClassification"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path 
d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yoso/modeling_yoso.py#L1138" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoForTokenClassification.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoForTokenClassification.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/yoso#transformers.YosoConfig">YosoConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-w5lgoi">YOSO Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model is a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> sub-class. 
#### forward

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yoso/modeling_yoso.py#L1150)

`( input_ids: typing.Optional[torch.Tensor] = None, attention_mask: typing.Optional[torch.Tensor] = None, token_type_ids: typing.Optional[torch.Tensor] = None, position_ids: typing.Optional[torch.Tensor] = None, head_mask: typing.Optional[torch.Tensor] = None, inputs_embeds: typing.Optional[torch.Tensor] = None, labels: typing.Optional[torch.Tensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None )` → [TokenClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.TokenClassifierOutput) or `tuple(torch.FloatTensor)`

Parameters:

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`): Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and `PreTrainedTokenizer.__call__()` for details. [What are input IDs?](../glossary#input-ids)
- **attention_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*): Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask)
- **token_type_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: 0 corresponds to a *sentence A* token, 1 corresponds to a *sentence B* token. [What are token type IDs?](../glossary#token-type-ids)
- **position_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids)
- **head_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: 1 indicates the head is **not masked**, 0 indicates the head is **masked**.
- **inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **output_attentions** (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*): Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*): Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.
- **labels** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for computing the token classification loss. Indices should be in `[0, ..., config.num_labels - 1]`.

Returns: [TokenClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.TokenClassifierOutput) or `tuple(torch.FloatTensor)`

A [TokenClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.TokenClassifierOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([YosoConfig](/docs/transformers/v4.34.0/en/model_doc/yoso#transformers.YosoConfig)) and inputs.

- **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided): Classification loss.
- **logits** (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.num_labels)`): Classification scores (before SoftMax).
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The [YosoForTokenClassification](/docs/transformers/v4.34.0/en/model_doc/yoso#transformers.YosoForTokenClassification) forward method overrides the `__call__` special method. Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:

```python
>>> from transformers import AutoTokenizer, YosoForTokenClassification
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("uw-madison/yoso-4096")
>>> model = YosoForTokenClassification.from_pretrained("uw-madison/yoso-4096")

>>> inputs = tokenizer(
...     "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
... )

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_token_class_ids = logits.argmax(-1)

>>> # Note that tokens are classified rather than input words, which means that
>>> # there might be more predicted token classes than words.
>>> # Multiple token classes might account for the same word
>>> predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]

>>> labels = predicted_token_class_ids
>>> loss = model(**inputs, labels=labels).loss
```
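As the comments above note, predictions are made per token, so several sub-word tokens can map onto the same input word. A common heuristic is to keep only the prediction of the first sub-word token of each word. The sketch below assumes the objects from the example above (`inputs`, `logits`, `model`) and a fast tokenizer exposing `word_ids()`; it is one possible aggregation strategy, not part of the model's API:

```python
>>> # Keep only the prediction for the first sub-word token of every word.
>>> word_ids = inputs.word_ids(batch_index=0)  # token index -> word index (fast tokenizers only)
>>> token_predictions = logits.argmax(-1)[0].tolist()

>>> word_level_classes = []
>>> previous_word_id = None
>>> for word_id, pred in zip(word_ids, token_predictions):
...     if word_id is None or word_id == previous_word_id:
...         continue  # skip special tokens and continuation sub-word tokens
...     word_level_classes.append(model.config.id2label[pred])
...     previous_word_id = word_id
```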
</span> logits = model(**inputs).logits <span class="hljs-meta">&gt;&gt;&gt; </span>predicted_token_class_ids = logits.argmax(-<span class="hljs-number">1</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># Note that tokens are classified rather then input words which means that</span> <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># there might be more predicted token classes than words.</span> <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># Multiple token classes might account for the same word</span> <span class="hljs-meta">&gt;&gt;&gt; </span>predicted_tokens_classes = [model.config.id2label[t.item()] <span class="hljs-keyword">for</span> t <span class="hljs-keyword">in</span> predicted_token_class_ids[<span class="hljs-number">0</span>]] <span class="hljs-meta">&gt;&gt;&gt; </span>labels = predicted_token_class_ids <span class="hljs-meta">&gt;&gt;&gt; </span>loss = model(**inputs, labels=labels).loss</pre></div></div></div></div> <h2 class="relative group"><a id="transformers.YosoForQuestionAnswering" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoForQuestionAnswering"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1y6hw9e">YosoForQuestionAnswering</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.YosoForQuestionAnswering"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span 
class="font-semibold">YosoForQuestionAnswering</span></span></h3> <a id="transformers.YosoForQuestionAnswering" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.YosoForQuestionAnswering"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yoso/modeling_yoso.py#L1223" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoForQuestionAnswering.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoForQuestionAnswering.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/yoso#transformers.YosoConfig">YosoConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. 
Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1b6vrsw">YOSO Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute <code>span start logits</code> and <code>span end logits</code>). This model is a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.YosoForQuestionAnswering.forward"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>forward</span></h4> <a id="transformers.YosoForQuestionAnswering.forward" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.YosoForQuestionAnswering.forward"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yoso/modeling_yoso.py#L1236" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> 
<span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_type_ids<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">position_ids<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">inputs_embeds<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">start_positions<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">end_positions<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.QuestionAnsweringModelOutput">transformers.modeling_outputs.QuestionAnsweringModelOutput</a> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 11 
parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoForQuestionAnswering.forward.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoForQuestionAnswering.forward.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>) — Indices of input sequence tokens in the vocabulary.<p></p> <p>Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p> <p><a href="../glossary#input-ids">What are input IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoForQuestionAnswering.forward.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoForQuestionAnswering.forward.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoForQuestionAnswering.forward.token_type_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoForQuestionAnswering.forward.token_type_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_type_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in <code>[0, 1]</code>:<p></p> <ul> <li>0 corresponds to a <em>sentence A</em> token,</li> <li>1 corresponds to a <em>sentence B</em> token.</li> </ul> <p><a href="../glossary#token-type-ids">What are token type IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoForQuestionAnswering.forward.position_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoForQuestionAnswering.forward.position_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>position_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Indices of positions of each input sequence tokens in the position embeddings. 
Selected in the range <code>[0, config.max_position_embeddings - 1]</code>.<p></p> <p><a href="../glossary#position-ids">What are position IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoForQuestionAnswering.forward.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoForQuestionAnswering.forward.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>head_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(num_heads,)</code> or <code>(num_layers, num_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the self-attention modules. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoForQuestionAnswering.forward.inputs_embeds" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoForQuestionAnswering.forward.inputs_embeds"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>inputs_embeds</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>, <em>optional</em>) — Optionally, instead of passing <code>input_ids</code> you can choose to directly pass an embedded representation. 
This is useful if you want more control over how to convert <em>input_ids</em> indices into associated vectors than the model’s internal embedding lookup matrix.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoForQuestionAnswering.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoForQuestionAnswering.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. See <code>attentions</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoForQuestionAnswering.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoForQuestionAnswering.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. 
See <code>hidden_states</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoForQuestionAnswering.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoForQuestionAnswering.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoForQuestionAnswering.forward.start_positions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoForQuestionAnswering.forward.start_positions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>start_positions</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size,)</code>, <em>optional</em>) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (<code>sequence_length</code>). 
Position outside of the sequence are not taken into account for computing the loss.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YosoForQuestionAnswering.forward.end_positions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoForQuestionAnswering.forward.end_positions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>end_positions</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size,)</code>, <em>optional</em>) — Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (<code>sequence_length</code>). Position outside of the sequence are not taken into account for computing the loss.</span></span> </li></ul> <div id="transformers.YosoForQuestionAnswering.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.QuestionAnsweringModelOutput">transformers.modeling_outputs.QuestionAnsweringModelOutput</a> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.QuestionAnsweringModelOutput">transformers.modeling_outputs.QuestionAnsweringModelOutput</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/yoso#transformers.YosoConfig">YosoConfig</a>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<code>torch.FloatTensor</code> of shape <code>(1,)</code>, <em>optional</em>, returned when <code>labels</code> is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.</p> </li> <li> <p><strong>start_logits</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>) — Span-start scores (before SoftMax).</p> </li> <li> <p><strong>end_logits</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>) — Span-end scores (before SoftMax).</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of 
<code>torch.FloatTensor</code> (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-5qavbh">The <a href="/docs/transformers/v4.34.0/en/model_doc/yoso#transformers.YosoForQuestionAnswering">YosoForQuestionAnswering</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.YosoForQuestionAnswering.forward.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YosoForQuestionAnswering.forward.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none 
transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, YosoForQuestionAnswering <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"uw-madison/yoso-4096"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = YosoForQuestionAnswering.from_pretrained(<span class="hljs-string">"uw-madison/yoso-4096"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>question, text = <span class="hljs-string">"Who was Jim Henson?"</span>, <span class="hljs-string">"Jim Henson was a nice puppet"</span> <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(question, text, return_tensors=<span class="hljs-string">"pt"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">with</span> torch.no_grad(): <span class="hljs-meta">... </span> outputs = model(**inputs) <span class="hljs-meta">&gt;&gt;&gt; </span>answer_start_index = outputs.start_logits.argmax() <span class="hljs-meta">&gt;&gt;&gt; </span>answer_end_index = outputs.end_logits.argmax() <span class="hljs-meta">&gt;&gt;&gt; </span>predict_answer_tokens = inputs.input_ids[<span class="hljs-number">0</span>, answer_start_index : answer_end_index + <span class="hljs-number">1</span>] <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># target is "nice puppet"</span> <span class="hljs-meta">&gt;&gt;&gt; </span>target_start_index = torch.tensor([<span class="hljs-number">14</span>]) <span class="hljs-meta">&gt;&gt;&gt; </span>target_end_index = torch.tensor([<span class="hljs-number">15</span>]) <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index) <span class="hljs-meta">&gt;&gt;&gt; </span>loss = outputs.loss</pre></div></div></div></div> <p></p> <div id="svelte-announcer" aria-live="assertive" aria-atomic="true" style="position: absolute; left: 0px; top: 0px; clip: rect(0px, 0px, 0px, 0px); clip-path: inset(50%); overflow: hidden; white-space: nowrap; width: 1px; height: 1px;"></div></div> <div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/xlnet" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>XLNet</a> <a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/beit" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">BEiT<span class="ml-2 translate-y-px">→</span></a></div></div></div> <div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" 
2023-10-05T13:33:40.013Z
YOLOS
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/yolos
# YOLOS ## Overview The YOLOS model was proposed in [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu. YOLOS proposes to just leverage the plain [Vision Transformer (ViT)](vit) for object detection, inspired by DETR. It turns out that a base-sized encoder-only Transformer can also achieve 42 AP on COCO, similar to DETR and much more complex frameworks such as Faster R-CNN. The abstract from the paper is the following: _Can Transformer perform 2D object- and region-level recognition from a pure sequence-to-sequence perspective with minimal knowledge about the 2D spatial structure? To answer this question, we present You Only Look at One Sequence (YOLOS), a series of object detection models based on the vanilla Vision Transformer with the fewest possible modifications, region priors, as well as inductive biases of the target task. We find that YOLOS pre-trained on the mid-sized ImageNet-1k dataset only can already achieve quite competitive performance on the challenging COCO object detection benchmark, e.g., YOLOS-Base directly adopted from BERT-Base architecture can obtain 42.0 box AP on COCO val. We also discuss the impacts as well as limitations of current pre-train schemes and model scaling strategies for Transformer in vision through YOLOS._ Tips: - One can use [YolosImageProcessor](/docs/transformers/v4.34.0/en/model_doc/yolos#transformers.YolosImageProcessor) for preparing images (and optional targets) for the model. Contrary to [DETR](detr), YOLOS doesn’t require a `pixel_mask` to be created. ![drawing](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/yolos_architecture.png) YOLOS architecture. Taken from the [original paper](https://arxiv.org/abs/2106.00666). This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/hustvl/YOLOS). ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with YOLOS. - All example notebooks illustrating inference + fine-tuning [YolosForObjectDetection](/docs/transformers/v4.34.0/en/model_doc/yolos#transformers.YolosForObjectDetection) on a custom dataset can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/YOLOS). - See also: [Object detection task guide](../tasks/object_detection) If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. 
## YolosConfig

### class transformers.YolosConfig

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yolos/configuration_yolos.py#L35) ( hidden\_size = 768, num\_hidden\_layers = 12, num\_attention\_heads = 12, intermediate\_size = 3072, hidden\_act = 'gelu', hidden\_dropout\_prob = 0.0, attention\_probs\_dropout\_prob = 0.0, initializer\_range = 0.02, layer\_norm\_eps = 1e-12, image\_size = \[512, 864\], patch\_size = 16, num\_channels = 3, qkv\_bias = True, num\_detection\_tokens = 100, use\_mid\_position\_embeddings = True, auxiliary\_loss = False, class\_cost = 1, bbox\_cost = 5, giou\_cost = 2, bbox\_loss\_coefficient = 5, giou\_loss\_coefficient = 2, eos\_coefficient = 0.1, \*\*kwargs )

This is the configuration class to store the configuration of a [YolosModel](/docs/transformers/v4.34.0/en/model_doc/yolos#transformers.YolosModel). It is used to instantiate a YOLOS model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the YOLOS [hustvl/yolos-base](https://huggingface.co/hustvl/yolos-base) architecture.

Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information.

Example:

```
>>> from transformers import YolosConfig, YolosModel

>>> configuration = YolosConfig()

>>> model = YolosModel(configuration)

>>> configuration = model.config
```

## YolosImageProcessor

### class transformers.YolosImageProcessor

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yolos/image_processing_yolos.py#L673) ( format: typing.Union\[str, transformers.models.yolos.image\_processing\_yolos.AnnotionFormat\] = <AnnotionFormat.COCO\_DETECTION: 'coco\_detection'>, do\_resize: bool = True, size: typing.Dict\[str, int\] = None, resample: Resampling = <Resampling.BILINEAR: 2>, do\_rescale: bool = True, rescale\_factor: typing.Union\[int, float\] = 0.00392156862745098, do\_normalize: bool = True, image\_mean: typing.Union\[float, typing.List\[float\]\] = None, image\_std: typing.Union\[float, typing.List\[float\]\] = None, do\_pad: bool = True, \*\*kwargs )

Constructs a YOLOS image processor.
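For orientation, the snippet below is a minimal sketch of how the image processor is typically applied to a single image before calling the model. The checkpoint name and the local file path are placeholders; any YOLOS checkpoint on the Hub can be substituted.

```
from PIL import Image
from transformers import YolosImageProcessor

# Load the processor settings bundled with a checkpoint (checkpoint name is illustrative).
image_processor = YolosImageProcessor.from_pretrained("hustvl/yolos-tiny")

# Open a local image; the path is a placeholder.
image = Image.open("example.jpg")

# Resize, rescale, normalize and pad the image according to the settings above.
# Contrary to DETR, the YOLOS model itself does not need a pixel_mask.
inputs = image_processor(images=image, return_tensors="pt")
print(inputs["pixel_values"].shape)  # (batch_size, num_channels, height, width)
```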
#### preprocess

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yolos/image_processing_yolos.py#L1011) ( images: typing.Union\[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List\[ForwardRef('PIL.Image.Image')\], typing.List\[numpy.ndarray\], typing.List\[ForwardRef('torch.Tensor')\]\], annotations: typing.Union\[typing.Dict\[str, typing.Union\[int, str, typing.List\[typing.Dict\]\]\], typing.List\[typing.Dict\[str, typing.Union\[int, str, typing.List\[typing.Dict\]\]\]\], NoneType\] = None, return\_segmentation\_masks: bool = None, masks\_path: typing.Union\[str, pathlib.Path, NoneType\] = None, do\_resize: typing.Optional\[bool\] = None, size: typing.Union\[typing.Dict\[str, int\], NoneType\] = None, resample = None, do\_rescale: typing.Optional\[bool\] = None, rescale\_factor: typing.Union\[int, float, NoneType\] = None, do\_normalize: typing.Optional\[bool\] = None, image\_mean: typing.Union\[float, typing.List\[float\], NoneType\] = None, image\_std: typing.Union\[float, typing.List\[float\], NoneType\] = None, do\_pad: typing.Optional\[bool\] = None, format: typing.Union\[str, transformers.models.yolos.image\_processing\_yolos.AnnotionFormat, NoneType\] = None, return\_tensors: typing.Union\[str, transformers.utils.generic.TensorType, NoneType\] = None, data\_format: typing.Union\[str, transformers.image\_utils.ChannelDimension\] = <ChannelDimension.FIRST: 'channels\_first'>, input\_data\_format: typing.Union\[str, transformers.image\_utils.ChannelDimension, NoneType\] = None, \*\*kwargs )

Preprocess an image or a batch of images so that it can be used by the model.

#### pad

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yolos/image_processing_yolos.py#L956) ( images: typing.List\[numpy.ndarray\], constant\_values: typing.Union\[float, typing.Iterable\[float\]\] = 0, return\_pixel\_mask: bool = False, return\_tensors: typing.Union\[str, transformers.utils.generic.TensorType, NoneType\] = None, data\_format: typing.Optional\[transformers.image\_utils.ChannelDimension\] = None, input\_data\_format: typing.Union\[str, transformers.image\_utils.ChannelDimension, NoneType\] = None )

Pads a batch of images to the bottom and right of the image with zeros to the size of largest height and width in the batch and optionally returns their corresponding pixel mask.

#### post\_process\_object\_detection

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yolos/image_processing_yolos.py#L1294) ( outputs, threshold: float = 0.5, target\_sizes: typing.Union\[transformers.utils.generic.TensorType, typing.List\[typing.Tuple\]\] = None ) → `List[Dict]`

Parameters

- **outputs** (`YolosObjectDetectionOutput`) — Raw outputs of the model.
- **threshold** (`float`, _optional_) — Score threshold to keep object detection predictions.
- **target\_sizes** (`torch.Tensor` or `List[Tuple[int, int]]`, _optional_) — Tensor of shape `(batch_size, 2)` or list of tuples (`Tuple[int, int]`) containing the target size `(height, width)` of each image in the batch. If unset, predictions will not be resized.

A list of dictionaries, each dictionary containing the scores, labels and boxes for an image in the batch as predicted by the model.

Converts the raw output of [YolosForObjectDetection](/docs/transformers/v4.34.0/en/model_doc/yolos#transformers.YolosForObjectDetection) into final bounding boxes in (top\_left\_x, top\_left\_y, bottom\_right\_x, bottom\_right\_y) format. Only supports PyTorch.
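The `pad` method described above can also be called on its own, for example when batching images that have already been resized. The following is a minimal sketch, assuming dummy NumPy arrays stand in for real images; shapes are only illustrative.

```
import numpy as np
from transformers import YolosImageProcessor

image_processor = YolosImageProcessor()

# Two dummy channels-last images with different heights and widths.
images = [
    np.zeros((480, 640, 3), dtype=np.uint8),
    np.zeros((512, 512, 3), dtype=np.uint8),
]

# Pad both images with zeros on the bottom and right up to the largest height and
# width in the batch, and also return the pixel mask (1 = real pixel, 0 = padding).
encoding = image_processor.pad(images, return_pixel_mask=True, return_tensors="pt")

print(encoding["pixel_values"].shape)
print(encoding["pixel_mask"].shape)
```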
## YolosFeatureExtractor Preprocess an image or a batch of images. ( images: typing.List\[numpy.ndarray\]constant\_values: typing.Union\[float, typing.Iterable\[float\]\] = 0return\_pixel\_mask: bool = Falsereturn\_tensors: typing.Union\[str, transformers.utils.generic.TensorType, NoneType\] = Nonedata\_format: typing.Optional\[transformers.image\_utils.ChannelDimension\] = Noneinput\_data\_format: typing.Union\[str, transformers.image\_utils.ChannelDimension, NoneType\] = None ) Pads a batch of images to the bottom and right of the image with zeros to the size of largest height and width in the batch and optionally returns their corresponding pixel mask. ( outputsthreshold: float = 0.5target\_sizes: typing.Union\[transformers.utils.generic.TensorType, typing.List\[typing.Tuple\]\] = None ) → `List[Dict]` Parameters - **outputs** (`YolosObjectDetectionOutput`) — Raw outputs of the model. - **threshold** (`float`, _optional_) — Score threshold to keep object detection predictions. - **target\_sizes** (`torch.Tensor` or `List[Tuple[int, int]]`, _optional_) — Tensor of shape `(batch_size, 2)` or list of tuples (`Tuple[int, int]`) containing the target size `(height, width)` of each image in the batch. If unset, predictions will not be resized. A list of dictionaries, each dictionary containing the scores, labels and boxes for an image in the batch as predicted by the model. Converts the raw output of [YolosForObjectDetection](/docs/transformers/v4.34.0/en/model_doc/yolos#transformers.YolosForObjectDetection) into final bounding boxes in (top\_left\_x, top\_left\_y, bottom\_right\_x, bottom\_right\_y) format. Only supports PyTorch. ## YolosModel ### class transformers.YolosModel [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yolos/modeling_yolos.py#L597) ( config: YolosConfigadd\_pooling\_layer: bool = True ) Parameters - **config** ([YolosConfig](/docs/transformers/v4.34.0/en/model_doc/yolos#transformers.YolosConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. The bare YOLOS Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yolos/modeling_yolos.py#L625) ( pixel\_values: typing.Optional\[torch.Tensor\] = Nonehead\_mask: typing.Optional\[torch.Tensor\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.BaseModelOutputWithPooling](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutputWithPooling) or `tuple(torch.FloatTensor)` The [YolosModel](/docs/transformers/v4.34.0/en/model_doc/yolos#transformers.YolosModel) forward method, overrides the `__call__` special method. 
Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: ``` >>> from transformers import AutoImageProcessor, YolosModel >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> image_processor = AutoImageProcessor.from_pretrained("hustvl/yolos-small") >>> model = YolosModel.from_pretrained("hustvl/yolos-small") >>> inputs = image_processor(image, return_tensors="pt") >>> with torch.no_grad(): ... outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state >>> list(last_hidden_states.shape) [1, 3401, 384] ``` ## YolosForObjectDetection ### class transformers.YolosForObjectDetection [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yolos/modeling_yolos.py#L705) ( config: YolosConfig ) Parameters - **config** ([YolosConfig](/docs/transformers/v4.34.0/en/model_doc/yolos#transformers.YolosConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. YOLOS Model (consisting of a ViT encoder) with object detection heads on top, for tasks such as COCO detection. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yolos/modeling_yolos.py#L732) ( pixel\_values: FloatTensorlabels: typing.Optional\[typing.List\[typing.Dict\]\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = None ) → `transformers.models.yolos.modeling_yolos.YolosObjectDetectionOutput` or `tuple(torch.FloatTensor)` The [YolosForObjectDetection](/docs/transformers/v4.34.0/en/model_doc/yolos#transformers.YolosForObjectDetection) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: ``` >>> from transformers import AutoImageProcessor, AutoModelForObjectDetection >>> import torch >>> from PIL import Image >>> import requests >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> image_processor = AutoImageProcessor.from_pretrained("hustvl/yolos-tiny") >>> model = AutoModelForObjectDetection.from_pretrained("hustvl/yolos-tiny") >>> inputs = image_processor(images=image, return_tensors="pt") >>> outputs = model(**inputs) >>> >>> target_sizes = torch.tensor([image.size[::-1]]) >>> results = image_processor.post_process_object_detection(outputs, threshold=0.9, target_sizes=target_sizes)[ ... 0 ... 
] >>> for score, label, box in zip(results["scores"], results["labels"], results["boxes"]): ... box = [round(i, 2) for i in box.tolist()] ... print( ... f"Detected {model.config.id2label[label.item()]} with confidence " ... f"{round(score.item(), 3)} at location {box}" ... ) Detected remote with confidence 0.994 at location [46.96, 72.61, 181.02, 119.73] Detected remote with confidence 0.975 at location [340.66, 79.19, 372.59, 192.65] Detected cat with confidence 0.984 at location [12.27, 54.25, 319.42, 470.99] Detected remote with confidence 0.922 at location [41.66, 71.96, 178.7, 120.33] Detected cat with confidence 0.914 at location [342.34, 21.48, 638.64, 372.46] ```
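The same post-processing also extends to batches. The following sketch reuses the COCO image twice purely as a stand-in for two different images, and passes one `(height, width)` tuple per image so that each set of boxes is rescaled back to its original resolution:

```
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, AutoModelForObjectDetection

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> images = [Image.open(requests.get(url, stream=True).raw) for _ in range(2)]

>>> image_processor = AutoImageProcessor.from_pretrained("hustvl/yolos-tiny")
>>> model = AutoModelForObjectDetection.from_pretrained("hustvl/yolos-tiny")

>>> # images of different sizes are padded to a common shape by the image processor
>>> inputs = image_processor(images=images, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # one (height, width) tuple per image, so boxes are rescaled to each original resolution
>>> target_sizes = [image.size[::-1] for image in images]
>>> results = image_processor.post_process_object_detection(
...     outputs, threshold=0.9, target_sizes=target_sizes
... )
>>> len(results)  # one dict of "scores", "labels" and "boxes" per image
2
```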
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/
model_doc/wavlm&quot;},{&quot;title&quot;:&quot;Whisper&quot;,&quot;id&quot;:&quot;model_doc/whisper&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/whisper&quot;},{&quot;title&quot;:&quot;XLS-R&quot;,&quot;id&quot;:&quot;model_doc/xls_r&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xls_r&quot;},{&quot;title&quot;:&quot;XLSR-Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/xlsr_wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2&quot;}]},{&quot;title&quot;:&quot;Multimodal models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALIGN&quot;,&quot;id&quot;:&quot;model_doc/align&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/align&quot;},{&quot;title&quot;:&quot;AltCLIP&quot;,&quot;id&quot;:&quot;model_doc/altclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/altclip&quot;},{&quot;title&quot;:&quot;BLIP&quot;,&quot;id&quot;:&quot;model_doc/blip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip&quot;},{&quot;title&quot;:&quot;BLIP-2&quot;,&quot;id&quot;:&quot;model_doc/blip-2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip-2&quot;},{&quot;title&quot;:&quot;BridgeTower&quot;,&quot;id&quot;:&quot;model_doc/bridgetower&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bridgetower&quot;},{&quot;title&quot;:&quot;BROS&quot;,&quot;id&quot;:&quot;model_doc/bros&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bros&quot;},{&quot;title&quot;:&quot;Chinese-CLIP&quot;,&quot;id&quot;:&quot;model_doc/chinese_clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/chinese_clip&quot;},{&quot;title&quot;:&quot;CLIP&quot;,&quot;id&quot;:&quot;model_doc/clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clip&quot;},{&quot;title&quot;:&quot;CLIPSeg&quot;,&quot;id&quot;:&quot;model_doc/clipseg&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clipseg&quot;},{&quot;title&quot;:&quot;Data2Vec&quot;,&quot;id&quot;:&quot;model_doc/data2vec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/data2vec&quot;},{&quot;title&quot;:&quot;DePlot&quot;,&quot;id&quot;:&quot;model_doc/deplot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deplot&quot;},{&quot;title&quot;:&quot;Donut&quot;,&quot;id&quot;:&quot;model_doc/donut&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/donut&quot;},{&quot;title&quot;:&quot;FLAVA&quot;,&quot;id&quot;:&quot;model_doc/flava&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flava&quot;},{&quot;title&quot;:&quot;GIT&quot;,&quot;id&quot;:&quot;model_doc/git&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/git&quot;},{&quot;title&quot;:&quot;GroupViT&quot;,&quot;id&quot;:&quot;model_doc/groupvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/groupvit&quot;},{&quot;title&quot;:&quot;IDEFICS&quot;,&quot;id&quot;:&quot;model_doc/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/idefics&quot;},{&quot;title&quot;:&quot;InstructBLIP&quot;,&quot;id&quot;:&quot;model_doc/instructblip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/instructblip&quot;},{&quot;title&quot;:&quot;LayoutLM&quot;,&quot;id&quot;:&quot;model_doc/layoutlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlm&quot;},{&quot;title&quot;:&quot;LayoutLMV2&quot;,&quot;i
d&quot;:&quot;model_doc/layoutlmv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv2&quot;},{&quot;title&quot;:&quot;LayoutLMV3&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv3&quot;},{&quot;title&quot;:&quot;LayoutXLM&quot;,&quot;id&quot;:&quot;model_doc/layoutxlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutxlm&quot;},{&quot;title&quot;:&quot;LiLT&quot;,&quot;id&quot;:&quot;model_doc/lilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lilt&quot;},{&quot;title&quot;:&quot;LXMERT&quot;,&quot;id&quot;:&quot;model_doc/lxmert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lxmert&quot;},{&quot;title&quot;:&quot;MatCha&quot;,&quot;id&quot;:&quot;model_doc/matcha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/matcha&quot;},{&quot;title&quot;:&quot;MGP-STR&quot;,&quot;id&quot;:&quot;model_doc/mgp-str&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mgp-str&quot;},{&quot;title&quot;:&quot;Nougat&quot;,&quot;id&quot;:&quot;model_doc/nougat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nougat&quot;},{&quot;title&quot;:&quot;OneFormer&quot;,&quot;id&quot;:&quot;model_doc/oneformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/oneformer&quot;},{&quot;title&quot;:&quot;OWL-ViT&quot;,&quot;id&quot;:&quot;model_doc/owlvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/owlvit&quot;},{&quot;title&quot;:&quot;Perceiver&quot;,&quot;id&quot;:&quot;model_doc/perceiver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/perceiver&quot;},{&quot;title&quot;:&quot;Pix2Struct&quot;,&quot;id&quot;:&quot;model_doc/pix2struct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pix2struct&quot;},{&quot;title&quot;:&quot;Segment Anything&quot;,&quot;id&quot;:&quot;model_doc/sam&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sam&quot;},{&quot;title&quot;:&quot;Speech Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/speech-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder&quot;},{&quot;title&quot;:&quot;TAPAS&quot;,&quot;id&quot;:&quot;model_doc/tapas&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapas&quot;},{&quot;title&quot;:&quot;TrOCR&quot;,&quot;id&quot;:&quot;model_doc/trocr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trocr&quot;},{&quot;title&quot;:&quot;TVLT&quot;,&quot;id&quot;:&quot;model_doc/tvlt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tvlt&quot;},{&quot;title&quot;:&quot;ViLT&quot;,&quot;id&quot;:&quot;model_doc/vilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vilt&quot;},{&quot;title&quot;:&quot;Vision Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/vision-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder&quot;},{&quot;title&quot;:&quot;Vision Text Dual 
Encoder&quot;,&quot;id&quot;:&quot;model_doc/vision-text-dual-encoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder&quot;},{&quot;title&quot;:&quot;VisualBERT&quot;,&quot;id&quot;:&quot;model_doc/visual_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/visual_bert&quot;},{&quot;title&quot;:&quot;X-CLIP&quot;,&quot;id&quot;:&quot;model_doc/xclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xclip&quot;}]},{&quot;title&quot;:&quot;Reinforcement learning models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Decision Transformer&quot;,&quot;id&quot;:&quot;model_doc/decision_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/decision_transformer&quot;},{&quot;title&quot;:&quot;Trajectory Transformer&quot;,&quot;id&quot;:&quot;model_doc/trajectory_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer&quot;}]},{&quot;title&quot;:&quot;Time series models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Autoformer&quot;,&quot;id&quot;:&quot;model_doc/autoformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/autoformer&quot;},{&quot;title&quot;:&quot;Informer&quot;,&quot;id&quot;:&quot;model_doc/informer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/informer&quot;},{&quot;title&quot;:&quot;Time Series Transformer&quot;,&quot;id&quot;:&quot;model_doc/time_series_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/time_series_transformer&quot;}]},{&quot;title&quot;:&quot;Graph models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Graphormer&quot;,&quot;id&quot;:&quot;model_doc/graphormer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/graphormer&quot;}]}]},{&quot;title&quot;:&quot;Internal Helpers&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Custom Layers and Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/modeling_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/modeling_utils&quot;},{&quot;title&quot;:&quot;Utilities for pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/pipelines_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/pipelines_utils&quot;},{&quot;title&quot;:&quot;Utilities for Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/tokenization_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/tokenization_utils&quot;},{&quot;title&quot;:&quot;Utilities for Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/trainer_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/trainer_utils&quot;},{&quot;title&quot;:&quot;Utilities for Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/generation_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/generation_utils&quot;},{&quot;title&quot;:&quot;Utilities for Image Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/image_processing_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/image_processing_utils&quot;},{&quot;title&quot;:&quot;Utilities for Audio 
processing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/audio_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/audio_utils&quot;},{&quot;title&quot;:&quot;General Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/file_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/file_utils&quot;},{&quot;title&quot;:&quot;Utilities for Time Series&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/time_series_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/time_series_utils&quot;}]}]}],&quot;chapterId&quot;:&quot;model_doc/yolos&quot;,&quot;docType&quot;:&quot;docs&quot;,&quot;isLoggedIn&quot;:false,&quot;lang&quot;:&quot;en&quot;,&quot;langs&quot;:[&quot;de&quot;,&quot;en&quot;,&quot;es&quot;,&quot;fr&quot;,&quot;it&quot;,&quot;ko&quot;,&quot;pt&quot;,&quot;zh&quot;],&quot;library&quot;:&quot;transformers&quot;,&quot;theme&quot;:&quot;light&quot;,&quot;version&quot;:&quot;v4.34.0&quot;,&quot;versions&quot;:[{&quot;version&quot;:&quot;main&quot;},{&quot;version&quot;:&quot;v4.34.0&quot;},{&quot;version&quot;:&quot;v4.33.3&quot;},{&quot;version&quot;:&quot;v4.33.2&quot;},{&quot;version&quot;:&quot;v4.33.0&quot;},{&quot;version&quot;:&quot;v4.32.1&quot;},{&quot;version&quot;:&quot;v4.32.0&quot;},{&quot;version&quot;:&quot;v4.31.0&quot;},{&quot;version&quot;:&quot;v4.30.0&quot;},{&quot;version&quot;:&quot;v4.29.1&quot;},{&quot;version&quot;:&quot;v4.29.0&quot;},{&quot;version&quot;:&quot;v4.28.1&quot;},{&quot;version&quot;:&quot;v4.28.0&quot;},{&quot;version&quot;:&quot;v4.27.2&quot;},{&quot;version&quot;:&quot;v4.27.1&quot;},{&quot;version&quot;:&quot;v4.27.0&quot;},{&quot;version&quot;:&quot;v4.26.1&quot;},{&quot;version&quot;:&quot;v4.26.0&quot;},{&quot;version&quot;:&quot;v4.25.1&quot;},{&quot;version&quot;:&quot;v4.24.0&quot;},{&quot;version&quot;:&quot;v4.23.1&quot;},{&quot;version&quot;:&quot;v4.23.0&quot;},{&quot;version&quot;:&quot;v4.22.2&quot;},{&quot;version&quot;:&quot;v4.22.1&quot;},{&quot;version&quot;:&quot;v4.22.0&quot;},{&quot;version&quot;:&quot;v4.21.3&quot;},{&quot;version&quot;:&quot;v4.21.2&quot;},{&quot;version&quot;:&quot;v4.21.1&quot;},{&quot;version&quot;:&quot;v4.21.0&quot;},{&quot;version&quot;:&quot;v4.20.1&quot;},{&quot;version&quot;:&quot;v4.20.0&quot;},{&quot;version&quot;:&quot;v4.19.4&quot;},{&quot;version&quot;:&quot;v4.19.3&quot;},{&quot;version&quot;:&quot;v4.19.2&quot;},{&quot;version&quot;:&quot;v4.19.0&quot;},{&quot;version&quot;:&quot;v4.18.0&quot;},{&quot;version&quot;:&quot;v4.17.0&quot;},{&quot;version&quot;:&quot;v4.16.2&quot;},{&quot;version&quot;:&quot;v4.16.1&quot;},{&quot;version&quot;:&quot;v4.16.0&quot;},{&quot;version&quot;:&quot;v4.15.0&quot;},{&quot;version&quot;:&quot;v4.14.1&quot;},{&quot;version&quot;:&quot;v4.13.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.5&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.4&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.10.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1
0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.10.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.0.0&quot;},{&quot;version&quot;:&quot;doc-b
uilder-html&quot;}],&quot;title&quot;:&quot;YOLOS&quot;}" data-target="SideMenu"> <div class="z-2 w-full flex-none lg:block lg:h-screen lg:w-[270px] 2xl:w-[300px] false"><div class="shadow-alternate flex h-16 w-full items-center rounded-b-xl border-b bg-white text-lg leading-tight lg:hidden"><div class="flex flex-1 cursor-pointer flex-col justify-center self-stretch pl-6"><p class="text-sm text-gray-400 first-letter:capitalize">Transformers documentation</p> <div class="flex items-center"><p class="font-semibold">YOLOS</p> <svg class="text-xl false" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></div></div> <button class="hover:shadow-alternate group ml-auto mr-6 inline-flex flex-none cursor-pointer rounded-xl border p-2"><svg class="text-gray-500 group-hover:text-gray-700" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg></button></div> <div class="hidden h-32 flex-col justify-between border-r border-b bg-white bg-gradient-to-r p-4 lg:flex from-orange-50 to-white dark:from-gray-900 dark:to-gray-950"><div class="relative "><button class=" " type="button"><h1 class="flex items-center text-lg font-bold leading-tight first-letter:capitalize"><div class="mr-1.5 h-1.5 w-1.5 rounded-full bg-orange-500 flex-none"></div> Transformers <span><svg class="opacity-70 " xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></span></h1> </button> </div> <button class="shadow-alternate flex w-full items-center rounded-full border bg-white px-2 py-1 text-left text-sm text-gray-400 ring-indigo-200 hover:bg-indigo-50 hover:ring-2 dark:border-gray-700 dark:ring-yellow-600 dark:hover:bg-gray-900 dark:hover:text-yellow-500"><svg class="flex-none mr-1.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg> <div>Search documentation</div> <span class="ml-auto rounded border border-gray-200 bg-gray-100 px-0.5 text-xs dark:border-gray-800 dark:bg-gray-800"><kbd class="font-sans">⌘K</kbd></span></button> <div class="flex items-center"><select class="form-input mr-1 !mt-0 !w-20 rounded !border border-gray-200 p-1 text-xs uppercase dark:!text-gray-400"><option value="0">main</option><option value="1">v4.34.0</option><option value="2">v4.33.3</option><option value="3">v4.32.1</option><option value="4">v4.31.0</option><option value="5">v4.30.0</option><option value="6">v4.29.1</option><option value="7">v4.28.1</option><option value="8">v4.27.2</option><option value="9">v4.26.1</option><option value="10">v4.25.1</option><option value="11">v4.24.0</option><option 
value="12">v4.23.1</option><option value="13">v4.22.2</option><option value="14">v4.21.3</option><option value="15">v4.20.1</option><option value="16">v4.19.4</option><option value="17">v4.18.0</option><option value="18">v4.17.0</option><option value="19">v4.16.2</option><option value="20">v4.15.0</option><option value="21">v4.14.1</option><option value="22">v4.13.0</option><option value="23">v4.12.5</option><option value="24">v4.11.3</option><option value="25">v4.10.1</option><option value="26">v4.9.2</option><option value="27">v4.8.2</option><option value="28">v4.7.0</option><option value="29">v4.6.0</option><option value="30">v4.5.1</option><option value="31">v4.4.2</option><option value="32">v4.3.3</option><option value="33">v4.2.2</option><option value="34">v4.1.1</option><option value="35">v4.0.1</option><option value="36">v3.5.1</option><option value="37">v3.4.0</option><option value="38">v3.3.1</option><option value="39">v3.2.0</option><option value="40">v3.1.0</option><option value="41">v3.0.2</option><option value="42">v2.11.0</option><option value="43">v2.10.0</option><option value="44">v2.9.1</option><option value="45">v2.8.0</option><option value="46">v2.7.0</option><option value="47">v2.6.0</option><option value="48">v2.5.1</option><option value="49">v2.4.1</option><option value="50">v2.3.0</option><option value="51">v2.2.2</option><option value="52">v2.1.1</option><option value="53">v2.0.0</option><option value="54">v1.2.0</option><option value="55">v1.1.0</option><option value="56">v1.0.0</option><option value="57">doc-builder-html</option></select> <select class="form-input mr-1 rounded border-gray-200 p-1 text-xs dark:!text-gray-400 !w-12 !mt-0 !border"><option value="de">DE</option><option value="en">EN</option><option value="es">ES</option><option value="fr">FR</option><option value="it">IT</option><option value="ko">KO</option><option value="pt">PT</option><option value="zh">ZH</option></select> <div class="relative inline-block"><button class="rounded-full border border-gray-100 py-1 pl-2 pr-0.5 flex items-center text-sm text-gray-500 bg-white hover:bg-yellow-50 hover:border-yellow-200 dark:hover:bg-gray-800 dark:hover:border-gray-950 " type="button"><svg class="mr-1.5 text-yellow-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24" fill="currentColor"><path d="M6.05 4.14l-.39-.39a.993.993 0 0 0-1.4 0l-.01.01a.984.984 0 0 0 0 1.4l.39.39c.39.39 1.01.39 1.4 0l.01-.01a.984.984 0 0 0 0-1.4zM3.01 10.5H1.99c-.55 0-.99.44-.99.99v.01c0 .55.44.99.99.99H3c.56.01 1-.43 1-.98v-.01c0-.56-.44-1-.99-1zm9-9.95H12c-.56 0-1 .44-1 .99v.96c0 .55.44.99.99.99H12c.56.01 1-.43 1-.98v-.97c0-.55-.44-.99-.99-.99zm7.74 3.21c-.39-.39-1.02-.39-1.41-.01l-.39.39a.984.984 0 0 0 0 1.4l.01.01c.39.39 1.02.39 1.4 0l.39-.39a.984.984 0 0 0 0-1.4zm-1.81 15.1l.39.39a.996.996 0 1 0 1.41-1.41l-.39-.39a.993.993 0 0 0-1.4 0c-.4.4-.4 1.02-.01 1.41zM20 11.49v.01c0 .55.44.99.99.99H22c.55 0 .99-.44.99-.99v-.01c0-.55-.44-.99-.99-.99h-1.01c-.55 0-.99.44-.99.99zM12 5.5c-3.31 0-6 2.69-6 6s2.69 6 6 6s6-2.69 6-6s-2.69-6-6-6zm-.01 16.95H12c.55 0 .99-.44.99-.99v-.96c0-.55-.44-.99-.99-.99h-.01c-.55 0-.99.44-.99.99v.96c0 .55.44.99.99.99zm-7.74-3.21c.39.39 1.02.39 1.41 0l.39-.39a.993.993 0 0 0 0-1.4l-.01-.01a.996.996 0 0 0-1.41 0l-.39.39c-.38.4-.38 1.02.01 1.41z"></path></svg> </button> </div> <a href="https://github.com/huggingface/transformers" class="group ml-auto 
text-xs text-gray-500 hover:text-gray-700 hover:underline dark:hover:text-gray-300"><svg class="inline-block text-gray-500 group-hover:text-gray-700 dark:group-hover:text-gray-300 mr-1.5 -mt-1 w-4 h-4" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1.03em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 250"><path d="M128.001 0C57.317 0 0 57.307 0 128.001c0 56.554 36.676 104.535 87.535 121.46c6.397 1.185 8.746-2.777 8.746-6.158c0-3.052-.12-13.135-.174-23.83c-35.61 7.742-43.124-15.103-43.124-15.103c-5.823-14.795-14.213-18.73-14.213-18.73c-11.613-7.944.876-7.78.876-7.78c12.853.902 19.621 13.19 19.621 13.19c11.417 19.568 29.945 13.911 37.249 10.64c1.149-8.272 4.466-13.92 8.127-17.116c-28.431-3.236-58.318-14.212-58.318-63.258c0-13.975 5-25.394 13.188-34.358c-1.329-3.224-5.71-16.242 1.24-33.874c0 0 10.749-3.44 35.21 13.121c10.21-2.836 21.16-4.258 32.038-4.307c10.878.049 21.837 1.47 32.066 4.307c24.431-16.56 35.165-13.12 35.165-13.12c6.967 17.63 2.584 30.65 1.255 33.873c8.207 8.964 13.173 20.383 13.173 34.358c0 49.163-29.944 59.988-58.447 63.157c4.591 3.972 8.682 11.762 8.682 23.704c0 17.126-.148 30.91-.148 35.126c0 3.407 2.304 7.398 8.792 6.14C219.37 232.5 256 184.537 256 128.002C256 57.307 198.691 0 128.001 0zm-80.06 182.34c-.282.636-1.283.827-2.194.39c-.929-.417-1.45-1.284-1.15-1.922c.276-.655 1.279-.838 2.205-.399c.93.418 1.46 1.293 1.139 1.931zm6.296 5.618c-.61.566-1.804.303-2.614-.591c-.837-.892-.994-2.086-.375-2.66c.63-.566 1.787-.301 2.626.591c.838.903 1 2.088.363 2.66zm4.32 7.188c-.785.545-2.067.034-2.86-1.104c-.784-1.138-.784-2.503.017-3.05c.795-.547 2.058-.055 2.861 1.075c.782 1.157.782 2.522-.019 3.08zm7.304 8.325c-.701.774-2.196.566-3.29-.49c-1.119-1.032-1.43-2.496-.726-3.27c.71-.776 2.213-.558 3.315.49c1.11 1.03 1.45 2.505.701 3.27zm9.442 2.81c-.31 1.003-1.75 1.459-3.199 1.033c-1.448-.439-2.395-1.613-2.103-2.626c.301-1.01 1.747-1.484 3.207-1.028c1.446.436 2.396 1.602 2.095 2.622zm10.744 1.193c.036 1.055-1.193 1.93-2.715 1.95c-1.53.034-2.769-.82-2.786-1.86c0-1.065 1.202-1.932 2.733-1.958c1.522-.03 2.768.818 2.768 1.868zm10.555-.405c.182 1.03-.875 2.088-2.387 2.37c-1.485.271-2.861-.365-3.05-1.386c-.184-1.056.893-2.114 2.376-2.387c1.514-.263 2.868.356 3.061 1.403z" fill="currentColor"></path></svg> 112,792</a></div></div> <nav class="top-32 hidden lg:flex absolute bottom-0 left-0 w-full flex-col overflow-y-auto border-r px-4 pt-3 pb-16 text-[0.95rem] lg:w-[270px] 2xl:w-[300px]"> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Get started</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/index">🤗 Transformers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/quicktour">Quick tour </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black 
dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/installation">Installation </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Tutorials</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pipeline_tutorial">Run inference with pipelines </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/autoclass_tutorial">Write portable code with AutoClass </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/preprocessing">Preprocess data </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/training">Fine-tune a pretrained model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/run_scripts">Train with a script </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/accelerate">Set up distributed training with 🤗 Accelerate </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/peft">Load and train adapters with 🤗 PEFT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/model_sharing">Share your model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/transformers_agents">Agents </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/llm_tutorial">Generation with LLMs </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Task Guides</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div 
class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Natural Language Processing</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Computer Vision</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Generation</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Prompting</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Developer guides</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/fast_tokenizers">Use fast tokenizers from 🤗 Tokenizers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/multilingual">Run inference with multilingual models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/create_a_model">Use model-specific APIs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/custom_models">Share a custom model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 
last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/chat_templating">Templates for chat models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/sagemaker">Run training on Amazon SageMaker </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/serialization">Export to ONNX </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tflite">Export to TFLite </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/torchscript">Export to TorchScript </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/benchmarks">Benchmarks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/notebooks">Notebooks with examples </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/community">Community resources </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/custom_tools">Custom Tools and Prompts </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/troubleshooting">Troubleshoot </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Performance and scalability</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/performance">Overview </a><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Efficient training techniques</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 
ml-4" href="/docs/transformers/v4.34.0/en/perf_train_gpu_one">Methods and tools for efficient training on a single GPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_gpu_many">Multiple GPUs and parallelism </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_cpu">Efficient training on CPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_cpu_many">Distributed CPU training </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_tpu">Training on TPUs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_tpu_tf">Training on TPU with TensorFlow </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_special">Training on Specialized Hardware </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_hardware">Custom hardware for training </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/hpo_train">Hyperparameter Search using Trainer API </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Optimizing inference</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_cpu">Inference on CPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_gpu_one">Inference on one GPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_gpu_many">Inference on many GPUs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_special">Inference on Specialized Hardware </a> 
</div><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/big_models">Instantiating a big model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/debugging">Troubleshooting </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tf_xla">XLA Integration for TensorFlow Models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/perf_torch_compile">Optimize inference using `torch.compile()` </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Contribute</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/contributing">How to contribute to transformers? </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/add_new_model">How to add a model to 🤗 Transformers? </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/add_tensorflow_model">How to convert a 🤗 Transformers model to TensorFlow? </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/add_new_pipeline">How to add a pipeline to 🤗 Transformers? 
</a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/testing">Testing </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pr_checks">Checks on a Pull Request </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Conceptual guides</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/philosophy">Philosophy </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/glossary">Glossary </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/task_summary">What 🤗 Transformers can do </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tasks_explained">How 🤗 Transformers solve tasks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/model_summary">The Transformer model family </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tokenizer_summary">Summary of the tokenizers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/attention">Attention mechanisms </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pad_truncation">Padding and truncation </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/bertology">BERTology </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/perplexity">Perplexity of fixed-length models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pipeline_webserver">Pipelines for webserver inference </a><a 
data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/model_memory_anatomy">Model training anatomy </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>API</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Main Classes</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/agent">Agents and Tools </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/model_doc/auto">Auto Classes </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/callback">Callbacks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/configuration">Configuration </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/data_collator">Data Collator </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/keras_callbacks">Keras callbacks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/logging">Logging </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/model">Models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/text_generation">Text Generation </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/onnx">ONNX </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 
first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/optimizer_schedules">Optimization </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/output">Model outputs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/pipelines">Pipelines </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/processors">Processors </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/quantization">Quantization </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/tokenizer">Tokenizer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/trainer">Trainer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/deepspeed">DeepSpeed Integration </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/feature_extractor">Feature Extractor </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/image_processor">Image Processor </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Models</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Text models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Vision models</span> </span></span></div></div> <div 
class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/beit">BEiT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bit">BiT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/conditional_detr">Conditional DETR </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/convnext">ConvNeXT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/convnextv2">ConvNeXTV2 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/cvt">CvT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/deformable_detr">Deformable DETR </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/deit">DeiT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/deta">DETA </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/detr">DETR </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/dinat">DiNAT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/dinov2">DINO V2 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/dit">DiT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/dpt">DPT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/efficientformer">EfficientFormer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 
text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/efficientnet">EfficientNet </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/focalnet">FocalNet </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/glpn">GLPN </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/imagegpt">ImageGPT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/levit">LeViT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mask2former">Mask2Former </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/maskformer">MaskFormer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1">MobileNetV1 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2">MobileNetV2 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mobilevit">MobileViT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mobilevitv2">MobileViTV2 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/nat">NAT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/poolformer">PoolFormer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/pvt">Pyramid Vision Transformer (PVT) </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/regnet">RegNet </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 
last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/resnet">ResNet </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/segformer">SegFormer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/swiftformer">SwiftFormer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/swin">Swin Transformer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/swinv2">Swin Transformer V2 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/swin2sr">Swin2SR </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/table-transformer">Table Transformer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/timesformer">TimeSformer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/upernet">UperNet </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/van">VAN </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/videomae">VideoMAE </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/vit">Vision Transformer (ViT) </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/vit_hybrid">ViT Hybrid </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/vitdet">ViTDet </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/vit_mae">ViTMAE </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 
hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/vitmatte">ViTMatte </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/vit_msn">ViTMSN </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/vivit">ViViT </a><a data-sveltekit-reload="" class="rounded-xl bg-gradient-to-br from-black to-gray-900 py-1 pr-2 pl-2 text-white first:mt-1 last:mb-4 dark:from-gray-800 dark:to-gray-900 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/yolos">YOLOS </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Reinforcement learning models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Time series models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Graph models</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Internal Helpers</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/modeling_utils">Custom Layers and Utilities </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" 
href="/docs/transformers/v4.34.0/en/internal/pipelines_utils">Utilities for pipelines </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/tokenization_utils">Utilities for Tokenizers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/trainer_utils">Utilities for Trainer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/generation_utils">Utilities for Generation </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/image_processing_utils">Utilities for Image Processors </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/audio_utils">Utilities for Audio processing </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/file_utils">General Utilities </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/time_series_utils">Utilities for Time Series </a> </div> </div></nav></div></div></div> <div class="z-1 min-w-0 flex-1"> <div class="px-6 pt-6 md:px-12 md:pt-16 md:pb-16"><div class="max-w-4xl mx-auto mb-10"><div class="relative overflow-hidden rounded-xl bg-gradient-to-br from-orange-300/10 py-5 px-4 ring-1 ring-orange-100/70 md:px-6 md:py-8"><img alt="Hugging Face's logo" class="absolute -right-6 -bottom-6 w-28 -rotate-45 md:hidden" src="/front/assets/huggingface_logo-noborder.svg"> <div class="mb-2 text-2xl font-bold dark:text-gray-200 md:mb-0">Join the Hugging Face community</div> <p class="mb-4 text-lg text-gray-400 dark:text-gray-300 md:mb-8">and get access to the augmented documentation experience </p> <div class="mb-8 hidden space-y-4 md:block xl:flex xl:space-y-0 xl:space-x-6"><div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-indigo-100 to-indigo-100/20 dark:to-indigo-100"><svg class="text-indigo-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 
3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Collaborate on models, datasets and Spaces </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-orange-100 to-orange-100/20 dark:to-orange-50"><svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" class="text-xl text-yellow-400" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M11 15H6l7-14v8h5l-7 14v-8z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Faster examples with accelerated inference </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-gray-500/10 to-gray-500/5"><svg class="text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M14.9804 3C14.9217 3.0002 14.8631 3.00555 14.8054 3.016C11.622 3.58252 8.76073 5.30669 6.77248 7.85653C4.78422 10.4064 3.80955 13.6016 4.03612 16.8271C4.26268 20.0525 5.67447 23.0801 7.99967 25.327C10.3249 27.5738 13.3991 28.8811 16.6304 28.997C16.7944 29.003 16.9584 28.997 17.1204 28.997C19.2193 28.9984 21.2877 28.4943 23.1507 27.5274C25.0137 26.5605 26.6164 25.1592 27.8234 23.442C27.9212 23.294 27.9783 23.1229 27.9889 22.9458C27.9995 22.7687 27.9633 22.592 27.884 22.4333C27.8046 22.2747 27.6848 22.1397 27.5367 22.0421C27.3887 21.9444 27.2175 21.8875 27.0404 21.877C25.0426 21.7017 23.112 21.0693 21.3976 20.0288C19.6832 18.9884 18.231 17.5676 17.1533 15.8764C16.0756 14.1852 15.4011 12.2688 15.1822 10.2754C14.9632 8.28193 15.2055 6.26484 15.8904 4.38C15.9486 4.22913 15.97 4.06652 15.9527 3.90572C15.9354 3.74492 15.8799 3.59059 15.7909 3.45557C15.7019 3.32055 15.5819 3.20877 15.4409 3.12952C15.2999 3.05028 15.142 3.00587 14.9804 3Z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Switch between documentation themes </div></div></div> <div class="flex items-center space-x-2.5"><a href="/join"><button class="rounded-lg bg-white bg-gradient-to-br from-gray-100/20 to-gray-200/60 py-1.5 px-5 font-semibold text-gray-700 shadow-sm ring-1 ring-gray-300/60 hover:to-gray-100/70 hover:ring-gray-300/30 active:shadow-inner">Sign Up</button></a> <p class="text-gray-500 dark:text-gray-300">to get started</p></div></div></div> <div class="prose-doc prose relative mx-auto max-w-4xl break-words"> <p></p> <h1 class="relative group"><a id="yolos" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#yolos"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 
# YOLOS

## Overview

The YOLOS model was proposed in [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu. YOLOS proposes to just leverage the plain [Vision Transformer (ViT)](vit) for object detection, inspired by DETR. It turns out that a base-sized encoder-only Transformer can also achieve 42 AP on COCO, similar to DETR and much more complex frameworks such as Faster R-CNN.

The abstract from the paper is the following:

_Can Transformer perform 2D object- and region-level recognition from a pure sequence-to-sequence perspective with minimal knowledge about the 2D spatial structure? To answer this question, we present You Only Look at One Sequence (YOLOS), a series of object detection models based on the vanilla Vision Transformer with the fewest possible modifications, region priors, as well as inductive biases of the target task. We find that YOLOS pre-trained on the mid-sized ImageNet-1k dataset only can already achieve quite competitive performance on the challenging COCO object detection benchmark, e.g., YOLOS-Base directly adopted from BERT-Base architecture can obtain 42.0 box AP on COCO val. We also discuss the impacts as well as limitations of current pre-train schemes and model scaling strategies for Transformer in vision through YOLOS._

Tips:

- One can use [YolosImageProcessor](/docs/transformers/v4.34.0/en/model_doc/yolos#transformers.YolosImageProcessor) for preparing images (and optional targets) for the model. Contrary to [DETR](detr), YOLOS doesn't require a `pixel_mask` to be created, as shown in the example below.
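
As an illustration of the tip above, here is a minimal inference sketch. It assumes the `hustvl/yolos-tiny` checkpoint and the example COCO image commonly used in the Transformers documentation; any other YOLOS checkpoint or image should work the same way. Note that the image processor only resizes and normalizes the image, and no `pixel_mask` is passed to the model.

```python
import torch
import requests
from PIL import Image

from transformers import YolosImageProcessor, YolosForObjectDetection

# Assumed checkpoint: the lightweight "hustvl/yolos-tiny" fine-tuned on COCO detection
image_processor = YolosImageProcessor.from_pretrained("hustvl/yolos-tiny")
model = YolosForObjectDetection.from_pretrained("hustvl/yolos-tiny")

# Example image (two cats on a couch, used throughout the docs)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Prepare the image: resizing + normalization only, no pixel_mask is created
inputs = image_processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Turn the raw logits and boxes into detections above a confidence threshold
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = image_processor.post_process_object_detection(
    outputs, threshold=0.9, target_sizes=target_sizes
)[0]

for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    label_name = model.config.id2label[label.item()]
    print(f"Detected {label_name} with confidence {round(score.item(), 3)} at {box.tolist()}")
```
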
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/yolos_architecture.png" alt="drawing" width="600"/>

<small>YOLOS architecture. Taken from the <a href="https://arxiv.org/abs/2106.00666">original paper</a>.</small>

This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/hustvl/YOLOS).

## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with YOLOS.

**Object Detection**

- All example notebooks illustrating inference + fine-tuning [YolosForObjectDetection](/docs/transformers/v4.34.0/en/model_doc/yolos#transformers.YolosForObjectDetection) on a custom dataset can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/YOLOS).
- See also: [Object detection task guide](../tasks/object_detection)

If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it!
The resource should ideally demonstrate something new instead of duplicating an existing resource.

## YolosConfig

**class transformers.YolosConfig** ([source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yolos/configuration_yolos.py#L35))

`(hidden_size = 768, num_hidden_layers = 12, num_attention_heads = 12, intermediate_size = 3072, hidden_act = 'gelu', hidden_dropout_prob = 0.0, attention_probs_dropout_prob = 0.0, initializer_range = 0.02, layer_norm_eps = 1e-12, image_size = [512, 864], patch_size = 16, num_channels = 3, qkv_bias = True, num_detection_tokens = 100, use_mid_position_embeddings = True, auxiliary_loss = False, class_cost = 1, bbox_cost = 5, giou_cost = 2, bbox_loss_coefficient = 5, giou_loss_coefficient = 2, eos_coefficient = 0.1, **kwargs)`
dark:hover:bg-white dark:hover:text-black">auxiliary_loss<span class="opacity-60"> = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">class_cost<span class="opacity-60"> = 1</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">bbox_cost<span class="opacity-60"> = 5</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">giou_cost<span class="opacity-60"> = 2</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">bbox_loss_coefficient<span class="opacity-60"> = 5</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">giou_loss_coefficient<span class="opacity-60"> = 2</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">eos_coefficient<span class="opacity-60"> = 0.1</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 22 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosConfig.hidden_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosConfig.hidden_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>hidden_size</strong> (<code>int</code>, <em>optional</em>, defaults to 768) — Dimensionality of the encoder layers and the pooler layer.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a 
id="transformers.YolosConfig.num_hidden_layers" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosConfig.num_hidden_layers"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>num_hidden_layers</strong> (<code>int</code>, <em>optional</em>, defaults to 12) — Number of hidden layers in the Transformer encoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosConfig.num_attention_heads" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosConfig.num_attention_heads"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>num_attention_heads</strong> (<code>int</code>, <em>optional</em>, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosConfig.intermediate_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosConfig.intermediate_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 
56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>intermediate_size</strong> (<code>int</code>, <em>optional</em>, defaults to 3072) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosConfig.hidden_act" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosConfig.hidden_act"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>hidden_act</strong> (<code>str</code> or <code>function</code>, <em>optional</em>, defaults to <code>"gelu"</code>) — The non-linear activation function (function or string) in the encoder and pooler. If string, <code>"gelu"</code>, <code>"relu"</code>, <code>"selu"</code> and <code>"gelu_new"</code> are supported.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosConfig.hidden_dropout_prob" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosConfig.hidden_dropout_prob"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>hidden_dropout_prob</strong> (<code>float</code>, <em>optional</em>, defaults to 0.1) — The dropout probabilitiy for all fully connected layers in the embeddings, encoder, and pooler.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosConfig.attention_probs_dropout_prob" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" 
href="#transformers.YolosConfig.attention_probs_dropout_prob"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_probs_dropout_prob</strong> (<code>float</code>, <em>optional</em>, defaults to 0.1) — The dropout ratio for the attention probabilities.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosConfig.initializer_range" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosConfig.initializer_range"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>initializer_range</strong> (<code>float</code>, <em>optional</em>, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosConfig.layer_norm_eps" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosConfig.layer_norm_eps"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>layer_norm_eps</strong> 
(<code>float</code>, <em>optional</em>, defaults to 1e-12) — The epsilon used by the layer normalization layers.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosConfig.image_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosConfig.image_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>image_size</strong> (<code>List[int]</code>, <em>optional</em>, defaults to <code>[512, 864]</code>) — The size (resolution) of each image.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosConfig.patch_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosConfig.patch_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>patch_size</strong> (<code>int</code>, <em>optional</em>, defaults to <code>16</code>) — The size (resolution) of each patch.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosConfig.num_channels" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosConfig.num_channels"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 
79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>num_channels</strong> (<code>int</code>, <em>optional</em>, defaults to <code>3</code>) — The number of input channels.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosConfig.qkv_bias" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosConfig.qkv_bias"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>qkv_bias</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether to add a bias to the queries, keys and values.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosConfig.num_detection_tokens" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosConfig.num_detection_tokens"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>num_detection_tokens</strong> (<code>int</code>, <em>optional</em>, defaults to <code>100</code>) — The number of detection tokens.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosConfig.use_mid_position_embeddings" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosConfig.use_mid_position_embeddings"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" 
aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>use_mid_position_embeddings</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether to use the mid-layer position encodings.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosConfig.auxiliary_loss" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosConfig.auxiliary_loss"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>auxiliary_loss</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether auxiliary decoding losses (loss at each decoder layer) are to be used.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosConfig.class_cost" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosConfig.class_cost"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>class_cost</strong> (<code>float</code>, <em>optional</em>, defaults to 1) — Relative weight of the classification error in the Hungarian matching cost.</span></span> </li><li class="text-base !pl-4 my-3 
rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosConfig.bbox_cost" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosConfig.bbox_cost"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>bbox_cost</strong> (<code>float</code>, <em>optional</em>, defaults to 5) — Relative weight of the L1 error of the bounding box coordinates in the Hungarian matching cost.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosConfig.giou_cost" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosConfig.giou_cost"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>giou_cost</strong> (<code>float</code>, <em>optional</em>, defaults to 2) — Relative weight of the generalized IoU loss of the bounding box in the Hungarian matching cost.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosConfig.bbox_loss_coefficient" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosConfig.bbox_loss_coefficient"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 
0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>bbox_loss_coefficient</strong> (<code>float</code>, <em>optional</em>, defaults to 5) — Relative weight of the L1 bounding box loss in the object detection loss.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosConfig.giou_loss_coefficient" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosConfig.giou_loss_coefficient"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>giou_loss_coefficient</strong> (<code>float</code>, <em>optional</em>, defaults to 2) — Relative weight of the generalized IoU loss in the object detection loss.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosConfig.eos_coefficient" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosConfig.eos_coefficient"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>eos_coefficient</strong> (<code>float</code>, <em>optional</em>, defaults to 0.1) — Relative classification weight of the ‘no-object’ class in the object detection loss.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1q0e9yp">This is the configuration class to store the configuration of a <a href="/docs/transformers/v4.34.0/en/model_doc/yolos#transformers.YolosModel">YolosModel</a>. It is used to instantiate a YOLOS model according to the specified arguments, defining the model architecture. 
Instantiating a configuration with the defaults will yield a similar configuration to that of the YOLOS [hustvl/yolos-base](https://huggingface.co/hustvl/yolos-base) architecture.

Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information.

Example:

```python
>>> from transformers import YolosConfig, YolosModel

>>> # Initializing a YOLOS hustvl/yolos-base style configuration
>>> configuration = YolosConfig()

>>> # Initializing a model (with random weights) from the hustvl/yolos-base style configuration
>>> model = YolosModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
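The defaults listed above can also be overridden when the configuration is created. The following sketch is illustrative only; the override values are arbitrary and not recommended settings:

```python
>>> from transformers import YolosConfig, YolosForObjectDetection

>>> # Overriding a few of the arguments documented above (arbitrary, illustrative values)
>>> custom_config = YolosConfig(image_size=[480, 640], patch_size=16, num_detection_tokens=50)
>>> model = YolosForObjectDetection(custom_config)

>>> # The overridden values are reflected in the model's configuration
>>> model.config.num_detection_tokens
50
```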
class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># Accessing the model configuration</span> <span class="hljs-meta">&gt;&gt;&gt; </span>configuration = model.config</pre></div></div></div> <h2 class="relative group"><a id="transformers.YolosImageProcessor" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosImageProcessor"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-12xr7w2">YolosImageProcessor</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.YolosImageProcessor"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">YolosImageProcessor</span></span></h3> <a id="transformers.YolosImageProcessor" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.YolosImageProcessor"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 
[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yolos/image_processing_yolos.py#L673)

`( format: Union[str, AnnotionFormat] = AnnotionFormat.COCO_DETECTION, do_resize: bool = True, size: Dict[str, int] = None, resample: Resampling = Resampling.BILINEAR, do_rescale: bool = True, rescale_factor: Union[int, float] = 0.00392156862745098, do_normalize: bool = True, image_mean: Union[float, List[float]] = None, image_std: Union[float, List[float]] = None, do_pad: bool = True, **kwargs )`
class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosImageProcessor.format" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosImageProcessor.format"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>format</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"coco_detection"</code>) — Data format of the annotations. One of “coco_detection” or “coco_panoptic”.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosImageProcessor.do_resize" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosImageProcessor.do_resize"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>do_resize</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Controls whether to resize the image’s (height, width) dimensions to the specified <code>size</code>. 
Can be overridden by the <code>do_resize</code> parameter in the <code>preprocess</code> method.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosImageProcessor.size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosImageProcessor.size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>size</strong> (<code>Dict[str, int]</code> <em>optional</em>, defaults to <code>{"shortest_edge" -- 800, "longest_edge": 1333}</code>): Size of the image’s (height, width) dimensions after resizing. Can be overridden by the <code>size</code> parameter in the <code>preprocess</code> method.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosImageProcessor.resample" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosImageProcessor.resample"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>resample</strong> (<code>PILImageResampling</code>, <em>optional</em>, defaults to <code>PILImageResampling.BILINEAR</code>) — Resampling filter to use if resizing the image.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosImageProcessor.do_rescale" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosImageProcessor.do_rescale"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 
11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>do_rescale</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Controls whether to rescale the image by the specified scale <code>rescale_factor</code>. Can be overridden by the <code>do_rescale</code> parameter in the <code>preprocess</code> method.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosImageProcessor.rescale_factor" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosImageProcessor.rescale_factor"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>rescale_factor</strong> (<code>int</code> or <code>float</code>, <em>optional</em>, defaults to <code>1/255</code>) — Scale factor to use if rescaling the image. Can be overridden by the <code>rescale_factor</code> parameter in the <code>preprocess</code> method. do_normalize — Controls whether to normalize the image. 
Can be overridden by the <code>do_normalize</code> parameter in the <code>preprocess</code> method.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosImageProcessor.image_mean" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosImageProcessor.image_mean"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>image_mean</strong> (<code>float</code> or <code>List[float]</code>, <em>optional</em>, defaults to <code>IMAGENET_DEFAULT_MEAN</code>) — Mean values to use when normalizing the image. Can be a single value or a list of values, one for each channel. Can be overridden by the <code>image_mean</code> parameter in the <code>preprocess</code> method.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosImageProcessor.image_std" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosImageProcessor.image_std"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>image_std</strong> (<code>float</code> or <code>List[float]</code>, <em>optional</em>, defaults to <code>IMAGENET_DEFAULT_STD</code>) — Standard deviation values to use when normalizing the image. Can be a single value or a list of values, one for each channel. 
- **do_pad** (`bool`, *optional*, defaults to `True`) — Controls whether to pad the image to the largest image in a batch and create a pixel mask. Can be overridden by the `do_pad` parameter in the `preprocess` method.

Constructs a YOLOS image processor.
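As a minimal sketch of how this processor is typically driven (the dummy image, category id and box values below are placeholders, not taken from the original documentation), a raw image and an optional COCO-detection-style annotation can be prepared as follows:

```python
>>> import numpy as np
>>> from PIL import Image
>>> from transformers import YolosImageProcessor

>>> image_processor = YolosImageProcessor(size={"shortest_edge": 800, "longest_edge": 1333})

>>> # A dummy RGB image with pixel values in [0, 255]
>>> image = Image.fromarray(np.zeros((480, 640, 3), dtype=np.uint8))

>>> # A COCO-detection-style annotation (placeholder values)
>>> annotation = {
...     "image_id": 0,
...     "annotations": [
...         {"image_id": 0, "category_id": 1, "bbox": [10.0, 20.0, 100.0, 50.0], "area": 5000.0, "iscrowd": 0}
...     ],
... }

>>> # Pixels are rescaled by `rescale_factor` and normalized with `image_mean` / `image_std`
>>> inputs = image_processor(images=image, annotations=annotation, return_tensors="pt")
>>> pixel_values = inputs["pixel_values"]  # channels-first batch, resized according to `size`
```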
role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yolos/image_processing_yolos.py#L1011" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">images<span class="opacity-60">: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">annotations<span class="opacity-60">: typing.Union[typing.Dict[str, typing.Union[int, str, typing.List[typing.Dict]]], typing.List[typing.Dict[str, typing.Union[int, str, typing.List[typing.Dict]]]], NoneType] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_segmentation_masks<span class="opacity-60">: bool = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">masks_path<span class="opacity-60">: typing.Union[str, pathlib.Path, NoneType] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">do_resize<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">size<span class="opacity-60">: typing.Union[typing.Dict[str, int], NoneType] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">resample<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">do_rescale<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">rescale_factor<span class="opacity-60">: typing.Union[int, float, NoneType] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black 
hover:text-white dark:hover:bg-white dark:hover:text-black">do_normalize<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">image_mean<span class="opacity-60">: typing.Union[float, typing.List[float], NoneType] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">image_std<span class="opacity-60">: typing.Union[float, typing.List[float], NoneType] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">do_pad<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">format<span class="opacity-60">: typing.Union[str, transformers.models.yolos.image_processing_yolos.AnnotionFormat, NoneType] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_tensors<span class="opacity-60">: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">data_format<span class="opacity-60">: typing.Union[str, transformers.image_utils.ChannelDimension] = &lt;ChannelDimension.FIRST: 'channels_first'&gt;</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_data_format<span class="opacity-60">: typing.Union[str, transformers.image_utils.ChannelDimension, NoneType] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 17 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosImageProcessor.preprocess.images" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosImageProcessor.preprocess.images"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 
- **images** (`ImageInput`) — Image or batch of images to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, set `do_rescale=False`.
- **annotations** (`AnnotationType` or `List[AnnotationType]`, *optional*) — List of annotations associated with the image or batch of images. If the annotation is for object detection, the annotations should be a dictionary with the following keys:
  - "image_id" (`int`): The image id.
  - "annotations" (`List[Dict]`): List of annotations for an image. Each annotation should be a dictionary. An image can have no annotations, in which case the list should be empty.

  If the annotation is for segmentation, the annotations should be a dictionary with the following keys:
  - "image_id" (`int`): The image id.
  - "segments_info" (`List[Dict]`): List of segments for an image. Each segment should be a dictionary. An image can have no segments, in which case the list should be empty.
  - "file_name" (`str`): The file name of the image.
- **return_segmentation_masks** (`bool`, *optional*, defaults to `self.return_segmentation_masks`) — Whether to return segmentation masks.
- **masks_path** (`str` or `pathlib.Path`, *optional*) — Path to the directory containing the segmentation masks.
- **do_resize** (`bool`, *optional*, defaults to `self.do_resize`) — Whether to resize the image.
- **size** (`Dict[str, int]`, *optional*, defaults to `self.size`) — Size of the image after resizing.
- **resample** (`PILImageResampling`, *optional*, defaults to `self.resample`) — Resampling filter to use when resizing the image.
- **do_rescale** (`bool`, *optional*, defaults to `self.do_rescale`) — Whether to rescale the image.
- **rescale_factor** (`float`, *optional*, defaults to `self.rescale_factor`) — Rescale factor to use when rescaling the image.
- **do_normalize** (`bool`, *optional*, defaults to `self.do_normalize`) — Whether to normalize the image.
0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>do_normalize</strong> (<code>bool</code>, <em>optional</em>, defaults to self.do_normalize) — Whether to normalize the image.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosImageProcessor.preprocess.image_mean" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosImageProcessor.preprocess.image_mean"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>image_mean</strong> (<code>float</code> or <code>List[float]</code>, <em>optional</em>, defaults to self.image_mean) — Mean to use when normalizing the image.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosImageProcessor.preprocess.image_std" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosImageProcessor.preprocess.image_std"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>image_std</strong> (<code>float</code> or <code>List[float]</code>, <em>optional</em>, defaults to self.image_std) — Standard deviation to use when normalizing the image.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosImageProcessor.preprocess.do_pad" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosImageProcessor.preprocess.do_pad"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path 
d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>do_pad</strong> (<code>bool</code>, <em>optional</em>, defaults to self.do_pad) — Whether to pad the image.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosImageProcessor.preprocess.format" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosImageProcessor.preprocess.format"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>format</strong> (<code>str</code> or <code>AnnotionFormat</code>, <em>optional</em>, defaults to self.format) — Format of the annotations.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosImageProcessor.preprocess.return_tensors" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosImageProcessor.preprocess.return_tensors"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_tensors</strong> (<code>str</code> or <code>TensorType</code>, <em>optional</em>, defaults to self.return_tensors) — Type of tensors to return. 
If <code>None</code>, will return the list of images.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosImageProcessor.preprocess.data_format" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosImageProcessor.preprocess.data_format"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>data_format</strong> (<code>str</code> or <code>ChannelDimension</code>, <em>optional</em>, defaults to self.data_format) — The channel dimension format of the image. If not provided, it will be the same as the input image.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosImageProcessor.preprocess.input_data_format" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosImageProcessor.preprocess.input_data_format"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_data_format</strong> (<code>ChannelDimension</code> or <code>str</code>, <em>optional</em>) — The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. 
Can be one of:<ul> <li><code>"channels_first"</code> or <code>ChannelDimension.FIRST</code>: image in (num_channels, height, width) format.</li> <li><code>"channels_last"</code> or <code>ChannelDimension.LAST</code>: image in (height, width, num_channels) format.</li> <li><code>"none"</code> or <code>ChannelDimension.NONE</code>: image in (height, width) format.</li> </ul></span></span> </li></ul> </div></div> <p data-svelte-h="svelte-jgz2ra">Preprocess an image or a batch of images so that it can be used by the model.</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.YolosImageProcessor.pad"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>pad</span></h4> <a id="transformers.YolosImageProcessor.pad" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.YolosImageProcessor.pad"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yolos/image_processing_yolos.py#L956" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span 
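As a rough usage sketch (not part of the original reference), the snippet below preprocesses a single image together with a COCO-style detection annotation. The `hustvl/yolos-small` checkpoint, the random image, and the toy bounding box are illustrative placeholders.

```python
import numpy as np
from transformers import YolosImageProcessor

# Placeholder checkpoint; any YOLOS checkpoint with an image processor config works.
image_processor = YolosImageProcessor.from_pretrained("hustvl/yolos-small")

# Dummy RGB image with pixel values in [0, 255].
image = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)

# COCO detection-style annotation: one box given as [top_left_x, top_left_y, width, height].
annotation = {
    "image_id": 0,
    "annotations": [
        {"bbox": [100.0, 120.0, 200.0, 150.0], "category_id": 1, "area": 30000.0, "iscrowd": 0},
    ],
}

inputs = image_processor(images=image, annotations=annotation, return_tensors="pt")
print(inputs["pixel_values"].shape)  # batch of resized (and, if needed, padded) images
print(inputs["labels"][0].keys())    # prepared targets, e.g. normalized boxes and class labels
```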
data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">images<span class="opacity-60">: typing.List[numpy.ndarray]</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">constant_values<span class="opacity-60">: typing.Union[float, typing.Iterable[float]] = 0</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_pixel_mask<span class="opacity-60">: bool = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_tensors<span class="opacity-60">: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">data_format<span class="opacity-60">: typing.Optional[transformers.image_utils.ChannelDimension] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_data_format<span class="opacity-60">: typing.Union[str, transformers.image_utils.ChannelDimension, NoneType] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 6 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosImageProcessor.pad.image" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosImageProcessor.pad.image"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>image</strong> (<code>np.ndarray</code>) — Image to pad.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a 
id="transformers.YolosImageProcessor.pad.constant_values" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosImageProcessor.pad.constant_values"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>constant_values</strong> (<code>float</code> or <code>Iterable[float]</code>, <em>optional</em>) — The value to use for the padding if <code>mode</code> is <code>"constant"</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosImageProcessor.pad.return_pixel_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosImageProcessor.pad.return_pixel_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_pixel_mask</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether to return a pixel mask.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosImageProcessor.pad.return_tensors" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosImageProcessor.pad.return_tensors"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 
0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_tensors</strong> (<code>str</code> or <code>TensorType</code>, <em>optional</em>) — The type of tensors to return. Can be one of:<ul> <li>Unset: Return a list of <code>np.ndarray</code>.</li> <li><code>TensorType.TENSORFLOW</code> or <code>'tf'</code>: Return a batch of type <code>tf.Tensor</code>.</li> <li><code>TensorType.PYTORCH</code> or <code>'pt'</code>: Return a batch of type <code>torch.Tensor</code>.</li> <li><code>TensorType.NUMPY</code> or <code>'np'</code>: Return a batch of type <code>np.ndarray</code>.</li> <li><code>TensorType.JAX</code> or <code>'jax'</code>: Return a batch of type <code>jax.numpy.ndarray</code>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosImageProcessor.pad.data_format" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosImageProcessor.pad.data_format"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>data_format</strong> (<code>str</code> or <code>ChannelDimension</code>, <em>optional</em>) — The channel dimension format of the image. 
If not provided, it will be the same as the input image.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosImageProcessor.pad.input_data_format" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosImageProcessor.pad.input_data_format"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_data_format</strong> (<code>ChannelDimension</code> or <code>str</code>, <em>optional</em>) — The channel dimension format of the input image. If not provided, it will be inferred.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1f2f3d6">Pads a batch of images to the bottom and right of the image with zeros to the size of largest height and width in the batch and optionally returns their corresponding pixel mask.</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.YolosImageProcessor.post_process_object_detection"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>post_process_object_detection</span></h4> <a id="transformers.YolosImageProcessor.post_process_object_detection" class="header-link invisible with-hover:group-hover:visible pr-2" 
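For illustration, here is a minimal sketch of padding two differently sized images to a common shape and requesting the pixel mask; the array shapes and the checkpoint name are placeholders, not values from the original reference.

```python
import numpy as np
from transformers import YolosImageProcessor

image_processor = YolosImageProcessor.from_pretrained("hustvl/yolos-small")  # placeholder checkpoint

images = [
    np.zeros((3, 480, 640), dtype=np.float32),  # (num_channels, height, width)
    np.zeros((3, 512, 512), dtype=np.float32),
]

# Both images are padded bottom/right with zeros up to the largest height and width in the batch.
padded = image_processor.pad(images, return_pixel_mask=True, return_tensors="np")
print(padded["pixel_values"].shape)  # spatial size (512, 640) for every image
print(padded["pixel_mask"].shape)    # 1 on real pixels, 0 on the padded region
```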
href="#transformers.YolosImageProcessor.post_process_object_detection"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yolos/image_processing_yolos.py#L1294" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">outputs<span class="opacity-60"></span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">threshold<span class="opacity-60">: float = 0.5</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">target_sizes<span class="opacity-60">: typing.Union[transformers.utils.generic.TensorType, typing.List[typing.Tuple]] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><code>List[Dict]</code></span></span></p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosImageProcessor.post_process_object_detection.outputs" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosImageProcessor.post_process_object_detection.outputs"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 
56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>outputs</strong> (<code>YolosObjectDetectionOutput</code>) — Raw outputs of the model.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosImageProcessor.post_process_object_detection.threshold" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosImageProcessor.post_process_object_detection.threshold"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>threshold</strong> (<code>float</code>, <em>optional</em>) — Score threshold to keep object detection predictions.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosImageProcessor.post_process_object_detection.target_sizes" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosImageProcessor.post_process_object_detection.target_sizes"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>target_sizes</strong> (<code>torch.Tensor</code> or <code>List[Tuple[int, int]]</code>, <em>optional</em>) — Tensor of shape <code>(batch_size, 2)</code> or list of tuples (<code>Tuple[int, int]</code>) containing the target size <code>(height, width)</code> of each image in the batch. 
If unset, predictions will not be resized.</span></span> </li></ul> <div id="transformers.YolosImageProcessor.post_process_object_detection.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><code>List[Dict]</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A list of dictionaries, each dictionary containing the scores, labels and boxes for an image in the batch as predicted by the model.</p> </p> </div></div> <p data-svelte-h="svelte-z2bgt5">Converts the raw output of <a href="/docs/transformers/v4.34.0/en/model_doc/yolos#transformers.YolosForObjectDetection">YolosForObjectDetection</a> into final bounding boxes in (top_left_x, top_left_y, bottom_right_x, bottom_right_y) format. Only supports PyTorch.</p></div></div> <h2 class="relative group"><a id="transformers.YolosFeatureExtractor" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosFeatureExtractor"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1i2k8i3">YolosFeatureExtractor</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.YolosFeatureExtractor"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">YolosFeatureExtractor</span></span></h3> <a id="transformers.YolosFeatureExtractor" class="header-link invisible with-hover:group-hover:visible pr-2" 
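The following sketch shows one plausible end-to-end flow around `post_process_object_detection`, assuming the `hustvl/yolos-small` checkpoint and a sample COCO image URL; it is an illustration, not a prescribed recipe.

```python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, YolosForObjectDetection

# Sample COCO image and placeholder checkpoint, chosen only for illustration.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("hustvl/yolos-small")
model = YolosForObjectDetection.from_pretrained("hustvl/yolos-small")

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Rescale the predicted boxes back to the original image size (height, width).
target_sizes = torch.tensor([image.size[::-1]])
results = image_processor.post_process_object_detection(
    outputs, threshold=0.9, target_sizes=target_sizes
)[0]

for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```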
href="#transformers.YolosFeatureExtractor"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yolos/feature_extraction_yolos.py#L26" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">*args<span class="opacity-60"></span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> </div></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.YolosFeatureExtractor.__call__"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>__call__</span></h4> <a id="transformers.YolosFeatureExtractor.__call__" class="header-link invisible with-hover:group-hover:visible pr-2" 
href="#transformers.YolosFeatureExtractor.__call__"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/image_processing_utils.py#L544" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">images<span class="opacity-60"></span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> </div></div> <p data-svelte-h="svelte-khengj">Preprocess an image or a batch of images.</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.YolosFeatureExtractor.pad"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>pad</span></h4> <a id="transformers.YolosFeatureExtractor.pad" 
class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.YolosFeatureExtractor.pad"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yolos/image_processing_yolos.py#L956" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">images<span class="opacity-60">: typing.List[numpy.ndarray]</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">constant_values<span class="opacity-60">: typing.Union[float, typing.Iterable[float]] = 0</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_pixel_mask<span class="opacity-60">: bool = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_tensors<span class="opacity-60">: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">data_format<span class="opacity-60">: typing.Optional[transformers.image_utils.ChannelDimension] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_data_format<span class="opacity-60">: typing.Union[str, transformers.image_utils.ChannelDimension, NoneType] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 6 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base 
!pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosFeatureExtractor.pad.image" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosFeatureExtractor.pad.image"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>image</strong> (<code>np.ndarray</code>) — Image to pad.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosFeatureExtractor.pad.constant_values" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosFeatureExtractor.pad.constant_values"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>constant_values</strong> (<code>float</code> or <code>Iterable[float]</code>, <em>optional</em>) — The value to use for the padding if <code>mode</code> is <code>"constant"</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosFeatureExtractor.pad.return_pixel_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosFeatureExtractor.pad.return_pixel_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 
0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_pixel_mask</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether to return a pixel mask.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosFeatureExtractor.pad.return_tensors" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosFeatureExtractor.pad.return_tensors"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_tensors</strong> (<code>str</code> or <code>TensorType</code>, <em>optional</em>) — The type of tensors to return. Can be one of:<ul> <li>Unset: Return a list of <code>np.ndarray</code>.</li> <li><code>TensorType.TENSORFLOW</code> or <code>'tf'</code>: Return a batch of type <code>tf.Tensor</code>.</li> <li><code>TensorType.PYTORCH</code> or <code>'pt'</code>: Return a batch of type <code>torch.Tensor</code>.</li> <li><code>TensorType.NUMPY</code> or <code>'np'</code>: Return a batch of type <code>np.ndarray</code>.</li> <li><code>TensorType.JAX</code> or <code>'jax'</code>: Return a batch of type <code>jax.numpy.ndarray</code>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosFeatureExtractor.pad.data_format" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosFeatureExtractor.pad.data_format"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>data_format</strong> (<code>str</code> or <code>ChannelDimension</code>, <em>optional</em>) — The channel dimension format of the image. 
If not provided, it will be the same as the input image.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosFeatureExtractor.pad.input_data_format" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosFeatureExtractor.pad.input_data_format"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_data_format</strong> (<code>ChannelDimension</code> or <code>str</code>, <em>optional</em>) — The channel dimension format of the input image. If not provided, it will be inferred.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1f2f3d6">Pads a batch of images to the bottom and right of the image with zeros to the size of largest height and width in the batch and optionally returns their corresponding pixel mask.</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.YolosFeatureExtractor.post_process_object_detection"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>post_process_object_detection</span></h4> <a id="transformers.YolosFeatureExtractor.post_process_object_detection" class="header-link invisible with-hover:group-hover:visible pr-2" 
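To make the padding behaviour concrete, here is a minimal sketch (not taken from the official docs) that pads two differently sized dummy images. It assumes the method returns a `BatchFeature` with a `pixel_values` entry and, when `return_pixel_mask=True`, a `pixel_mask` entry in which 1 marks real pixels and 0 marks padding.

```python
import numpy as np
from transformers import YolosImageProcessor

# Two dummy channels-first images of different sizes: (3, height, width).
images = [
    np.zeros((3, 480, 640), dtype=np.uint8),
    np.zeros((3, 600, 400), dtype=np.uint8),
]

processor = YolosImageProcessor()

# Pad both images to the largest height/width in the batch (600 x 640 here)
# and request the corresponding pixel mask as PyTorch tensors.
batch = processor.pad(images, return_pixel_mask=True, return_tensors="pt")

print(batch["pixel_values"].shape)  # expected: torch.Size([2, 3, 600, 640])
print(batch["pixel_mask"].shape)    # expected: torch.Size([2, 600, 640])
```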
href="#transformers.YolosFeatureExtractor.post_process_object_detection"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yolos/image_processing_yolos.py#L1294" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">outputs<span class="opacity-60"></span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">threshold<span class="opacity-60">: float = 0.5</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">target_sizes<span class="opacity-60">: typing.Union[transformers.utils.generic.TensorType, typing.List[typing.Tuple]] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><code>List[Dict]</code></span></span></p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosFeatureExtractor.post_process_object_detection.outputs" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosFeatureExtractor.post_process_object_detection.outputs"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 
1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>outputs</strong> (<code>YolosObjectDetectionOutput</code>) — Raw outputs of the model.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosFeatureExtractor.post_process_object_detection.threshold" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosFeatureExtractor.post_process_object_detection.threshold"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>threshold</strong> (<code>float</code>, <em>optional</em>) — Score threshold to keep object detection predictions.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosFeatureExtractor.post_process_object_detection.target_sizes" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosFeatureExtractor.post_process_object_detection.target_sizes"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>target_sizes</strong> (<code>torch.Tensor</code> or <code>List[Tuple[int, int]]</code>, <em>optional</em>) — Tensor of shape <code>(batch_size, 2)</code> or list of tuples (<code>Tuple[int, int]</code>) containing the target size <code>(height, width)</code> of each image in the batch. 
If unset, predictions will not be resized.</span></span> </li></ul> <div id="transformers.YolosFeatureExtractor.post_process_object_detection.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><code>List[Dict]</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A list of dictionaries, each dictionary containing the scores, labels and boxes for an image in the batch as predicted by the model.</p> </p> </div></div> <p data-svelte-h="svelte-z2bgt5">Converts the raw output of <a href="/docs/transformers/v4.34.0/en/model_doc/yolos#transformers.YolosForObjectDetection">YolosForObjectDetection</a> into final bounding boxes in (top_left_x, top_left_y, bottom_right_x, bottom_right_y) format. Only supports PyTorch.</p></div></div> <h2 class="relative group"><a id="transformers.YolosModel" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosModel"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-17ytdfk">YolosModel</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.YolosModel"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">YolosModel</span></span></h3> <a id="transformers.YolosModel" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.YolosModel"><svg class="" 
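Since this method is typically the last step of an inference pipeline, the sketch below (assembled for illustration, using the `hustvl/yolos-small` checkpoint referenced elsewhere on this page and a sample COCO image URL) runs `YolosForObjectDetection` on an image and converts the raw outputs into labelled boxes in absolute pixel coordinates.

```python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, YolosForObjectDetection

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("hustvl/yolos-small")
model = YolosForObjectDetection.from_pretrained("hustvl/yolos-small")

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# target_sizes holds the original (height, width) of each image so that the
# normalized predictions are rescaled back to absolute pixel coordinates.
target_sizes = torch.tensor([image.size[::-1]])
results = image_processor.post_process_object_detection(
    outputs, threshold=0.9, target_sizes=target_sizes
)

# One dict per image, with "scores", "labels" and "boxes" keys.
for score, label, box in zip(results[0]["scores"], results[0]["labels"], results[0]["boxes"]):
    box = [round(coord, 2) for coord in box.tolist()]
    print(f"{model.config.id2label[label.item()]}: {round(score.item(), 3)} at {box}")
```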
xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yolos/modeling_yolos.py#L597" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60">: YolosConfig</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">add_pooling_layer<span class="opacity-60">: bool = True</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosModel.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosModel.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/yolos#transformers.YolosConfig">YolosConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. 
Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-19k5mgv">The bare YOLOS Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.YolosModel.forward"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>forward</span></h4> <a id="transformers.YolosModel.forward" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.YolosModel.forward"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yolos/modeling_yolos.py#L625" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono 
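As a quick illustration of the `config` parameter above, the snippet below (a sketch, not from the official docs) builds a randomly initialized model from a default configuration, in contrast to `from_pretrained()`, which also loads weights.

```python
from transformers import YolosConfig, YolosModel

# Initializing from a configuration creates the architecture with random weights;
# it does not download or load any pretrained checkpoint.
config = YolosConfig()
model = YolosModel(config)

print(model.config.hidden_size)  # dimensionality defined by the (default) configuration
```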
#### forward

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yolos/modeling_yolos.py#L625)

( pixel_values: Optional[torch.Tensor] = None, head_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None ) → [transformers.modeling_outputs.BaseModelOutputWithPooling](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutputWithPooling) or `tuple(torch.FloatTensor)`

Parameters:

- **pixel_values** (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`) — Pixel values. Pixel values can be obtained using [AutoImageProcessor](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoImageProcessor). See `YolosImageProcessor.__call__()` for details.
- **head_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`:
  - 1 indicates the head is **not masked**,
  - 0 indicates the head is **masked**.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.

Returns: [transformers.modeling_outputs.BaseModelOutputWithPooling](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutputWithPooling) or `tuple(torch.FloatTensor)`

A [transformers.modeling_outputs.BaseModelOutputWithPooling](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutputWithPooling) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([YolosConfig](/docs/transformers/v4.34.0/en/model_doc/yolos#transformers.YolosConfig)) and inputs.

- **last_hidden_state** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`) — Sequence of hidden-states at the output of the last layer of the model.
- **pooler_output** (`torch.FloatTensor` of shape `(batch_size, hidden_size)`) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for the BERT family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The [YolosModel](/docs/transformers/v4.34.0/en/model_doc/yolos#transformers.YolosModel) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoImageProcessor, YolosModel <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> datasets <span class="hljs-keyword">import</span> load_dataset <span class="hljs-meta">&gt;&gt;&gt; </span>dataset = load_dataset(<span class="hljs-string">"huggingface/cats-image"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>image = dataset[<span class="hljs-string">"test"</span>][<span class="hljs-string">"image"</span>][<span class="hljs-number">0</span>] <span class="hljs-meta">&gt;&gt;&gt; </span>image_processor = AutoImageProcessor.from_pretrained(<span class="hljs-string">"hustvl/yolos-small"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = YolosModel.from_pretrained(<span class="hljs-string">"hustvl/yolos-small"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = image_processor(image, return_tensors=<span class="hljs-string">"pt"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">with</span> torch.no_grad(): <span class="hljs-meta">... 
</span> outputs = model(**inputs) <span class="hljs-meta">&gt;&gt;&gt; </span>last_hidden_states = outputs.last_hidden_state <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-built_in">list</span>(last_hidden_states.shape) [<span class="hljs-number">1</span>, <span class="hljs-number">3401</span>, <span class="hljs-number">384</span>]</pre></div></div></div></div> <h2 class="relative group"><a id="transformers.YolosForObjectDetection" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosForObjectDetection"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-nryn1e">YolosForObjectDetection</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.YolosForObjectDetection"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">YolosForObjectDetection</span></span></h3> <a id="transformers.YolosForObjectDetection" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.YolosForObjectDetection"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 
79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yolos/modeling_yolos.py#L705" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60">: YolosConfig</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosForObjectDetection.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosForObjectDetection.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/yolos#transformers.YolosConfig">YolosConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1yg9bo0">YOLOS Model (consisting of a ViT encoder) with object detection heads on top, for tasks such as COCO detection.</p> <p data-svelte-h="svelte-1gjh92c">This model is a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> subclass. 
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.YolosForObjectDetection.forward"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>forward</span></h4> <a id="transformers.YolosForObjectDetection.forward" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.YolosForObjectDetection.forward"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/yolos/modeling_yolos.py#L732" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">pixel_values<span class="opacity-60">: FloatTensor</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">labels<span 
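As a rough sketch of how the detection head can also return a training loss, the snippet below passes a made-up annotation through the `labels` argument; the single box is assumed to be in the normalized `(center_x, center_y, width, height)` format described under `pred_boxes` in the `forward` documentation that follows, and the class index is purely illustrative.

```python
import torch
from transformers import YolosImageProcessor, YolosForObjectDetection

image_processor = YolosImageProcessor.from_pretrained("hustvl/yolos-small")
model = YolosForObjectDetection.from_pretrained("hustvl/yolos-small")

# A dummy RGB image; in practice this would be a real PIL image or numpy array.
image = torch.randint(0, 256, (3, 480, 640), dtype=torch.uint8).numpy()
inputs = image_processor(images=image, return_tensors="pt")

# One dict per image with a made-up annotation: one box of class 17,
# given here as normalized (center_x, center_y, width, height).
labels = [
    {
        "class_labels": torch.tensor([17], dtype=torch.long),
        "boxes": torch.tensor([[0.5, 0.5, 0.4, 0.3]], dtype=torch.float),
    }
]

outputs = model(**inputs, labels=labels)
print(outputs.loss)              # scalar loss combining classification and box terms
print(outputs.logits.shape)      # (batch_size, num_queries, num_classes + 1)
print(outputs.pred_boxes.shape)  # (batch_size, num_queries, 4)
```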
class="opacity-60">: typing.Optional[typing.List[typing.Dict]] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><code>transformers.models.yolos.modeling_yolos.YolosObjectDetectionOutput</code> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 6 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosForObjectDetection.forward.pixel_values" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosForObjectDetection.forward.pixel_values"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>pixel_values</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, num_channels, height, width)</code>) — Pixel values. Pixel values can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoImageProcessor">AutoImageProcessor</a>. 
See <a href="/docs/transformers/v4.34.0/en/model_doc/deit#transformers.DeiTFeatureExtractor.__call__">YolosImageProcessor.<strong>call</strong>()</a> for details.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosForObjectDetection.forward.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosForObjectDetection.forward.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>head_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(num_heads,)</code> or <code>(num_layers, num_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the self-attention modules. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosForObjectDetection.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosForObjectDetection.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. 
See <code>attentions</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosForObjectDetection.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosForObjectDetection.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. See <code>hidden_states</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosForObjectDetection.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosForObjectDetection.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.YolosForObjectDetection.forward.labels" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosForObjectDetection.forward.labels"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 
88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>labels</strong> (<code>List[Dict]</code> of len <code>(batch_size,)</code>, <em>optional</em>) — Labels for computing the bipartite matching loss. List of dicts, each dictionary containing at least the following 2 keys: <code>'class_labels'</code> and <code>'boxes'</code> (the class labels and bounding boxes of an image in the batch respectively). The class labels themselves should be a <code>torch.LongTensor</code> of len <code>(number of bounding boxes in the image,)</code> and the boxes a <code>torch.FloatTensor</code> of shape <code>(number of bounding boxes in the image, 4)</code>.</span></span> </li></ul> <div id="transformers.YolosForObjectDetection.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><code>transformers.models.yolos.modeling_yolos.YolosObjectDetectionOutput</code> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <code>transformers.models.yolos.modeling_yolos.YolosObjectDetectionOutput</code> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/yolos#transformers.YolosConfig">YolosConfig</a>) and inputs.</p> <ul> <li><strong>loss</strong> (<code>torch.FloatTensor</code> of shape <code>(1,)</code>, <em>optional</em>, returned when <code>labels</code> are provided)) — Total loss as a linear combination of a negative log-likehood (cross-entropy) for class prediction and a bounding box loss. The latter is defined as a linear combination of the L1 loss and the generalized scale-invariant IoU loss.</li> <li><strong>loss_dict</strong> (<code>Dict</code>, <em>optional</em>) — A dictionary containing the individual losses. Useful for logging.</li> <li><strong>logits</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, num_queries, num_classes + 1)</code>) — Classification logits (including no-object) for all queries.</li> <li><strong>pred_boxes</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, num_queries, 4)</code>) — Normalized boxes coordinates for all queries, represented as (center_x, center_y, width, height). These values are normalized in [0, 1], relative to the size of each individual image in the batch (disregarding possible padding). You can use <code>post_process()</code> to retrieve the unnormalized bounding boxes.</li> <li><strong>auxiliary_outputs</strong> (<code>list[Dict]</code>, <em>optional</em>) — Optional, only returned when auxilary losses are activated (i.e. <code>config.auxiliary_loss</code> is set to <code>True</code>) and labels are provided. 
It is a list of dictionaries containing the two above keys (<code>logits</code> and <code>pred_boxes</code>) for each decoder layer.</li> <li><strong>last_hidden_state</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>, <em>optional</em>) — Sequence of hidden-states at the output of the last layer of the decoder of the model.</li> <li><strong>hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.</li> <li><strong>attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>. Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</li> </ul> </p> </div></div> <p data-svelte-h="svelte-17ohyvb">The <a href="/docs/transformers/v4.34.0/en/model_doc/yolos#transformers.YolosForObjectDetection">YolosForObjectDetection</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.YolosForObjectDetection.forward.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.YolosForObjectDetection.forward.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-kvfsh7">Examples:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 
ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoImageProcessor, AutoModelForObjectDetection <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> PIL <span class="hljs-keyword">import</span> Image <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> requests <span class="hljs-meta">&gt;&gt;&gt; </span>url = <span class="hljs-string">"http://images.cocodataset.org/val2017/000000039769.jpg"</span> <span class="hljs-meta">&gt;&gt;&gt; </span>image = Image.<span class="hljs-built_in">open</span>(requests.get(url, stream=<span class="hljs-literal">True</span>).raw) <span class="hljs-meta">&gt;&gt;&gt; </span>image_processor = AutoImageProcessor.from_pretrained(<span class="hljs-string">"hustvl/yolos-tiny"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = AutoModelForObjectDetection.from_pretrained(<span class="hljs-string">"hustvl/yolos-tiny"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = image_processor(images=image, return_tensors=<span class="hljs-string">"pt"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model(**inputs) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># convert outputs (bounding boxes and class logits) to COCO API</span> <span class="hljs-meta">&gt;&gt;&gt; </span>target_sizes = torch.tensor([image.size[::-<span class="hljs-number">1</span>]]) <span class="hljs-meta">&gt;&gt;&gt; </span>results = image_processor.post_process_object_detection(outputs, threshold=<span class="hljs-number">0.9</span>, target_sizes=target_sizes)[ <span class="hljs-meta">... </span> <span class="hljs-number">0</span> <span class="hljs-meta">... </span>] <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">for</span> score, label, box <span class="hljs-keyword">in</span> <span class="hljs-built_in">zip</span>(results[<span class="hljs-string">"scores"</span>], results[<span class="hljs-string">"labels"</span>], results[<span class="hljs-string">"boxes"</span>]): <span class="hljs-meta">... </span> box = [<span class="hljs-built_in">round</span>(i, <span class="hljs-number">2</span>) <span class="hljs-keyword">for</span> i <span class="hljs-keyword">in</span> box.tolist()] <span class="hljs-meta">... </span> <span class="hljs-built_in">print</span>( <span class="hljs-meta">... 
</span> <span class="hljs-string">f"Detected <span class="hljs-subst">{model.config.id2label[label.item()]}</span> with confidence "</span> <span class="hljs-meta">... </span> <span class="hljs-string">f"<span class="hljs-subst">{<span class="hljs-built_in">round</span>(score.item(), <span class="hljs-number">3</span>)}</span> at location <span class="hljs-subst">{box}</span>"</span> <span class="hljs-meta">... </span> ) Detected remote <span class="hljs-keyword">with</span> confidence <span class="hljs-number">0.994</span> at location [<span class="hljs-number">46.96</span>, <span class="hljs-number">72.61</span>, <span class="hljs-number">181.02</span>, <span class="hljs-number">119.73</span>] Detected remote <span class="hljs-keyword">with</span> confidence <span class="hljs-number">0.975</span> at location [<span class="hljs-number">340.66</span>, <span class="hljs-number">79.19</span>, <span class="hljs-number">372.59</span>, <span class="hljs-number">192.65</span>] Detected cat <span class="hljs-keyword">with</span> confidence <span class="hljs-number">0.984</span> at location [<span class="hljs-number">12.27</span>, <span class="hljs-number">54.25</span>, <span class="hljs-number">319.42</span>, <span class="hljs-number">470.99</span>] Detected remote <span class="hljs-keyword">with</span> confidence <span class="hljs-number">0.922</span> at location [<span class="hljs-number">41.66</span>, <span class="hljs-number">71.96</span>, <span class="hljs-number">178.7</span>, <span class="hljs-number">120.33</span>] Detected cat <span class="hljs-keyword">with</span> confidence <span class="hljs-number">0.914</span> at location [<span class="hljs-number">342.34</span>, <span class="hljs-number">21.48</span>, <span class="hljs-number">638.64</span>, <span class="hljs-number">372.46</span>]</pre></div></div></div></div> <p></p> <div id="svelte-announcer" aria-live="assertive" aria-atomic="true" style="position: absolute; left: 0px; top: 0px; clip: rect(0px, 0px, 0px, 0px); clip-path: inset(50%); overflow: hidden; white-space: nowrap; width: 1px; height: 1px;"></div></div> <div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/vivit" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>ViViT</a> <a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">Audio Spectrogram Transformer<span class="ml-2 translate-y-px">→</span></a></div></div></div> <div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" 
data-props="{&quot;chapter&quot;:{&quot;title&quot;:&quot;YOLOS&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;yolos&quot;,&quot;url&quot;:&quot;#yolos&quot;,&quot;sections&quot;:[{&quot;title&quot;:&quot;Overview&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;overview&quot;,&quot;url&quot;:&quot;#overview&quot;},{&quot;title&quot;:&quot;Resources&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;resources&quot;,&quot;url&quot;:&quot;#resources&quot;},{&quot;title&quot;:&quot;YolosConfig&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.YolosConfig&quot;,&quot;url&quot;:&quot;#transformers.YolosConfig&quot;},{&quot;title&quot;:&quot;YolosImageProcessor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.YolosImageProcessor&quot;,&quot;url&quot;:&quot;#transformers.YolosImageProcessor&quot;},{&quot;title&quot;:&quot;YolosFeatureExtractor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.YolosFeatureExtractor&quot;,&quot;url&quot;:&quot;#transformers.YolosFeatureExtractor&quot;},{&quot;title&quot;:&quot;YolosModel&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.YolosModel&quot;,&quot;url&quot;:&quot;#transformers.YolosModel&quot;},{&quot;title&quot;:&quot;YolosForObjectDetection&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.YolosForObjectDetection&quot;,&quot;url&quot;:&quot;#transformers.YolosForObjectDetection&quot;}]}}" data-target="SubSideMenu"><nav class="hidden h-screen w-[270px] flex-none flex-col space-y-3 overflow-y-auto break-words border-l pt-24 pl-6 pr-10 pb-16 text-sm lg:flex 2xl:w-[305px]"><a href="#yolos" class=" text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-yolos">YOLOS</a> <a href="#overview" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-overview"><wbr>Overview</a> <a href="#resources" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-resources"><wbr>Resources</a> <a href="#transformers.YolosConfig" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.YolosConfig"><wbr>Yolos<wbr>Config</a> <a href="#transformers.YolosImageProcessor" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.YolosImageProcessor"><wbr>Yolos<wbr>Image<wbr>Processor</a> <a href="#transformers.YolosFeatureExtractor" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.YolosFeatureExtractor"><wbr>Yolos<wbr>Feature<wbr>Extractor</a> <a href="#transformers.YolosModel" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.YolosModel"><wbr>Yolos<wbr>Model</a> <a href="#transformers.YolosForObjectDetection" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.YolosForObjectDetection"><wbr>Yolos<wbr>For<wbr>Object<wbr>Detection</a> </nav></div></div></div> <div id="doc-footer"></div></main> </div> <script> import("/front/build/kube-b0520c1/index.js"); window.moonSha = "kube-b0520c1/"; window.hubConfig = 
JSON.parse(`{"features":{"signupDisabled":false},"sshGitUrl":"git@hf.co","moonHttpUrl":"https://huggingface.co","captchaApiKey":"bd5f2066-93dc-4bdd-a64b-a24646ca3859","stripePublicKey":"pk_live_x2tdjFXBCvXo2FFmMybezpeM00J6gPCAAc","environment":"production","userAgent":"HuggingFace (production)"}`); </script> <!-- Stripe --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://js.stripe.com/v3/"; script.async = true; document.head.appendChild(script); } </script> <!-- Google analytics v4 --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL"; script.async = true; document.head.appendChild(script); window.dataLayer = window.dataLayer || []; function gtag() { if (window.dataLayer !== undefined) { window.dataLayer.push(arguments); } } gtag("js", new Date()); gtag("config", "G-8Q63TH4CSL", { page_path: "/docs/transformers/v4.34.0/en/model_doc/yolos" }); /// ^ See https://developers.google.com/analytics/devguides/collection/gtagjs/pages gtag("consent", "default", { ad_storage: "denied", analytics_storage: "denied" }); /// ^ See https://developers.google.com/tag-platform/gtagjs/reference#consent /// TODO: ask the user for their consent and update this with gtag('consent', 'update') } </script> <!-- Google Analytics v3 (deprecated) --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { (function (i, s, o, g, r, a, m) { i["GoogleAnalyticsObject"] = r; (i[r] = i[r] || function () { (i[r].q = i[r].q || []).push(arguments); }), (i[r].l = 1 * new Date()); (a = s.createElement(o)), (m = s.getElementsByTagName(o)[0]); a.async = 1; a.src = g; m.parentNode.insertBefore(a, m); })(window, document, "script", "https://www.google-analytics.com/analytics.js", "ganalytics"); ganalytics("create", "UA-83738774-2", "auto"); ganalytics("send", "pageview", "/docs/transformers/v4.34.0/en/model_doc/yolos"); } </script> <iframe name="__privateStripeMetricsController5960" frameborder="0" allowtransparency="true" scrolling="no" role="presentation" allow="payment *" src="https://js.stripe.com/v3/m-outer-27c67c0d52761104439bb051c7856ab1.html#url=https%3A%2F%2Fhuggingface.co%2Fdocs%2Ftransformers%2Fv4.34.0%2Fen%2Fmodel_doc%2Fyolos&amp;title=YOLOS&amp;referrer=&amp;muid=577a1d98-59a0-46fc-98a8-36ee316848488be1c3&amp;sid=95f156dd-eb84-4e70-95ef-3883996ebe1530e886&amp;version=6&amp;preview=false" aria-hidden="true" tabindex="-1" style="border: none !important; margin: 0px !important; padding: 0px !important; width: 1px !important; min-width: 100% !important; overflow: hidden !important; display: block !important; visibility: hidden !important; position: fixed !important; height: 1px !important; pointer-events: none !important; user-select: none !important;"></iframe></body></html>
2023-10-05T13:33:40.204Z
https://huggingface.co/docs/transformers/v4.34.0/en/main_classes/callbacks
The documentation page MAIN\_CLASSES/CALLBACKS doesn’t exist in v4.34.0, but exists on the main version. Click [here](/docs/transformers/main/en/main_classes/callbacks) to redirect to the main version of the documentation.
2023-10-05T13:33:40.230Z
https://huggingface.co/docs/transformers/v4.34.0/en/main_classes/image
The documentation page MAIN\_CLASSES/IMAGE doesn’t exist in v4.34.0, but exists on the main version. Click [here](/docs/transformers/main/en/main_classes/image) to redirect to the main version of the documentation.
2023-10-05T13:33:40.372Z
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/visionencoderdecoder#transformers.VisionEncoderDecoderModel.forward.pixel_values
The documentation page MODEL\_DOC/VISIONENCODERDECODER doesn’t exist in v4.34.0, but exists on the main version. Click [here](/docs/transformers/main/en/model_doc/visionencoderdecoder) to redirect to the main version of the documentation.
2023-10-05T13:33:40.586Z
https://huggingface.co/docs/transformers/v4.34.0/en/noteboks/README
The documentation page NOTEBOKS/README doesn’t exist in v4.34.0, but exists on the main version. Click [here](/docs/transformers/main/en/noteboks/README) to redirect to the main version of the documentation.
2023-10-05T13:33:40.806Z
https://huggingface.co/docs/transformers/v4.34.0/en/quantization#bitsandbytes-integration
The documentation page QUANTIZATION doesn’t exist in v4.34.0, but exists on the main version. Click [here](/docs/transformers/main/en/quantization) to redirect to the main version of the documentation.
2023-10-05T13:33:41.011Z
Text generation strategies
https://huggingface.co/docs/transformers/v4.34.0/en/generation_strategies
# Text generation strategies

Text generation is essential to many NLP tasks, such as open-ended text generation, summarization, translation, and more. It also plays a role in a variety of mixed-modality applications that have text as an output like speech-to-text and vision-to-text. Some of the models that can generate text include GPT2, XLNet, OpenAI GPT, CTRL, TransformerXL, XLM, Bart, T5, GIT, Whisper.

Check out a few examples that use the [generate()](/docs/transformers/v4.34.0/en/main_classes/text_generation#transformers.GenerationMixin.generate) method to produce text outputs for different tasks:

- [Text summarization](./tasks/summarization#inference)
- [Image captioning](./model_doc/git#transformers.GitForCausalLM.forward.example)
- [Audio transcription](./model_doc/whisper#transformers.WhisperForConditionalGeneration.forward.example)

Note that the inputs to the generate method depend on the model’s modality. They are returned by the model’s preprocessor class, such as AutoTokenizer or AutoProcessor. If a model’s preprocessor creates more than one kind of input, pass all the inputs to generate(). You can learn more about the individual model’s preprocessor in the corresponding model’s documentation.

The process of selecting output tokens to generate text is known as decoding, and you can customize the decoding strategy that the `generate()` method will use. Modifying a decoding strategy does not change the values of any trainable parameters. However, it can have a noticeable impact on the quality of the generated output. It can help reduce repetition in the text and make it more coherent.

This guide describes:

- default generation configuration
- common decoding strategies and their main parameters
- saving and sharing custom generation configurations with your fine-tuned model on 🤗 Hub

## Default text generation configuration

A decoding strategy for a model is defined in its generation configuration. When using pre-trained models for inference within a [pipeline()](/docs/transformers/v4.34.0/en/main_classes/pipelines#transformers.pipeline), the models call the `PreTrainedModel.generate()` method that applies a default generation configuration under the hood. The default configuration is also used when no custom configuration has been saved with the model.

When you load a model explicitly, you can inspect the generation configuration that comes with it through `model.generation_config`:

```
>>> from transformers import AutoModelForCausalLM

>>> model = AutoModelForCausalLM.from_pretrained("distilgpt2")
>>> model.generation_config
GenerationConfig {
  "bos_token_id": 50256,
  "eos_token_id": 50256,
}
```

Printing out the `model.generation_config` reveals only the values that are different from the default generation configuration, and does not list any of the default values.

The default generation configuration limits the size of the output combined with the input prompt to a maximum of 20 tokens to avoid running into resource limitations. The default decoding strategy is greedy search, which is the simplest decoding strategy that picks a token with the highest probability as the next token. For many tasks and small output sizes this works well. However, when used to generate longer outputs, greedy search can start producing highly repetitive results.
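To see the library-wide default values themselves (rather than only the overrides stored with a particular checkpoint), you can instantiate a bare [GenerationConfig](/docs/transformers/v4.34.0/en/main_classes/text_generation#transformers.GenerationConfig) and inspect its attributes. The snippet below is a minimal sketch of this idea; the values shown correspond to the defaults described above (a 20-token limit and greedy search) but may differ in other library versions:

```
>>> from transformers import GenerationConfig

>>> default_config = GenerationConfig()
>>> default_config.max_length, default_config.num_beams, default_config.do_sample
(20, 1, False)
```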
## Customize text generation

You can override any `generation_config` by passing the parameters and their values directly to the `generate` method:

```
>>> my_model.generate(**inputs, num_beams=4, do_sample=True)
```

Even if the default decoding strategy mostly works for your task, you can still tweak a few things. Some of the commonly adjusted parameters include:

- `max_new_tokens`: the maximum number of tokens to generate. In other words, the size of the output sequence, not including the tokens in the prompt. As an alternative to using the output’s length as a stopping criterion, you can choose to stop generation whenever the full generation exceeds some amount of time. To learn more, check [StoppingCriteria](/docs/transformers/v4.34.0/en/internal/generation_utils#transformers.StoppingCriteria).
- `num_beams`: by specifying a number of beams higher than 1, you are effectively switching from greedy search to beam search. This strategy evaluates several hypotheses at each time step and eventually chooses the hypothesis that has the overall highest probability for the entire sequence. This has the advantage of identifying high-probability sequences that start with lower-probability initial tokens and would’ve been ignored by the greedy search.
- `do_sample`: if set to `True`, this parameter enables decoding strategies such as multinomial sampling, beam-search multinomial sampling, Top-K sampling and Top-p sampling. All these strategies select the next token from the probability distribution over the entire vocabulary with various strategy-specific adjustments.
- `num_return_sequences`: the number of sequence candidates to return for each input. This option is only available for the decoding strategies that support multiple sequence candidates, e.g. variations of beam search and sampling. Decoding strategies like greedy search and contrastive search return a single output sequence.
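As a rough illustration of how these parameters combine, the sketch below samples three candidate continuations of a short prompt with the small `gpt2` checkpoint (an arbitrary choice for this example). Because sampling is enabled, the outputs will differ from run to run unless you fix a seed:

```
>>> from transformers import AutoModelForCausalLM, AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("gpt2")
>>> inputs = tokenizer("The best thing about open source is", return_tensors="pt")

>>> # sample 3 candidate continuations, each at most 30 new tokens long
>>> outputs = model.generate(**inputs, do_sample=True, max_new_tokens=30, num_return_sequences=3)
>>> for sequence in tokenizer.batch_decode(outputs, skip_special_tokens=True):
...     print(sequence)
```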
## Save a custom decoding strategy with your model

If you would like to share your fine-tuned model with a specific generation configuration, you can:

- Create a [GenerationConfig](/docs/transformers/v4.34.0/en/main_classes/text_generation#transformers.GenerationConfig) class instance
- Specify the decoding strategy parameters
- Save your generation configuration with [GenerationConfig.save\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/text_generation#transformers.GenerationConfig.save_pretrained), making sure to leave its `config_file_name` argument empty
- Set `push_to_hub` to `True` to upload your config to the model’s repo

```
>>> from transformers import AutoModelForCausalLM, GenerationConfig

>>> model = AutoModelForCausalLM.from_pretrained("my_account/my_model")
>>> generation_config = GenerationConfig(
...     max_new_tokens=50, do_sample=True, top_k=50, eos_token_id=model.config.eos_token_id
... )
>>> generation_config.save_pretrained("my_account/my_model", push_to_hub=True)
```

You can also store several generation configurations in a single directory, making use of the `config_file_name` argument in [GenerationConfig.save\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/text_generation#transformers.GenerationConfig.save_pretrained). You can later instantiate them with [GenerationConfig.from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/text_generation#transformers.GenerationConfig.from_pretrained). This is useful if you want to store several generation configurations for a single model (e.g. one for creative text generation with sampling, and one for summarization with beam search). You must have the right Hub permissions to add configuration files to a model.

```
>>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, GenerationConfig

>>> tokenizer = AutoTokenizer.from_pretrained("t5-small")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
>>> translation_generation_config = GenerationConfig(
...     num_beams=4,
...     early_stopping=True,
...     decoder_start_token_id=0,
...     eos_token_id=model.config.eos_token_id,
...     pad_token=model.config.pad_token_id,
... )

>>> translation_generation_config.save_pretrained("/tmp", "translation_generation_config.json")

>>> generation_config = GenerationConfig.from_pretrained("/tmp", "translation_generation_config.json")
>>> inputs = tokenizer("translate English to French: Configuration files are easy to use!", return_tensors="pt")
>>> outputs = model.generate(**inputs, generation_config=generation_config)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
['Les fichiers de configuration sont faciles à utiliser!']
```

## Streaming

The `generate()` method supports streaming through its `streamer` input. The `streamer` input is compatible with any instance from a class that has the following methods: `put()` and `end()`. Internally, `put()` is used to push new tokens and `end()` is used to flag the end of text generation.

The API for the streamer classes is still under development and may change in the future.

In practice, you can craft your own streaming class for all sorts of purposes! We also have basic streaming classes ready for you to use. For example, you can use the [TextStreamer](/docs/transformers/v4.34.0/en/internal/generation_utils#transformers.TextStreamer) class to stream the output of `generate()` into your screen, one word at a time:

```
>>> from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

>>> tok = AutoTokenizer.from_pretrained("gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("gpt2")
>>> inputs = tok(["An increasing sequence: one,"], return_tensors="pt")
>>> streamer = TextStreamer(tok)

>>> _ = model.generate(**inputs, streamer=streamer, max_new_tokens=20)
An increasing sequence: one, two, three, four, five, six, seven, eight, nine, ten, eleven,
```
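Because the `streamer` argument only needs an object exposing `put()` and `end()`, you can also pass an instance of your own class. The example below is a minimal, illustrative sketch (the `ChunkCounter` class is not part of the library) that simply counts the chunks of token ids pushed during generation and prints a message when generation ends:

```
>>> from transformers import AutoModelForCausalLM, AutoTokenizer

>>> class ChunkCounter:
...     """Toy streamer: counts the chunks of token ids pushed by `generate()`."""
...     def __init__(self):
...         self.num_chunks = 0
...     def put(self, value):
...         # with greedy search, called with the prompt ids first, then once per new token
...         self.num_chunks += 1
...     def end(self):
...         print("generation finished")

>>> tok = AutoTokenizer.from_pretrained("gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("gpt2")
>>> inputs = tok(["An increasing sequence: one,"], return_tensors="pt")
>>> _ = model.generate(**inputs, streamer=ChunkCounter(), max_new_tokens=20)
generation finished
```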
## Decoding strategies

Certain combinations of the `generate()` parameters, and ultimately `generation_config`, can be used to enable specific decoding strategies. If you are new to this concept, we recommend reading [this blog post that illustrates how common decoding strategies work](https://huggingface.co/blog/how-to-generate).

Here, we’ll show some of the parameters that control the decoding strategies and illustrate how you can use them.

### Greedy Search

`generate` uses greedy search decoding by default so you don’t have to pass any parameters to enable it. This means the parameter `num_beams` is set to 1 and `do_sample=False`.

```
>>> from transformers import AutoModelForCausalLM, AutoTokenizer

>>> prompt = "I look forward to"
>>> checkpoint = "distilgpt2"

>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> inputs = tokenizer(prompt, return_tensors="pt")

>>> model = AutoModelForCausalLM.from_pretrained(checkpoint)
>>> outputs = model.generate(**inputs)
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
['I look forward to seeing you all again!\n\n\n\n\n\n\n\n\n\n\n']
```

### Contrastive search

The contrastive search decoding strategy was proposed in the 2022 paper [A Contrastive Framework for Neural Text Generation](https://arxiv.org/abs/2202.06417). It demonstrates superior results for generating non-repetitive yet coherent long outputs. To learn how contrastive search works, check out [this blog post](https://huggingface.co/blog/introducing-csearch). The two main parameters that enable and control the behavior of contrastive search are `penalty_alpha` and `top_k`:

```
>>> from transformers import AutoTokenizer, AutoModelForCausalLM

>>> checkpoint = "gpt2-large"
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> model = AutoModelForCausalLM.from_pretrained(checkpoint)

>>> prompt = "Hugging Face Company is"
>>> inputs = tokenizer(prompt, return_tensors="pt")

>>> outputs = model.generate(**inputs, penalty_alpha=0.6, top_k=4, max_new_tokens=100)
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
['Hugging Face Company is a family owned and operated business. We pride ourselves on being the best in the business and our customer service is second to none.\n\nIf you have any questions about our products or services, feel free to contact us at any time. We look forward to hearing from you!']
```

### Multinomial sampling

As opposed to greedy search that always chooses a token with the highest probability as the next token, multinomial sampling (also called ancestral sampling) randomly selects the next token based on the probability distribution over the entire vocabulary given by the model. Every token with a non-zero probability has a chance of being selected, thus reducing the risk of repetition.

To enable multinomial sampling set `do_sample=True` and `num_beams=1`.

```
>>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed
>>> set_seed(0)

>>> checkpoint = "gpt2-large"
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> model = AutoModelForCausalLM.from_pretrained(checkpoint)

>>> prompt = "Today was an amazing day because"
>>> inputs = tokenizer(prompt, return_tensors="pt")

>>> outputs = model.generate(**inputs, do_sample=True, num_beams=1, max_new_tokens=100)
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
['Today was an amazing day because when you go to the World Cup and you don\'t, or when you don\'t get invited, that\'s a terrible feeling."']
```
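Multinomial sampling is often combined with Top-K and/or Top-p (nucleus) filtering: `top_k` restricts sampling to the K most likely tokens, while `top_p` keeps only the smallest set of tokens whose cumulative probability reaches p. Both are regular `generate()` arguments. The sketch below is purely illustrative (the parameter values are arbitrary), and its output will vary from run to run because sampling is enabled:

```
>>> from transformers import AutoTokenizer, AutoModelForCausalLM

>>> checkpoint = "gpt2-large"
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> model = AutoModelForCausalLM.from_pretrained(checkpoint)

>>> inputs = tokenizer("Today was an amazing day because", return_tensors="pt")

>>> # restrict sampling to the 50 most likely tokens, further filtered by nucleus (top-p) sampling
>>> outputs = model.generate(**inputs, do_sample=True, top_k=50, top_p=0.95, max_new_tokens=40)
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
```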
### Beam-search decoding

Unlike greedy search, beam-search decoding keeps several hypotheses at each time step and eventually chooses the hypothesis that has the overall highest probability for the entire sequence. This has the advantage of identifying high-probability sequences that start with lower-probability initial tokens and would’ve been ignored by the greedy search.

To enable this decoding strategy, specify `num_beams` (aka the number of hypotheses to keep track of) with a value greater than 1.

```
>>> from transformers import AutoModelForCausalLM, AutoTokenizer

>>> prompt = "It is astonishing how one can"
>>> checkpoint = "gpt2-medium"

>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> inputs = tokenizer(prompt, return_tensors="pt")

>>> model = AutoModelForCausalLM.from_pretrained(checkpoint)
>>> outputs = model.generate(**inputs, num_beams=5, max_new_tokens=50)
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
['It is astonishing how one can have such a profound impact on the lives of so many people in such a short period of time."\n\nHe added: "I am very proud of the work I have been able to do in the last few years.\n\n"I have']
```

### Beam-search multinomial sampling

As the name implies, this decoding strategy combines beam search with multinomial sampling. You need to specify `num_beams` greater than 1, and set `do_sample=True` to use this decoding strategy.

```
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, set_seed
>>> set_seed(0)

>>> prompt = "translate English to German: The house is wonderful."
>>> checkpoint = "t5-small"

>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> inputs = tokenizer(prompt, return_tensors="pt")

>>> model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
>>> outputs = model.generate(**inputs, num_beams=5, do_sample=True)
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
'Das Haus ist wunderbar.'
```

### Diverse beam search decoding

The diverse beam search decoding strategy is an extension of the beam search strategy that allows for generating a more diverse set of beam sequences to choose from. To learn how it works, refer to [Diverse Beam Search: Decoding Diverse Solutions from Neural Sequence Models](https://arxiv.org/pdf/1610.02424.pdf). This approach has three main parameters: `num_beams`, `num_beam_groups`, and `diversity_penalty`. The diversity penalty ensures the outputs are distinct across groups, and beam search is used within each group.

```
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

>>> checkpoint = "google/pegasus-xsum"
>>> prompt = (
...     "The Permaculture Design Principles are a set of universal design principles "
...     "that can be applied to any location, climate and culture, and they allow us to design "
...     "the most efficient and sustainable human habitation and food production systems. "
...     "Permaculture is a design system that encompasses a wide variety of disciplines, such "
...     "as ecology, landscape design, environmental science and energy conservation, and the "
...     "Permaculture design principles are drawn from these various disciplines. Each individual "
...     "design principle itself embodies a complete conceptual framework based on sound "
...     "scientific principles. When we bring all these separate principles together, we can "
...     "create a design system that both looks at whole systems, the parts that these systems "
...     "consist of, and how those parts interact with each other to create a complex, dynamic, "
...     "living system. Each design principle serves as a tool that allows us to integrate all "
...     "the separate parts of a design, referred to as elements, into a functional, synergistic, "
...     "whole system, where the elements harmoniously interact and work together in the most "
...     "efficient way possible."
... )

>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> inputs = tokenizer(prompt, return_tensors="pt")

>>> model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
>>> outputs = model.generate(**inputs, num_beams=5, num_beam_groups=5, max_new_tokens=30, diversity_penalty=1.0)
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
'The Design Principles are a set of universal design principles that can be applied to any location, climate and culture, and they allow us to design the'
```

This guide illustrates the main parameters that enable various decoding strategies. More advanced parameters exist for the `generate` method, giving you even further control over its behavior. For the complete list of the available parameters, refer to the [API documentation](./main_classes/text_generation.md).

### Assisted Decoding

Assisted decoding is a modification of the decoding strategies above that uses an assistant model with the same tokenizer (ideally a much smaller model) to greedily generate a few candidate tokens. The main model then validates the candidate tokens in a single forward pass, which speeds up the decoding process. Currently, only greedy search and sampling are supported with assisted decoding, and it doesn’t support batched inputs. To learn more about assisted decoding, check [this blog post](https://huggingface.co/blog/assisted-generation).

To enable assisted decoding, set the `assistant_model` argument with a model.

```
>>> from transformers import AutoModelForCausalLM, AutoTokenizer

>>> prompt = "Alice and Bob"
>>> checkpoint = "EleutherAI/pythia-1.4b-deduped"
>>> assistant_checkpoint = "EleutherAI/pythia-160m-deduped"

>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> inputs = tokenizer(prompt, return_tensors="pt")

>>> model = AutoModelForCausalLM.from_pretrained(checkpoint)
>>> assistant_model = AutoModelForCausalLM.from_pretrained(assistant_checkpoint)
>>> outputs = model.generate(**inputs, assistant_model=assistant_model)
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
['Alice and Bob are sitting in a bar. Alice is drinking a beer and Bob is drinking a']
```

When using assisted decoding with sampling methods, you can use the `temperature` argument to control the randomness just like in multinomial sampling. However, in assisted decoding, reducing the temperature will help improve latency.

```
>>> from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
>>> set_seed(42)

>>> prompt = "Alice and Bob"
>>> checkpoint = "EleutherAI/pythia-1.4b-deduped"
>>> assistant_checkpoint = "EleutherAI/pythia-160m-deduped"

>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> inputs = tokenizer(prompt, return_tensors="pt")

>>> model = AutoModelForCausalLM.from_pretrained(checkpoint)
>>> assistant_model = AutoModelForCausalLM.from_pretrained(assistant_checkpoint)
>>> outputs = model.generate(**inputs, assistant_model=assistant_model, do_sample=True, temperature=0.5)
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
['Alice and Bob are going to the same party. It is a small party, in a small']
```
Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;fast_tokenizers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/fast_tokenizers&quot;},{&quot;title&quot;:&quot;Run inference with multilingual models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;multilingual&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/multilingual&quot;},{&quot;title&quot;:&quot;Use model-specific APIs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;create_a_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/create_a_model&quot;},{&quot;title&quot;:&quot;Share a custom model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;custom_models&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/custom_models&quot;},{&quot;title&quot;:&quot;Templates for chat models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;chat_templating&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/chat_templating&quot;},{&quot;title&quot;:&quot;Run training on Amazon SageMaker&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;sagemaker&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/sagemaker&quot;},{&quot;title&quot;:&quot;Export to ONNX&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;serialization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/serialization&quot;},{&quot;title&quot;:&quot;Export to TFLite&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tflite&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tflite&quot;},{&quot;title&quot;:&quot;Export to TorchScript&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;torchscript&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/torchscript&quot;},{&quot;title&quot;:&quot;Benchmarks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;benchmarks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/benchmarks&quot;},{&quot;title&quot;:&quot;Notebooks with examples&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;notebooks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/notebooks&quot;},{&quot;title&quot;:&quot;Community resources&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;community&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/community&quot;},{&quot;title&quot;:&quot;Custom Tools and Prompts&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;custom_tools&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/custom_tools&quot;},{&quot;title&quot;:&quot;Troubleshoot&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;troubleshooting&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/troubleshooting&quot;}]},{&quot;title&quot;:&quot;Performance and scalability&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Overview&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;performance&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/performance&quot;},{&quot;title&quot;:&quot;Efficient training techniques&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Methods and tools for efficient training on a single GPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_gpu_one&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_gpu_one&quot;},{&quot;title&quot;:&quot;Multiple GPUs and parallelism&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_gpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_gpu_many&quot;},{&quot;title&quot;:&quot;Efficient 
training on CPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_cpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_cpu&quot;},{&quot;title&quot;:&quot;Distributed CPU training&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_cpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_cpu_many&quot;},{&quot;title&quot;:&quot;Training on TPUs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_tpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_tpu&quot;},{&quot;title&quot;:&quot;Training on TPU with TensorFlow&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_tpu_tf&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_tpu_tf&quot;},{&quot;title&quot;:&quot;Training on Specialized Hardware&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_special&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_special&quot;},{&quot;title&quot;:&quot;Custom hardware for training&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_hardware&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_hardware&quot;},{&quot;title&quot;:&quot;Hyperparameter Search using Trainer API&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;hpo_train&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/hpo_train&quot;}]},{&quot;title&quot;:&quot;Optimizing inference&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Inference on CPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_cpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_cpu&quot;},{&quot;title&quot;:&quot;Inference on one GPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_gpu_one&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_gpu_one&quot;},{&quot;title&quot;:&quot;Inference on many GPUs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_gpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_gpu_many&quot;},{&quot;title&quot;:&quot;Inference on Specialized Hardware&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_special&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_special&quot;}]},{&quot;title&quot;:&quot;Instantiating a big model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;big_models&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/big_models&quot;},{&quot;title&quot;:&quot;Troubleshooting&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;debugging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/debugging&quot;},{&quot;title&quot;:&quot;XLA Integration for TensorFlow Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tf_xla&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tf_xla&quot;},{&quot;title&quot;:&quot;Optimize inference using `torch.compile()`&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_torch_compile&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_torch_compile&quot;}]},{&quot;title&quot;:&quot;Contribute&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;How to contribute to transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;contributing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/contributing&quot;},{&quot;title&quot;:&quot;How to add a model to 🤗 
Transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_new_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_new_model&quot;},{&quot;title&quot;:&quot;How to convert a 🤗 Transformers model to TensorFlow?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_tensorflow_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_tensorflow_model&quot;},{&quot;title&quot;:&quot;How to add a pipeline to 🤗 Transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_new_pipeline&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_new_pipeline&quot;},{&quot;title&quot;:&quot;Testing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;testing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/testing&quot;},{&quot;title&quot;:&quot;Checks on a Pull Request&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pr_checks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pr_checks&quot;}]},{&quot;title&quot;:&quot;Conceptual guides&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Philosophy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;philosophy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/philosophy&quot;},{&quot;title&quot;:&quot;Glossary&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;glossary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/glossary&quot;},{&quot;title&quot;:&quot;What 🤗 Transformers can do&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;task_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/task_summary&quot;},{&quot;title&quot;:&quot;How 🤗 Transformers solve tasks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tasks_explained&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks_explained&quot;},{&quot;title&quot;:&quot;The Transformer model family&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_summary&quot;},{&quot;title&quot;:&quot;Summary of the tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tokenizer_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tokenizer_summary&quot;},{&quot;title&quot;:&quot;Attention mechanisms&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;attention&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/attention&quot;},{&quot;title&quot;:&quot;Padding and truncation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pad_truncation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pad_truncation&quot;},{&quot;title&quot;:&quot;BERTology&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;bertology&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/bertology&quot;},{&quot;title&quot;:&quot;Perplexity of fixed-length models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perplexity&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perplexity&quot;},{&quot;title&quot;:&quot;Pipelines for webserver inference&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pipeline_webserver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pipeline_webserver&quot;},{&quot;title&quot;:&quot;Model training anatomy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_memory_anatomy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_memory_anatomy&quot;}]},{&quot;title&quot;:&quot;API&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Main 
Classes&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Agents and Tools&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/agent&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/agent&quot;},{&quot;title&quot;:&quot;Auto Classes&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/auto&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/auto&quot;},{&quot;title&quot;:&quot;Callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/callback&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/callback&quot;},{&quot;title&quot;:&quot;Configuration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/configuration&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/configuration&quot;},{&quot;title&quot;:&quot;Data Collator&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/data_collator&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/data_collator&quot;},{&quot;title&quot;:&quot;Keras callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/keras_callbacks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/keras_callbacks&quot;},{&quot;title&quot;:&quot;Logging&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/logging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/logging&quot;},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/model&quot;},{&quot;title&quot;:&quot;Text Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/text_generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/text_generation&quot;},{&quot;title&quot;:&quot;ONNX&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/onnx&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/onnx&quot;},{&quot;title&quot;:&quot;Optimization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/optimizer_schedules&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/optimizer_schedules&quot;},{&quot;title&quot;:&quot;Model outputs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/output&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/output&quot;},{&quot;title&quot;:&quot;Pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/pipelines&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/pipelines&quot;},{&quot;title&quot;:&quot;Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/processors&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/processors&quot;},{&quot;title&quot;:&quot;Quantization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/quantization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/quantization&quot;},{&quot;title&quot;:&quot;Tokenizer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/tokenizer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/tokenizer&quot;},{&quot;title&quot;:&quot;Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/trainer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/trainer&quot;},{&quot;title&quot;:&quot;DeepSpeed 
Integration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/deepspeed&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/deepspeed&quot;},{&quot;title&quot;:&quot;Feature Extractor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/feature_extractor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/feature_extractor&quot;},{&quot;title&quot;:&quot;Image Processor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/image_processor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/image_processor&quot;}]},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Text models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALBERT&quot;,&quot;id&quot;:&quot;model_doc/albert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/albert&quot;},{&quot;title&quot;:&quot;BART&quot;,&quot;id&quot;:&quot;model_doc/bart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bart&quot;},{&quot;title&quot;:&quot;BARThez&quot;,&quot;id&quot;:&quot;model_doc/barthez&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/barthez&quot;},{&quot;title&quot;:&quot;BARTpho&quot;,&quot;id&quot;:&quot;model_doc/bartpho&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bartpho&quot;},{&quot;title&quot;:&quot;BERT&quot;,&quot;id&quot;:&quot;model_doc/bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert&quot;},{&quot;title&quot;:&quot;BertGeneration&quot;,&quot;id&quot;:&quot;model_doc/bert-generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-generation&quot;},{&quot;title&quot;:&quot;BertJapanese&quot;,&quot;id&quot;:&quot;model_doc/bert-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-japanese&quot;},{&quot;title&quot;:&quot;Bertweet&quot;,&quot;id&quot;:&quot;model_doc/bertweet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bertweet&quot;},{&quot;title&quot;:&quot;BigBird&quot;,&quot;id&quot;:&quot;model_doc/big_bird&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/big_bird&quot;},{&quot;title&quot;:&quot;BigBirdPegasus&quot;,&quot;id&quot;:&quot;model_doc/bigbird_pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bigbird_pegasus&quot;},{&quot;title&quot;:&quot;BioGpt&quot;,&quot;id&quot;:&quot;model_doc/biogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/biogpt&quot;},{&quot;title&quot;:&quot;Blenderbot&quot;,&quot;id&quot;:&quot;model_doc/blenderbot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot&quot;},{&quot;title&quot;:&quot;Blenderbot 
Small&quot;,&quot;id&quot;:&quot;model_doc/blenderbot-small&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot-small&quot;},{&quot;title&quot;:&quot;BLOOM&quot;,&quot;id&quot;:&quot;model_doc/bloom&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bloom&quot;},{&quot;title&quot;:&quot;BORT&quot;,&quot;id&quot;:&quot;model_doc/bort&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bort&quot;},{&quot;title&quot;:&quot;ByT5&quot;,&quot;id&quot;:&quot;model_doc/byt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/byt5&quot;},{&quot;title&quot;:&quot;CamemBERT&quot;,&quot;id&quot;:&quot;model_doc/camembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/camembert&quot;},{&quot;title&quot;:&quot;CANINE&quot;,&quot;id&quot;:&quot;model_doc/canine&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/canine&quot;},{&quot;title&quot;:&quot;CodeGen&quot;,&quot;id&quot;:&quot;model_doc/codegen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/codegen&quot;},{&quot;title&quot;:&quot;CodeLlama&quot;,&quot;id&quot;:&quot;model_doc/code_llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/code_llama&quot;},{&quot;title&quot;:&quot;ConvBERT&quot;,&quot;id&quot;:&quot;model_doc/convbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convbert&quot;},{&quot;title&quot;:&quot;CPM&quot;,&quot;id&quot;:&quot;model_doc/cpm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpm&quot;},{&quot;title&quot;:&quot;CPMANT&quot;,&quot;id&quot;:&quot;model_doc/cpmant&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpmant&quot;},{&quot;title&quot;:&quot;CTRL&quot;,&quot;id&quot;:&quot;model_doc/ctrl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ctrl&quot;},{&quot;title&quot;:&quot;DeBERTa&quot;,&quot;id&quot;:&quot;model_doc/deberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta&quot;},{&quot;title&quot;:&quot;DeBERTa-v2&quot;,&quot;id&quot;:&quot;model_doc/deberta-v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta-v2&quot;},{&quot;title&quot;:&quot;DialoGPT&quot;,&quot;id&quot;:&quot;model_doc/dialogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dialogpt&quot;},{&quot;title&quot;:&quot;DistilBERT&quot;,&quot;id&quot;:&quot;model_doc/distilbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/distilbert&quot;},{&quot;title&quot;:&quot;DPR&quot;,&quot;id&quot;:&quot;model_doc/dpr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpr&quot;},{&quot;title&quot;:&quot;ELECTRA&quot;,&quot;id&quot;:&quot;model_doc/electra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/electra&quot;},{&quot;title&quot;:&quot;Encoder Decoder 
Models&quot;,&quot;id&quot;:&quot;model_doc/encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encoder-decoder&quot;},{&quot;title&quot;:&quot;ERNIE&quot;,&quot;id&quot;:&quot;model_doc/ernie&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie&quot;},{&quot;title&quot;:&quot;ErnieM&quot;,&quot;id&quot;:&quot;model_doc/ernie_m&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie_m&quot;},{&quot;title&quot;:&quot;ESM&quot;,&quot;id&quot;:&quot;model_doc/esm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/esm&quot;},{&quot;title&quot;:&quot;Falcon&quot;,&quot;id&quot;:&quot;model_doc/falcon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/falcon&quot;},{&quot;title&quot;:&quot;FLAN-T5&quot;,&quot;id&quot;:&quot;model_doc/flan-t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-t5&quot;},{&quot;title&quot;:&quot;FLAN-UL2&quot;,&quot;id&quot;:&quot;model_doc/flan-ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-ul2&quot;},{&quot;title&quot;:&quot;FlauBERT&quot;,&quot;id&quot;:&quot;model_doc/flaubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flaubert&quot;},{&quot;title&quot;:&quot;FNet&quot;,&quot;id&quot;:&quot;model_doc/fnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fnet&quot;},{&quot;title&quot;:&quot;FSMT&quot;,&quot;id&quot;:&quot;model_doc/fsmt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fsmt&quot;},{&quot;title&quot;:&quot;Funnel Transformer&quot;,&quot;id&quot;:&quot;model_doc/funnel&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/funnel&quot;},{&quot;title&quot;:&quot;GPT&quot;,&quot;id&quot;:&quot;model_doc/openai-gpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/openai-gpt&quot;},{&quot;title&quot;:&quot;GPT Neo&quot;,&quot;id&quot;:&quot;model_doc/gpt_neo&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neo&quot;},{&quot;title&quot;:&quot;GPT NeoX&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox&quot;},{&quot;title&quot;:&quot;GPT NeoX Japanese&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox_japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese&quot;},{&quot;title&quot;:&quot;GPT-J&quot;,&quot;id&quot;:&quot;model_doc/gptj&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptj&quot;},{&quot;title&quot;:&quot;GPT2&quot;,&quot;id&quot;:&quot;model_doc/gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt2&quot;},{&quot;title&quot;:&quot;GPTBigCode&quot;,&quot;id&quot;:&quot;model_doc/gpt_bigcode&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode&quot;},{&quot;title&quot;:&quot;GPTSAN 
Japanese&quot;,&quot;id&quot;:&quot;model_doc/gptsan-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese&quot;},{&quot;title&quot;:&quot;GPTSw3&quot;,&quot;id&quot;:&quot;model_doc/gpt-sw3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt-sw3&quot;},{&quot;title&quot;:&quot;HerBERT&quot;,&quot;id&quot;:&quot;model_doc/herbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/herbert&quot;},{&quot;title&quot;:&quot;I-BERT&quot;,&quot;id&quot;:&quot;model_doc/ibert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ibert&quot;},{&quot;title&quot;:&quot;Jukebox&quot;,&quot;id&quot;:&quot;model_doc/jukebox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/jukebox&quot;},{&quot;title&quot;:&quot;LED&quot;,&quot;id&quot;:&quot;model_doc/led&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/led&quot;},{&quot;title&quot;:&quot;LLaMA&quot;,&quot;id&quot;:&quot;model_doc/llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama&quot;},{&quot;title&quot;:&quot;Llama2&quot;,&quot;id&quot;:&quot;model_doc/llama2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama2&quot;},{&quot;title&quot;:&quot;Longformer&quot;,&quot;id&quot;:&quot;model_doc/longformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longformer&quot;},{&quot;title&quot;:&quot;LongT5&quot;,&quot;id&quot;:&quot;model_doc/longt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longt5&quot;},{&quot;title&quot;:&quot;LUKE&quot;,&quot;id&quot;:&quot;model_doc/luke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/luke&quot;},{&quot;title&quot;:&quot;M2M100&quot;,&quot;id&quot;:&quot;model_doc/m2m_100&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/m2m_100&quot;},{&quot;title&quot;:&quot;MarianMT&quot;,&quot;id&quot;:&quot;model_doc/marian&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/marian&quot;},{&quot;title&quot;:&quot;MarkupLM&quot;,&quot;id&quot;:&quot;model_doc/markuplm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/markuplm&quot;},{&quot;title&quot;:&quot;MBart and 
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot;PLBart&quot;,&quot;id&quot;
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/
<span data-svelte-h="svelte-1f4blit">Text generation strategies</span></h1> <p data-svelte-h="svelte-1pq6r4w">Text generation is essential to many NLP tasks, such as open-ended text generation, summarization, translation, and more. It also plays a role in a variety of mixed-modality applications that have text as an output like speech-to-text and vision-to-text. Some of the models that can generate text include GPT2, XLNet, OpenAI GPT, CTRL, TransformerXL, XLM, Bart, T5, GIT, Whisper.</p> <p data-svelte-h="svelte-p5c5ow">Check out a few examples that use <a href="/docs/transformers/v4.34.0/en/main_classes/text_generation#transformers.GenerationMixin.generate">generate()</a> method to produce text outputs for different tasks:</p> <ul data-svelte-h="svelte-18jzu0"><li><a href="./tasks/summarization#inference">Text summarization</a></li> <li><a href="./model_doc/git#transformers.GitForCausalLM.forward.example">Image captioning</a></li> <li><a href="./model_doc/whisper#transformers.WhisperForConditionalGeneration.forward.example">Audio transcription</a></li></ul> <p data-svelte-h="svelte-5iqkcx">Note that the inputs to the generate method depend on the model’s modality. They are returned by the model’s preprocessor class, such as AutoTokenizer or AutoProcessor. If a model’s preprocessor creates more than one kind of input, pass all the inputs to generate(). You can learn more about the individual model’s preprocessor in the corresponding model’s documentation.</p> <p data-svelte-h="svelte-agd87v">The process of selecting output tokens to generate text is known as decoding, and you can customize the decoding strategy that the <code>generate()</code> method will use. Modifying a decoding strategy does not change the values of any trainable parameters. However, it can have a noticeable impact on the quality of the generated output. It can help reduce repetition in the text and make it more coherent.</p> <p data-svelte-h="svelte-1gun7m8">This guide describes:</p> <ul data-svelte-h="svelte-l1azua"><li>default generation configuration</li> <li>common decoding strategies and their main parameters</li> <li>saving and sharing custom generation configurations with your fine-tuned model on 🤗 Hub</li></ul> <h2 class="relative group"><a id="default-text-generation-configuration" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#default-text-generation-configuration"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-thv2tr">Default text generation configuration</span></h2> <p data-svelte-h="svelte-17hu36s">A decoding strategy for a model is defined in its generation configuration. 
When using pre-trained models for inference within a <a href="/docs/transformers/v4.34.0/en/main_classes/pipelines#transformers.pipeline">pipeline()</a>, the models call the <code>PreTrainedModel.generate()</code> method that applies a default generation configuration under the hood. The default configuration is also used when no custom configuration has been saved with the model.</p> <p data-svelte-h="svelte-2o7gdz">When you load a model explicitly, you can inspect the generation configuration that comes with it through <code>model.generation_config</code>:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoModelForCausalLM <span class="hljs-meta">&gt;&gt;&gt; </span>model = AutoModelForCausalLM.from_pretrained(<span class="hljs-string">"distilgpt2"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model.generation_config GenerationConfig { <span class="hljs-string">"bos_token_id"</span>: <span class="hljs-number">50256</span>, <span class="hljs-string">"eos_token_id"</span>: <span class="hljs-number">50256</span>, }<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-18srzjs">Printing out the <code>model.generation_config</code> reveals only the values that are different from the default generation configuration, and does not list any of the default values.</p> <p data-svelte-h="svelte-32rftl">The default generation configuration limits the size of the output combined with the input prompt to a maximum of 20 tokens to avoid running into resource limitations. The default decoding strategy is greedy search, which is the simplest decoding strategy that picks a token with the highest probability as the next token. For many tasks and small output sizes this works well. 
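If you are curious about the library-wide defaults themselves, you can instantiate a bare [GenerationConfig](/docs/transformers/v4.34.0/en/main_classes/text_generation#transformers.GenerationConfig) and inspect it. This short check is not part of the original guide, but it illustrates the 20-token limit and the greedy defaults described above:

```python
>>> from transformers import GenerationConfig

>>> default_config = GenerationConfig()
>>> default_config.max_length  # output length, prompt included, is capped at 20 tokens by default
20
>>> default_config.num_beams, default_config.do_sample  # greedy search: a single beam, no sampling
(1, False)
```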
## Customize text generation

You can override any `generation_config` by passing the parameters and their values directly to the `generate` method:

```python
>>> my_model.generate(**inputs, num_beams=4, do_sample=True)
```

Even if the default decoding strategy mostly works for your task, you can still tweak a few things. Some of the commonly adjusted parameters include the following (a short combined example follows the list):

- `max_new_tokens`: the maximum number of tokens to generate. In other words, the size of the output sequence, not including the tokens in the prompt. As an alternative to using the output's length as a stopping criterion, you can choose to stop generation whenever the full generation exceeds some amount of time. To learn more, check [StoppingCriteria](/docs/transformers/v4.34.0/en/internal/generation_utils#transformers.StoppingCriteria).
- `num_beams`: by specifying a number of beams higher than 1, you are effectively switching from greedy search to beam search. This strategy evaluates several hypotheses at each time step and eventually chooses the hypothesis that has the overall highest probability for the entire sequence. This has the advantage of identifying high-probability sequences that start with lower-probability initial tokens and would've been ignored by the greedy search.
- `do_sample`: if set to `True`, this parameter enables decoding strategies such as multinomial sampling, beam-search multinomial sampling, Top-K sampling and Top-p sampling. All these strategies select the next token from the probability distribution over the entire vocabulary with various strategy-specific adjustments.
- `num_return_sequences`: the number of sequence candidates to return for each input. This option is only available for the decoding strategies that support multiple sequence candidates, e.g. variations of beam search and sampling. Decoding strategies like greedy search and contrastive search return a single output sequence.
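To make the list above concrete, here is a minimal sketch that combines `max_new_tokens`, `do_sample`, and `num_return_sequences` in a single call. The checkpoint, prompt, and seed are arbitrary choices made for illustration and are not part of the original guide; because sampling is enabled, the exact outputs will vary.

```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed

>>> set_seed(42)  # sampling is stochastic, so fix a seed for reproducibility

>>> tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
>>> model = AutoModelForCausalLM.from_pretrained("distilgpt2")
>>> inputs = tokenizer("The best thing about open source is", return_tensors="pt")

>>> # Sample up to 30 new tokens and return 3 candidate continuations for the same prompt
>>> outputs = model.generate(**inputs, do_sample=True, max_new_tokens=30, num_return_sequences=3)
>>> for text in tokenizer.batch_decode(outputs, skip_special_tokens=True):
...     print(text)
```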
xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoModelForCausalLM, GenerationConfig <span class="hljs-meta">&gt;&gt;&gt; </span>model = AutoModelForCausalLM.from_pretrained(<span class="hljs-string">"my_account/my_model"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>generation_config = GenerationConfig( <span class="hljs-meta">... </span> max_new_tokens=<span class="hljs-number">50</span>, do_sample=<span class="hljs-literal">True</span>, top_k=<span class="hljs-number">50</span>, eos_token_id=model.config.eos_token_id <span class="hljs-meta">... </span>) <span class="hljs-meta">&gt;&gt;&gt; </span>generation_config.save_pretrained(<span class="hljs-string">"my_account/my_model"</span>, push_to_hub=<span class="hljs-literal">True</span>)<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-k9hg0n">You can also store several generation configurations in a single directory, making use of the <code>config_file_name</code> argument in <a href="/docs/transformers/v4.34.0/en/main_classes/text_generation#transformers.GenerationConfig.save_pretrained">GenerationConfig.save_pretrained()</a>. You can later instantiate them with <a href="/docs/transformers/v4.34.0/en/main_classes/text_generation#transformers.GenerationConfig.from_pretrained">GenerationConfig.from_pretrained()</a>. This is useful if you want to store several generation configurations for a single model (e.g. one for creative text generation with sampling, and one for summarization with beam search). 
You must have the right Hub permissions to add configuration files to a model.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoModelForSeq2SeqLM, AutoTokenizer, GenerationConfig <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"t5-small"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = AutoModelForSeq2SeqLM.from_pretrained(<span class="hljs-string">"t5-small"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>translation_generation_config = GenerationConfig( <span class="hljs-meta">... </span> num_beams=<span class="hljs-number">4</span>, <span class="hljs-meta">... </span> early_stopping=<span class="hljs-literal">True</span>, <span class="hljs-meta">... </span> decoder_start_token_id=<span class="hljs-number">0</span>, <span class="hljs-meta">... </span> eos_token_id=model.config.eos_token_id, <span class="hljs-meta">... </span> pad_token=model.config.pad_token_id, <span class="hljs-meta">... 
</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># Tip: add `push_to_hub=True` to push to the Hub</span> <span class="hljs-meta">&gt;&gt;&gt; </span>translation_generation_config.save_pretrained(<span class="hljs-string">"/tmp"</span>, <span class="hljs-string">"translation_generation_config.json"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># You could then use the named generation config file to parameterize generation</span> <span class="hljs-meta">&gt;&gt;&gt; </span>generation_config = GenerationConfig.from_pretrained(<span class="hljs-string">"/tmp"</span>, <span class="hljs-string">"translation_generation_config.json"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(<span class="hljs-string">"translate English to French: Configuration files are easy to use!"</span>, return_tensors=<span class="hljs-string">"pt"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model.generate(**inputs, generation_config=generation_config) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-built_in">print</span>(tokenizer.batch_decode(outputs, skip_special_tokens=<span class="hljs-literal">True</span>)) [<span class="hljs-string">'Les fichiers de configuration sont faciles à utiliser!'</span>]<!-- HTML_TAG_END --></pre></div> <h2 class="relative group"><a id="streaming" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#streaming"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-r5rcbb">Streaming</span></h2> <p data-svelte-h="svelte-12rh7l9">The <code>generate()</code> supports streaming, through its <code>streamer</code> input. The <code>streamer</code> input is compatible with any instance from a class that has the following methods: <code>put()</code> and <code>end()</code>. Internally, <code>put()</code> is used to push new tokens and <code>end()</code> is used to flag the end of text generation.</p> <div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400"><p data-svelte-h="svelte-gv2g1g">The API for the streamer classes is still under development and may change in the future.</p></div> <p data-svelte-h="svelte-1m9he1z">In practice, you can craft your own streaming class for all sorts of purposes! We also have basic streaming classes ready for you to use. 
In practice, you can craft your own streaming class for all sorts of purposes! We also have basic streaming classes ready for you to use. For example, you can use the [TextStreamer](/docs/transformers/v4.34.0/en/internal/generation_utils#transformers.TextStreamer) class to stream the output of `generate()` to your screen, one word at a time:

```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

>>> tok = AutoTokenizer.from_pretrained("gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("gpt2")
>>> inputs = tok(["An increasing sequence: one,"], return_tensors="pt")
>>> streamer = TextStreamer(tok)

>>> # Despite returning the usual output, the streamer will also print the generated text to stdout.
>>> _ = model.generate(**inputs, streamer=streamer, max_new_tokens=20)
An increasing sequence: one, two, three, four, five, six, seven, eight, nine, ten, eleven,
```
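Since any object that implements `put()` and `end()` can be passed as the `streamer`, you can also roll your own. The following is a minimal sketch of a hypothetical custom streamer, not part of the library, that merely counts the tokens pushed by `generate()`:

```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer

>>> class TokenCountingStreamer:
...     """Hypothetical streamer: counts every token id pushed by `generate()`."""
...     def __init__(self):
...         self.num_tokens = 0
...     def put(self, value):
...         # `value` is a tensor of token ids: the prompt is pushed first, then one new token per step
...         self.num_tokens += value.numel()
...     def end(self):
...         print(f"Streaming finished after {self.num_tokens} tokens.")

>>> tok = AutoTokenizer.from_pretrained("gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("gpt2")
>>> inputs = tok(["An increasing sequence: one,"], return_tensors="pt")
>>> _ = model.generate(**inputs, streamer=TokenCountingStreamer(), max_new_tokens=20)
```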
fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1wlre25">Decoding strategies</span></h2> <p data-svelte-h="svelte-ip87dk">Certain combinations of the <code>generate()</code> parameters, and ultimately <code>generation_config</code>, can be used to enable specific decoding strategies. If you are new to this concept, we recommend reading <a href="https://huggingface.co/blog/how-to-generate" rel="nofollow">this blog post that illustrates how common decoding strategies work</a>.</p> <p data-svelte-h="svelte-nugt5b">Here, we’ll show some of the parameters that control the decoding strategies and illustrate how you can use them.</p> <h3 class="relative group"><a id="greedy-search" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#greedy-search"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1pp3otr">Greedy Search</span></h3> <p data-svelte-h="svelte-1m7rj88"><code>generate</code> uses greedy search decoding by default so you don’t have to pass any parameters to enable it. 
This means the parameters <code>num_beams</code> is set to 1 and <code>do_sample=False</code>.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoModelForCausalLM, AutoTokenizer <span class="hljs-meta">&gt;&gt;&gt; </span>prompt = <span class="hljs-string">"I look forward to"</span> <span class="hljs-meta">&gt;&gt;&gt; </span>checkpoint = <span class="hljs-string">"distilgpt2"</span> <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(checkpoint) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(prompt, return_tensors=<span class="hljs-string">"pt"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = AutoModelForCausalLM.from_pretrained(checkpoint) <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model.generate(**inputs) <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer.batch_decode(outputs, skip_special_tokens=<span class="hljs-literal">True</span>) [<span class="hljs-string">'I look forward to seeing you all again!\n\n\n\n\n\n\n\n\n\n\n'</span>]<!-- HTML_TAG_END --></pre></div> <h3 class="relative group"><a id="contrastive-search" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#contrastive-search"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1497ha9">Contrastive search</span></h3> <p data-svelte-h="svelte-m0y9j6">The contrastive search decoding strategy was proposed in the 2022 paper <a 
href="https://arxiv.org/abs/2202.06417" rel="nofollow">A Contrastive Framework for Neural Text Generation</a>. It demonstrates superior results for generating non-repetitive yet coherent long outputs. To learn how contrastive search works, check out <a href="https://huggingface.co/blog/introducing-csearch" rel="nofollow">this blog post</a>. The two main parameters that enable and control the behavior of contrastive search are <code>penalty_alpha</code> and <code>top_k</code>:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, AutoModelForCausalLM <span class="hljs-meta">&gt;&gt;&gt; </span>checkpoint = <span class="hljs-string">"gpt2-large"</span> <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(checkpoint) <span class="hljs-meta">&gt;&gt;&gt; </span>model = AutoModelForCausalLM.from_pretrained(checkpoint) <span class="hljs-meta">&gt;&gt;&gt; </span>prompt = <span class="hljs-string">"Hugging Face Company is"</span> <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(prompt, return_tensors=<span class="hljs-string">"pt"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model.generate(**inputs, penalty_alpha=<span class="hljs-number">0.6</span>, top_k=<span class="hljs-number">4</span>, max_new_tokens=<span class="hljs-number">100</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer.batch_decode(outputs, skip_special_tokens=<span class="hljs-literal">True</span>) [<span class="hljs-string">'Hugging Face Company is a family owned and operated business. We pride ourselves on being the best in the business and our customer service is second to none.\n\nIf you have any questions about our products or services, feel free to contact us at any time. 
We look forward to hearing from you!'</span>]<!-- HTML_TAG_END --></pre></div> <h3 class="relative group"><a id="multinomial-sampling" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#multinomial-sampling"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1q3sc33">Multinomial sampling</span></h3> <p data-svelte-h="svelte-vsvvis">As opposed to greedy search that always chooses a token with the highest probability as the next token, multinomial sampling (also called ancestral sampling) randomly selects the next token based on the probability distribution over the entire vocabulary given by the model. Every token with a non-zero probability has a chance of being selected, thus reducing the risk of repetition.</p> <p data-svelte-h="svelte-ldtxsn">To enable multinomial sampling set <code>do_sample=True</code> and <code>num_beams=1</code>.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, AutoModelForCausalLM, set_seed <span class="hljs-meta">&gt;&gt;&gt; </span>set_seed(<span class="hljs-number">0</span>) <span class="hljs-comment"># For reproducibility</span> <span class="hljs-meta">&gt;&gt;&gt; </span>checkpoint = <span class="hljs-string">"gpt2-large"</span> <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(checkpoint) <span class="hljs-meta">&gt;&gt;&gt; </span>model = 
```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed

>>> set_seed(0)  # For reproducibility

>>> checkpoint = "gpt2-large"
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> model = AutoModelForCausalLM.from_pretrained(checkpoint)

>>> prompt = "Today was an amazing day because"
>>> inputs = tokenizer(prompt, return_tensors="pt")

>>> outputs = model.generate(**inputs, do_sample=True, num_beams=1, max_new_tokens=100)
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
['Today was an amazing day because when you go to the World Cup and you don\'t, or when you don\'t get invited, that\'s a terrible feeling."']
```

### Beam-search decoding

Unlike greedy search, beam-search decoding keeps several hypotheses at each time step and eventually chooses the hypothesis that has the overall highest probability for the entire sequence. This has the advantage of identifying high-probability sequences that start with lower-probability initial tokens and would've been ignored by the greedy search.

To enable this decoding strategy, specify `num_beams` (the number of hypotheses to keep track of) greater than 1.

```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer

>>> prompt = "It is astonishing how one can"
>>> checkpoint = "gpt2-medium"

>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> inputs = tokenizer(prompt, return_tensors="pt")

>>> model = AutoModelForCausalLM.from_pretrained(checkpoint)
>>> outputs = model.generate(**inputs, num_beams=5, max_new_tokens=50)
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
['It is astonishing how one can have such a profound impact on the lives of so many people in such a short period of time."\n\nHe added: "I am very proud of the work I have been able to do in the last few years.\n\n"I have']
```
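Because beam search keeps several hypotheses, it is one of the strategies that can return more than one candidate through `num_return_sequences` (described earlier in this guide). The sketch below is only an illustration of that combination, reusing the checkpoint and prompt from the example above; the returned sequences are simply the highest-scoring beams.

```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer

>>> checkpoint = "gpt2-medium"
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> model = AutoModelForCausalLM.from_pretrained(checkpoint)
>>> inputs = tokenizer("It is astonishing how one can", return_tensors="pt")

>>> # Keep 5 beams and return the top 3 of them as separate candidates
>>> outputs = model.generate(**inputs, num_beams=5, num_return_sequences=3, max_new_tokens=20)
>>> for candidate in tokenizer.batch_decode(outputs, skip_special_tokens=True):
...     print(candidate)
```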
### Beam-search multinomial sampling

As the name implies, this decoding strategy combines beam search with multinomial sampling. You need to specify `num_beams` greater than 1, and set `do_sample=True` to use this decoding strategy.

```python
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, set_seed

>>> set_seed(0)  # For reproducibility

>>> prompt = "translate English to German: The house is wonderful."
>>> checkpoint = "t5-small"

>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> inputs = tokenizer(prompt, return_tensors="pt")

>>> model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
>>> outputs = model.generate(**inputs, num_beams=5, do_sample=True)
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
'Das Haus ist wunderbar.'
```

### Diverse beam search decoding

The diverse beam search decoding strategy is an extension of the beam search strategy that allows for generating a more diverse set of beam sequences to choose from. To learn how it works, refer to [Diverse Beam Search: Decoding Diverse Solutions from Neural Sequence Models](https://arxiv.org/pdf/1610.02424.pdf). This approach has three main parameters: `num_beams`, `num_beam_groups`, and `diversity_penalty`. The diversity penalty ensures the outputs are distinct across groups, and beam search is used within each group.
</span> <span class="hljs-string">"Permaculture is a design system that encompasses a wide variety of disciplines, such "</span> <span class="hljs-meta">... </span> <span class="hljs-string">"as ecology, landscape design, environmental science and energy conservation, and the "</span> <span class="hljs-meta">... </span> <span class="hljs-string">"Permaculture design principles are drawn from these various disciplines. Each individual "</span> <span class="hljs-meta">... </span> <span class="hljs-string">"design principle itself embodies a complete conceptual framework based on sound "</span> <span class="hljs-meta">... </span> <span class="hljs-string">"scientific principles. When we bring all these separate principles together, we can "</span> <span class="hljs-meta">... </span> <span class="hljs-string">"create a design system that both looks at whole systems, the parts that these systems "</span> <span class="hljs-meta">... </span> <span class="hljs-string">"consist of, and how those parts interact with each other to create a complex, dynamic, "</span> <span class="hljs-meta">... </span> <span class="hljs-string">"living system. Each design principle serves as a tool that allows us to integrate all "</span> <span class="hljs-meta">... </span> <span class="hljs-string">"the separate parts of a design, referred to as elements, into a functional, synergistic, "</span> <span class="hljs-meta">... </span> <span class="hljs-string">"whole system, where the elements harmoniously interact and work together in the most "</span> <span class="hljs-meta">... </span> <span class="hljs-string">"efficient way possible."</span> <span class="hljs-meta">... </span>) <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(checkpoint) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(prompt, return_tensors=<span class="hljs-string">"pt"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint) <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model.generate(**inputs, num_beams=<span class="hljs-number">5</span>, num_beam_groups=<span class="hljs-number">5</span>, max_new_tokens=<span class="hljs-number">30</span>, diversity_penalty=<span class="hljs-number">1.0</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer.decode(outputs[<span class="hljs-number">0</span>], skip_special_tokens=<span class="hljs-literal">True</span>) <span class="hljs-string">'The Design Principles are a set of universal design principles that can be applied to any location, climate and culture, and they allow us to design the'</span><!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-1e7mt8t">This guide illustrates the main parameters that enable various decoding strategies. More advanced parameters exist for the <code>generate</code> method, which gives you even further control over the <code>generate</code> method’s behavior. 
For the complete list of the available parameters, refer to the <a href="./main_classes/text_generation.md">API documentation</a>.</p> <h3 class="relative group"><a id="assisted-decoding" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#assisted-decoding"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-m6blcs">Assisted Decoding</span></h3> <p data-svelte-h="svelte-1i5gxtn">Assisted decoding is a modification of the decoding strategies above that uses an assistant model with the same tokenizer (ideally a much smaller model) to greedily generate a few candidate tokens. The main model then validates the candidate tokens in a single forward pass, which speeds up the decoding process. Currently, only greedy search and sampling are supported with assisted decoding, and doesn’t support batched inputs. To learn more about assisted decoding, check <a href="https://huggingface.co/blog/assisted-generation" rel="nofollow">this blog post</a>.</p> <p data-svelte-h="svelte-ebd3ly">To enable assisted decoding, set the <code>assistant_model</code> argument with a model.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoModelForCausalLM, AutoTokenizer <span class="hljs-meta">&gt;&gt;&gt; </span>prompt = <span class="hljs-string">"Alice and Bob"</span> <span class="hljs-meta">&gt;&gt;&gt; </span>checkpoint = <span 
class="hljs-string">"EleutherAI/pythia-1.4b-deduped"</span> <span class="hljs-meta">&gt;&gt;&gt; </span>assistant_checkpoint = <span class="hljs-string">"EleutherAI/pythia-160m-deduped"</span> <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(checkpoint) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(prompt, return_tensors=<span class="hljs-string">"pt"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = AutoModelForCausalLM.from_pretrained(checkpoint) <span class="hljs-meta">&gt;&gt;&gt; </span>assistant_model = AutoModelForCausalLM.from_pretrained(assistant_checkpoint) <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model.generate(**inputs, assistant_model=assistant_model) <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer.batch_decode(outputs, skip_special_tokens=<span class="hljs-literal">True</span>) [<span class="hljs-string">'Alice and Bob are sitting in a bar. Alice is drinking a beer and Bob is drinking a'</span>]<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-m00818">When using assisted decoding with sampling methods, you can use the <code>temperature</code> argument to control the randomness just like in multinomial sampling. However, in assisted decoding, reducing the temperature will help improving latency.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoModelForCausalLM, AutoTokenizer, set_seed <span class="hljs-meta">&gt;&gt;&gt; </span>set_seed(<span class="hljs-number">42</span>) <span class="hljs-comment"># For reproducibility</span> <span class="hljs-meta">&gt;&gt;&gt; </span>prompt = <span class="hljs-string">"Alice and Bob"</span> <span class="hljs-meta">&gt;&gt;&gt; </span>checkpoint = <span class="hljs-string">"EleutherAI/pythia-1.4b-deduped"</span> <span class="hljs-meta">&gt;&gt;&gt; </span>assistant_checkpoint = <span class="hljs-string">"EleutherAI/pythia-160m-deduped"</span> <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(checkpoint) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(prompt, return_tensors=<span class="hljs-string">"pt"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = AutoModelForCausalLM.from_pretrained(checkpoint) <span 
class="hljs-meta">&gt;&gt;&gt; </span>assistant_model = AutoModelForCausalLM.from_pretrained(assistant_checkpoint) <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model.generate(**inputs, assistant_model=assistant_model, do_sample=<span class="hljs-literal">True</span>, temperature=<span class="hljs-number">0.5</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer.batch_decode(outputs, skip_special_tokens=<span class="hljs-literal">True</span>) [<span class="hljs-string">'Alice and Bob are going to the same party. It is a small party, in a small'</span>]<!-- HTML_TAG_END --></pre></div> <p></p> <script> { __sveltekit_1yybmhh = { assets: "/docs/transformers/v4.34.0/en", base: "/docs/transformers/v4.34.0/en", env: {} }; const element = document.currentScript.parentElement; const data = [null,null]; Promise.all([ import("/docs/transformers/v4.34.0/en/_app/immutable/entry/start.c2db227a.js"), import("/docs/transformers/v4.34.0/en/_app/immutable/entry/app.879d9b87.js") ]).then(([kit, app]) => { kit.start(app, element, { node_ids: [0, 19], data, form: null, error: null }); }); } </script> <!-- HTML_TAG_END --></div> <div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/tasks/text-to-speech" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>Text to speech</a> <a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/tasks/idefics" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">Image tasks with IDEFICS<span class="ml-2 translate-y-px">→</span></a></div></div></div> <div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" data-props="{&quot;chapter&quot;:{&quot;title&quot;:&quot;Text generation strategies&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;text-generation-strategies&quot;,&quot;url&quot;:&quot;#text-generation-strategies&quot;,&quot;sections&quot;:[{&quot;title&quot;:&quot;Default text generation configuration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;default-text-generation-configuration&quot;,&quot;url&quot;:&quot;#default-text-generation-configuration&quot;},{&quot;title&quot;:&quot;Customize text generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;customize-text-generation&quot;,&quot;url&quot;:&quot;#customize-text-generation&quot;},{&quot;title&quot;:&quot;Save a custom decoding strategy with your model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;save-a-custom-decoding-strategy-with-your-model&quot;,&quot;url&quot;:&quot;#save-a-custom-decoding-strategy-with-your-model&quot;},{&quot;title&quot;:&quot;Streaming&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;streaming&quot;,&quot;url&quot;:&quot;#streaming&quot;},{&quot;title&quot;:&quot;Decoding strategies&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;decoding-strategies&quot;,&quot;url&quot;:&quot;#decoding-strategies&quot;,&quot;sections&quot;:[{&quot;title&quot;:&quot;Greedy Search&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;greedy-search&quot;,&quot;url&quot;:&quot;#greedy-search&quot;},{&quot;title&quot;:&quot;Contrastive 
2023-10-05T13:33:41.948Z
Causal language modeling
https://huggingface.co/docs/transformers/v4.34.0/en/tasks/language_modeling
# Causal language modeling

There are two types of language modeling, causal and masked. This guide illustrates causal language modeling. Causal language models are frequently used for text generation. You can use these models for creative applications like choosing your own text adventure or an intelligent coding assistant like Copilot or CodeParrot.

Causal language modeling predicts the next token in a sequence of tokens, and the model can only attend to tokens on the left. This means the model cannot see future tokens. GPT-2 is an example of a causal language model.

This guide will show you how to:

1. Finetune [DistilGPT2](https://huggingface.co/distilgpt2) on the [r/askscience](https://www.reddit.com/r/askscience/) subset of the [ELI5](https://huggingface.co/datasets/eli5) dataset.
2. Use your finetuned model for inference.

You can finetune other architectures for causal language modeling following the same steps in this guide. Choose one of the following architectures:

[BART](../model_doc/bart), [BERT](../model_doc/bert), [Bert Generation](../model_doc/bert-generation), [BigBird](../model_doc/big_bird), [BigBird-Pegasus](../model_doc/bigbird_pegasus), [BioGpt](../model_doc/biogpt), [Blenderbot](../model_doc/blenderbot), [BlenderbotSmall](../model_doc/blenderbot-small), [BLOOM](../model_doc/bloom), [CamemBERT](../model_doc/camembert), [CodeLlama](../model_doc/code_llama), [CodeGen](../model_doc/codegen), [CPM-Ant](../model_doc/cpmant), [CTRL](../model_doc/ctrl), [Data2VecText](../model_doc/data2vec-text), [ELECTRA](../model_doc/electra), [ERNIE](../model_doc/ernie), [Falcon](../model_doc/falcon), [GIT](../model_doc/git), [GPT-Sw3](../model_doc/gpt-sw3), [OpenAI GPT-2](../model_doc/gpt2), [GPTBigCode](../model_doc/gpt_bigcode), [GPT Neo](../model_doc/gpt_neo), [GPT NeoX](../model_doc/gpt_neox), [GPT NeoX Japanese](../model_doc/gpt_neox_japanese), [GPT-J](../model_doc/gptj), [LLaMA](../model_doc/llama), [Marian](../model_doc/marian), [mBART](../model_doc/mbart), [MEGA](../model_doc/mega), [Megatron-BERT](../model_doc/megatron-bert), [Mistral](../model_doc/mistral), [MPT](../model_doc/mpt), [MusicGen](../model_doc/musicgen), [MVP](../model_doc/mvp), [OpenLlama](../model_doc/open-llama), [OpenAI GPT](../model_doc/openai-gpt), [OPT](../model_doc/opt), [Pegasus](../model_doc/pegasus), [Persimmon](../model_doc/persimmon), [PLBart](../model_doc/plbart), [ProphetNet](../model_doc/prophetnet), [QDQBert](../model_doc/qdqbert), [Reformer](../model_doc/reformer), [RemBERT](../model_doc/rembert), [RoBERTa](../model_doc/roberta), [RoBERTa-PreLayerNorm](../model_doc/roberta-prelayernorm), [RoCBert](../model_doc/roc_bert), [RoFormer](../model_doc/roformer), [RWKV](../model_doc/rwkv), [Speech2Text2](../model_doc/speech_to_text_2), [Transformer-XL](../model_doc/transfo-xl), [TrOCR](../model_doc/trocr), [XGLM](../model_doc/xglm), [XLM](../model_doc/xlm), [XLM-ProphetNet](../model_doc/xlm-prophetnet), [XLM-RoBERTa](../model_doc/xlm-roberta), [XLM-RoBERTa-XL](../model_doc/xlm-roberta-xl), [XLNet](../model_doc/xlnet), [X-MOD](../model_doc/xmod)

Before you begin, make sure you have all the necessary libraries installed:

```
pip install transformers datasets evaluate
```

We encourage you to log in to your Hugging Face account so you can upload and share your model with the community.
When prompted, enter your token to log in:

```
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```

## Load ELI5 dataset

Start by loading a smaller subset of the r/askscience subset of the ELI5 dataset from the 🤗 Datasets library. This'll give you a chance to experiment and make sure everything works before spending more time training on the full dataset.

```
>>> from datasets import load_dataset

>>> eli5 = load_dataset("eli5", split="train_asks[:5000]")
```

Split the dataset's `train_asks` split into a train and test set with the [train\_test\_split](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.train_test_split) method:

```
>>> eli5 = eli5.train_test_split(test_size=0.2)
```

Then take a look at an example:

```
>>> eli5["train"][0]
{'answers': {'a_id': ['c3d1aib', 'c3d4lya'],
  'score': [6, 3],
  'text': ["The velocity needed to remain in orbit is equal to the square root of Newton's constant times the mass of earth divided by the distance from the center of the earth. I don't know the altitude of that specific mission, but they're usually around 300 km. That means he's going 7-8 km/s.\n\nIn space there are no other forces acting on either the shuttle or the guy, so they stay in the same position relative to each other. If he were to become unable to return to the ship, he would presumably run out of oxygen, or slowly fall into the atmosphere and burn up.",
   "Hope you don't mind me asking another question, but why aren't there any stars visible in this photo?"]},
 'answers_urls': {'url': []},
 'document': '',
 'q_id': 'nyxfp',
 'selftext': '_URL_0_\n\nThis was on the front page earlier and I have a few questions about it. Is it possible to calculate how fast the astronaut would be orbiting the earth? Also how does he stay close to the shuttle so that he can return safely, i.e is he orbiting at the same speed and can therefore stay next to it? And finally if his propulsion system failed, would he eventually re-enter the atmosphere and presumably die?',
 'selftext_urls': {'url': ['http://apod.nasa.gov/apod/image/1201/freeflyer_nasa_3000.jpg']},
 'subreddit': 'askscience',
 'title': 'Few questions about this space walk photograph.',
 'title_urls': {'url': []}}
```

While this may look like a lot, you're only really interested in the `text` field. What's cool about language modeling tasks is you don't need labels (also known as an unsupervised task) because the next word _is_ the label.

## Preprocess

The next step is to load a DistilGPT2 tokenizer to process the `text` subfield:

```
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
```

You'll notice from the example above that the `text` field is actually nested inside `answers`. This means you'll need to extract the `text` subfield from its nested structure with the [`flatten`](https://huggingface.co/docs/datasets/process.html#flatten) method:

```
>>> eli5 = eli5.flatten()
>>> eli5["train"][0]
{'answers.a_id': ['c3d1aib', 'c3d4lya'],
 'answers.score': [6, 3],
 'answers.text': ["The velocity needed to remain in orbit is equal to the square root of Newton's constant times the mass of earth divided by the distance from the center of the earth. I don't know the altitude of that specific mission, but they're usually around 300 km. That means he's going 7-8 km/s.\n\nIn space there are no other forces acting on either the shuttle or the guy, so they stay in the same position relative to each other. If he were to become unable to return to the ship, he would presumably run out of oxygen, or slowly fall into the atmosphere and burn up.",
  "Hope you don't mind me asking another question, but why aren't there any stars visible in this photo?"],
 'answers_urls.url': [],
 'document': '',
 'q_id': 'nyxfp',
 'selftext': '_URL_0_\n\nThis was on the front page earlier and I have a few questions about it. Is it possible to calculate how fast the astronaut would be orbiting the earth? Also how does he stay close to the shuttle so that he can return safely, i.e is he orbiting at the same speed and can therefore stay next to it? And finally if his propulsion system failed, would he eventually re-enter the atmosphere and presumably die?',
 'selftext_urls.url': ['http://apod.nasa.gov/apod/image/1201/freeflyer_nasa_3000.jpg'],
 'subreddit': 'askscience',
 'title': 'Few questions about this space walk photograph.',
 'title_urls.url': []}
```

Each subfield is now a separate column as indicated by the `answers` prefix, and the `text` field is a list now. Instead of tokenizing each sentence separately, convert the list to a string so you can jointly tokenize them.

Here is a first preprocessing function to join the list of strings for each example and tokenize the result:

```
>>> def preprocess_function(examples):
...     return tokenizer([" ".join(x) for x in examples["answers.text"]])
```

To apply this preprocessing function over the entire dataset, use the 🤗 Datasets [map](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.map) method. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once, and increasing the number of processes with `num_proc`. Remove any columns you don't need:

```
>>> tokenized_eli5 = eli5.map(
...     preprocess_function,
...     batched=True,
...     num_proc=4,
...     remove_columns=eli5["train"].column_names,
... )
```

This dataset contains the token sequences, but some of these are longer than the maximum input length for the model. You can now use a second preprocessing function to

- concatenate all the sequences
- split the concatenated sequences into shorter chunks defined by `block_size`, which should be both shorter than the maximum input length and short enough for your GPU RAM.

```
>>> block_size = 128


>>> def group_texts(examples):
...     # Concatenate all texts.
...     concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
...     total_length = len(concatenated_examples[list(examples.keys())[0]])
...     # Drop the small remainder; you could pad instead if the model supported it.
...     if total_length >= block_size:
...         total_length = (total_length // block_size) * block_size
...     # Split into chunks of block_size.
...     result = {
...         k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
...         for k, t in concatenated_examples.items()
...     }
...     result["labels"] = result["input_ids"].copy()
...     return result
```

Apply the `group_texts` function over the entire dataset:

```
>>> lm_dataset = tokenized_eli5.map(group_texts, batched=True, num_proc=4)
```

Now create a batch of examples using [DataCollatorForLanguageModeling](/docs/transformers/v4.34.0/en/main_classes/data_collator#transformers.DataCollatorForLanguageModeling). It's more efficient to _dynamically pad_ the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length.

Use the end-of-sequence token as the padding token and set `mlm=False`.
This will use the inputs as labels shifted to the right by one element:

```
>>> from transformers import DataCollatorForLanguageModeling

>>> tokenizer.pad_token = tokenizer.eos_token
>>> data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
```

For TensorFlow, use the end-of-sequence token as the padding token and set `mlm=False` in the same way, returning TensorFlow tensors:

```
>>> from transformers import DataCollatorForLanguageModeling

>>> data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False, return_tensors="tf")
```

## Train

If you aren't familiar with finetuning a model with the [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer), take a look at the [basic tutorial](../training#train-with-pytorch-trainer)!

You're ready to start training your model now! Load DistilGPT2 with [AutoModelForCausalLM](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoModelForCausalLM):

```
>>> from transformers import AutoModelForCausalLM, TrainingArguments, Trainer

>>> model = AutoModelForCausalLM.from_pretrained("distilgpt2")
```

At this point, only three steps remain:

1. Define your training hyperparameters in [TrainingArguments](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.TrainingArguments). The only required parameter is `output_dir`, which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model).
2. Pass the training arguments to [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) along with the model, datasets, and data collator.
3. Call [train()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.train) to finetune your model.

```
>>> training_args = TrainingArguments(
...     output_dir="my_awesome_eli5_clm-model",
...     evaluation_strategy="epoch",
...     learning_rate=2e-5,
...     weight_decay=0.01,
...     push_to_hub=True,
... )

>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=lm_dataset["train"],
...     eval_dataset=lm_dataset["test"],
...     data_collator=data_collator,
... )

>>> trainer.train()
```

Once training is completed, use the [evaluate()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.evaluate) method to evaluate your model and get its perplexity:

```
>>> import math

>>> eval_results = trainer.evaluate()
>>> print(f"Perplexity: {math.exp(eval_results['eval_loss']):.2f}")
Perplexity: 49.61
```

Then share your model to the Hub with the [push\_to\_hub()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.push_to_hub) method so everyone can use your model:

```
>>> trainer.push_to_hub()
```

If you aren't familiar with finetuning a model with Keras, take a look at the [basic tutorial](../training#train-a-tensorflow-model-with-keras)!
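Whichever framework you train with, it can help to sanity-check what the data collator actually produces before launching a run. The sketch below is illustrative only (the example sentences are arbitrary) and assumes the `tokenizer` and the default PyTorch `data_collator` defined above: with `mlm=False`, the collator copies `input_ids` into `labels` and masks padded positions with `-100`, while the one-position shift between inputs and labels is applied inside the model when it computes the loss.

```
# Illustrative only: inspect a collated batch from DataCollatorForLanguageModeling(mlm=False).
features = [tokenizer("hello there"), tokenizer("general kenobi, you are a bold one")]
batch = data_collator(features)

print(batch.keys())        # input_ids, attention_mask, labels
print(batch["labels"][0])  # same ids as batch["input_ids"][0], with padding replaced by -100
# The causal LM shifts these labels by one position internally, so each position
# is trained to predict the next token in the sequence.
```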
To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:

```
>>> from transformers import create_optimizer, AdamWeightDecay

>>> optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01)
```

Then you can load DistilGPT2 with [TFAutoModelForCausalLM](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.TFAutoModelForCausalLM):

```
>>> from transformers import TFAutoModelForCausalLM

>>> model = TFAutoModelForCausalLM.from_pretrained("distilgpt2")
```

Convert your datasets to the `tf.data.Dataset` format with [prepare\_tf\_dataset()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel.prepare_tf_dataset):

```
>>> tf_train_set = model.prepare_tf_dataset(
...     lm_dataset["train"],
...     shuffle=True,
...     batch_size=16,
...     collate_fn=data_collator,
... )

>>> tf_test_set = model.prepare_tf_dataset(
...     lm_dataset["test"],
...     shuffle=False,
...     batch_size=16,
...     collate_fn=data_collator,
... )
```

Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method). Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to:

```
>>> import tensorflow as tf

>>> model.compile(optimizer=optimizer)
```

You can push your model to the Hub as you train by specifying where to push your model and tokenizer in the [PushToHubCallback](/docs/transformers/v4.34.0/en/main_classes/keras_callbacks#transformers.PushToHubCallback):

```
>>> from transformers.keras_callbacks import PushToHubCallback

>>> callback = PushToHubCallback(
...     output_dir="my_awesome_eli5_clm-model",
...     tokenizer=tokenizer,
... )
```

Finally, you're ready to start training your model! Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training and validation datasets, the number of epochs, and your callback to finetune the model:

```
>>> model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=[callback])
```

Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!

For a more in-depth example of how to finetune a model for causal language modeling, take a look at the corresponding [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb) or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb).

## Inference

Great, now that you've finetuned a model, you can use it for inference!

Come up with a prompt you'd like to generate text from:

```
>>> prompt = "Somatic hypermutation allows the immune system to"
```

The simplest way to try out your finetuned model for inference is to use it in a [pipeline()](/docs/transformers/v4.34.0/en/main_classes/pipelines#transformers.pipeline).
Instantiate a `pipeline` for text generation with your model, and pass your text to it:

```
>>> from transformers import pipeline

>>> generator = pipeline("text-generation", model="my_awesome_eli5_clm-model")
>>> generator(prompt)
[{'generated_text': "Somatic hypermutation allows the immune system to be able to effectively reverse the damage caused by an infection.\n\n\nThe damage caused by an infection is caused by the immune system's ability to perform its own self-correcting tasks."}]
```

Tokenize the text and return the `input_ids` as PyTorch tensors:

```
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_eli5_clm-model")
>>> inputs = tokenizer(prompt, return_tensors="pt").input_ids
```

Use the [generate()](/docs/transformers/v4.34.0/en/main_classes/text_generation#transformers.GenerationMixin.generate) method to generate text. For more details about the different text generation strategies and parameters for controlling generation, check out the [Text generation strategies](../generation_strategies) page.

```
>>> from transformers import AutoModelForCausalLM

>>> model = AutoModelForCausalLM.from_pretrained("my_awesome_eli5_clm-model")
>>> outputs = model.generate(inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
```

Decode the generated token ids back into text:

```
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
["Somatic hypermutation allows the immune system to react to drugs with the ability to adapt to a different environmental situation. In other words, a system of 'hypermutation' can help the immune system to adapt to a different environmental situation or in some cases even a single life. In contrast, researchers at the University of Massachusetts-Boston have found that 'hypermutation' is much stronger in mice than in humans but can be found in humans, and that it's not completely unknown to the immune system. A study on how the immune system"]
```

Tokenize the text and return the `input_ids` as TensorFlow tensors:

```
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_eli5_clm-model")
>>> inputs = tokenizer(prompt, return_tensors="tf").input_ids
```

Use the [generate()](/docs/transformers/v4.34.0/en/main_classes/text_generation#transformers.TFGenerationMixin.generate) method to generate text. For more details about the different text generation strategies and parameters for controlling generation, check out the [Text generation strategies](../generation_strategies) page.

```
>>> from transformers import TFAutoModelForCausalLM

>>> model = TFAutoModelForCausalLM.from_pretrained("my_awesome_eli5_clm-model")
>>> outputs = model.generate(input_ids=inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
```

Decode the generated token ids back into text:

```
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
['Somatic hypermutation allows the immune system to detect the presence of other viruses as they become more prevalent. Therefore, researchers have identified a high proportion of human viruses. The proportion of virus-associated viruses in our study increases with age. Therefore, we propose a simple algorithm to detect the presence of these new viruses in our samples as a sign of improved immunity. A first study based on this algorithm, which will be published in Science on Friday, aims to show that this finding could translate into the development of a better vaccine that is more effective for']
```
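Because `do_sample=True` makes these generations stochastic, the decoded text will differ from run to run. If you need reproducible samples, for example while debugging, you can fix the random seed before calling `generate()`. A minimal sketch, reusing the `model` and `inputs` from the PyTorch example above:

```
>>> from transformers import set_seed

>>> set_seed(42)  # any fixed integer works; the value itself is arbitrary
>>> outputs = model.generate(inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
```

Re-running with the same seed, model, and inputs reproduces the same sample.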
parallelism&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_gpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_gpu_many&quot;},{&quot;title&quot;:&quot;Efficient training on CPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_cpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_cpu&quot;},{&quot;title&quot;:&quot;Distributed CPU training&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_cpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_cpu_many&quot;},{&quot;title&quot;:&quot;Training on TPUs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_tpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_tpu&quot;},{&quot;title&quot;:&quot;Training on TPU with TensorFlow&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_tpu_tf&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_tpu_tf&quot;},{&quot;title&quot;:&quot;Training on Specialized Hardware&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_special&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_special&quot;},{&quot;title&quot;:&quot;Custom hardware for training&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_hardware&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_hardware&quot;},{&quot;title&quot;:&quot;Hyperparameter Search using Trainer API&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;hpo_train&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/hpo_train&quot;}]},{&quot;title&quot;:&quot;Optimizing inference&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Inference on CPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_cpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_cpu&quot;},{&quot;title&quot;:&quot;Inference on one GPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_gpu_one&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_gpu_one&quot;},{&quot;title&quot;:&quot;Inference on many GPUs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_gpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_gpu_many&quot;},{&quot;title&quot;:&quot;Inference on Specialized Hardware&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_special&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_special&quot;}]},{&quot;title&quot;:&quot;Instantiating a big model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;big_models&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/big_models&quot;},{&quot;title&quot;:&quot;Troubleshooting&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;debugging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/debugging&quot;},{&quot;title&quot;:&quot;XLA Integration for TensorFlow Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tf_xla&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tf_xla&quot;},{&quot;title&quot;:&quot;Optimize inference using `torch.compile()`&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_torch_compile&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_torch_compile&quot;}]},{&quot;title&quot;:&quot;Contribute&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;How to contribute to 
transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;contributing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/contributing&quot;},{&quot;title&quot;:&quot;How to add a model to 🤗 Transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_new_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_new_model&quot;},{&quot;title&quot;:&quot;How to convert a 🤗 Transformers model to TensorFlow?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_tensorflow_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_tensorflow_model&quot;},{&quot;title&quot;:&quot;How to add a pipeline to 🤗 Transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_new_pipeline&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_new_pipeline&quot;},{&quot;title&quot;:&quot;Testing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;testing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/testing&quot;},{&quot;title&quot;:&quot;Checks on a Pull Request&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pr_checks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pr_checks&quot;}]},{&quot;title&quot;:&quot;Conceptual guides&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Philosophy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;philosophy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/philosophy&quot;},{&quot;title&quot;:&quot;Glossary&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;glossary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/glossary&quot;},{&quot;title&quot;:&quot;What 🤗 Transformers can do&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;task_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/task_summary&quot;},{&quot;title&quot;:&quot;How 🤗 Transformers solve tasks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tasks_explained&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks_explained&quot;},{&quot;title&quot;:&quot;The Transformer model family&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_summary&quot;},{&quot;title&quot;:&quot;Summary of the tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tokenizer_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tokenizer_summary&quot;},{&quot;title&quot;:&quot;Attention mechanisms&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;attention&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/attention&quot;},{&quot;title&quot;:&quot;Padding and truncation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pad_truncation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pad_truncation&quot;},{&quot;title&quot;:&quot;BERTology&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;bertology&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/bertology&quot;},{&quot;title&quot;:&quot;Perplexity of fixed-length models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perplexity&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perplexity&quot;},{&quot;title&quot;:&quot;Pipelines for webserver inference&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pipeline_webserver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pipeline_webserver&quot;},{&quot;title&quot;:&quot;Model training 
anatomy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_memory_anatomy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_memory_anatomy&quot;}]},{&quot;title&quot;:&quot;API&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Main Classes&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Agents and Tools&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/agent&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/agent&quot;},{&quot;title&quot;:&quot;Auto Classes&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/auto&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/auto&quot;},{&quot;title&quot;:&quot;Callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/callback&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/callback&quot;},{&quot;title&quot;:&quot;Configuration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/configuration&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/configuration&quot;},{&quot;title&quot;:&quot;Data Collator&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/data_collator&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/data_collator&quot;},{&quot;title&quot;:&quot;Keras callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/keras_callbacks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/keras_callbacks&quot;},{&quot;title&quot;:&quot;Logging&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/logging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/logging&quot;},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/model&quot;},{&quot;title&quot;:&quot;Text Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/text_generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/text_generation&quot;},{&quot;title&quot;:&quot;ONNX&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/onnx&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/onnx&quot;},{&quot;title&quot;:&quot;Optimization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/optimizer_schedules&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/optimizer_schedules&quot;},{&quot;title&quot;:&quot;Model 
outputs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/output&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/output&quot;},{&quot;title&quot;:&quot;Pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/pipelines&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/pipelines&quot;},{&quot;title&quot;:&quot;Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/processors&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/processors&quot;},{&quot;title&quot;:&quot;Quantization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/quantization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/quantization&quot;},{&quot;title&quot;:&quot;Tokenizer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/tokenizer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/tokenizer&quot;},{&quot;title&quot;:&quot;Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/trainer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/trainer&quot;},{&quot;title&quot;:&quot;DeepSpeed Integration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/deepspeed&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/deepspeed&quot;},{&quot;title&quot;:&quot;Feature Extractor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/feature_extractor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/feature_extractor&quot;},{&quot;title&quot;:&quot;Image Processor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/image_processor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/image_processor&quot;}]},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Text 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALBERT&quot;,&quot;id&quot;:&quot;model_doc/albert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/albert&quot;},{&quot;title&quot;:&quot;BART&quot;,&quot;id&quot;:&quot;model_doc/bart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bart&quot;},{&quot;title&quot;:&quot;BARThez&quot;,&quot;id&quot;:&quot;model_doc/barthez&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/barthez&quot;},{&quot;title&quot;:&quot;BARTpho&quot;,&quot;id&quot;:&quot;model_doc/bartpho&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bartpho&quot;},{&quot;title&quot;:&quot;BERT&quot;,&quot;id&quot;:&quot;model_doc/bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert&quot;},{&quot;title&quot;:&quot;BertGeneration&quot;,&quot;id&quot;:&quot;model_doc/bert-generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-generation&quot;},{&quot;title&quot;:&quot;BertJapanese&quot;,&quot;id&quot;:&quot;model_doc/bert-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-japanese&quot;},{&quot;title&quot;:&quot;Bertweet&quot;,&quot;id&quot;:&quot;model_doc/bertweet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bertweet&quot;},{&quot;title&quot;:&quot;BigBird&quot;,&quot;id&quot;:&quot;model_doc/big_bird&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/big_bird&quot;},{&quot;title&quot;:&quot;BigBirdPegasus&quot;,&quot;id&quot;:&quot;model_doc/bigbird_pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bigbird_pegasus&quot;},{&quot;title&quot;:&quot;BioGpt&quot;,&quot;id&quot;:&quot;model_doc/biogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/biogpt&quot;},{&quot;title&quot;:&quot;Blenderbot&quot;,&quot;id&quot;:&quot;model_doc/blenderbot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot&quot;},{&quot;title&quot;:&quot;Blenderbot 
Small&quot;,&quot;id&quot;:&quot;model_doc/blenderbot-small&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot-small&quot;},{&quot;title&quot;:&quot;BLOOM&quot;,&quot;id&quot;:&quot;model_doc/bloom&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bloom&quot;},{&quot;title&quot;:&quot;BORT&quot;,&quot;id&quot;:&quot;model_doc/bort&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bort&quot;},{&quot;title&quot;:&quot;ByT5&quot;,&quot;id&quot;:&quot;model_doc/byt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/byt5&quot;},{&quot;title&quot;:&quot;CamemBERT&quot;,&quot;id&quot;:&quot;model_doc/camembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/camembert&quot;},{&quot;title&quot;:&quot;CANINE&quot;,&quot;id&quot;:&quot;model_doc/canine&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/canine&quot;},{&quot;title&quot;:&quot;CodeGen&quot;,&quot;id&quot;:&quot;model_doc/codegen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/codegen&quot;},{&quot;title&quot;:&quot;CodeLlama&quot;,&quot;id&quot;:&quot;model_doc/code_llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/code_llama&quot;},{&quot;title&quot;:&quot;ConvBERT&quot;,&quot;id&quot;:&quot;model_doc/convbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convbert&quot;},{&quot;title&quot;:&quot;CPM&quot;,&quot;id&quot;:&quot;model_doc/cpm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpm&quot;},{&quot;title&quot;:&quot;CPMANT&quot;,&quot;id&quot;:&quot;model_doc/cpmant&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpmant&quot;},{&quot;title&quot;:&quot;CTRL&quot;,&quot;id&quot;:&quot;model_doc/ctrl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ctrl&quot;},{&quot;title&quot;:&quot;DeBERTa&quot;,&quot;id&quot;:&quot;model_doc/deberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta&quot;},{&quot;title&quot;:&quot;DeBERTa-v2&quot;,&quot;id&quot;:&quot;model_doc/deberta-v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta-v2&quot;},{&quot;title&quot;:&quot;DialoGPT&quot;,&quot;id&quot;:&quot;model_doc/dialogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dialogpt&quot;},{&quot;title&quot;:&quot;DistilBERT&quot;,&quot;id&quot;:&quot;model_doc/distilbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/distilbert&quot;},{&quot;title&quot;:&quot;DPR&quot;,&quot;id&quot;:&quot;model_doc/dpr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpr&quot;},{&quot;title&quot;:&quot;ELECTRA&quot;,&quot;id&quot;:&quot;model_doc/electra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/electra&quot;},{&quot;title&quot;:&quot;Encoder Decoder 
Models&quot;,&quot;id&quot;:&quot;model_doc/encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encoder-decoder&quot;},{&quot;title&quot;:&quot;ERNIE&quot;,&quot;id&quot;:&quot;model_doc/ernie&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie&quot;},{&quot;title&quot;:&quot;ErnieM&quot;,&quot;id&quot;:&quot;model_doc/ernie_m&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie_m&quot;},{&quot;title&quot;:&quot;ESM&quot;,&quot;id&quot;:&quot;model_doc/esm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/esm&quot;},{&quot;title&quot;:&quot;Falcon&quot;,&quot;id&quot;:&quot;model_doc/falcon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/falcon&quot;},{&quot;title&quot;:&quot;FLAN-T5&quot;,&quot;id&quot;:&quot;model_doc/flan-t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-t5&quot;},{&quot;title&quot;:&quot;FLAN-UL2&quot;,&quot;id&quot;:&quot;model_doc/flan-ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-ul2&quot;},{&quot;title&quot;:&quot;FlauBERT&quot;,&quot;id&quot;:&quot;model_doc/flaubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flaubert&quot;},{&quot;title&quot;:&quot;FNet&quot;,&quot;id&quot;:&quot;model_doc/fnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fnet&quot;},{&quot;title&quot;:&quot;FSMT&quot;,&quot;id&quot;:&quot;model_doc/fsmt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fsmt&quot;},{&quot;title&quot;:&quot;Funnel Transformer&quot;,&quot;id&quot;:&quot;model_doc/funnel&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/funnel&quot;},{&quot;title&quot;:&quot;GPT&quot;,&quot;id&quot;:&quot;model_doc/openai-gpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/openai-gpt&quot;},{&quot;title&quot;:&quot;GPT Neo&quot;,&quot;id&quot;:&quot;model_doc/gpt_neo&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neo&quot;},{&quot;title&quot;:&quot;GPT NeoX&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox&quot;},{&quot;title&quot;:&quot;GPT NeoX Japanese&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox_japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese&quot;},{&quot;title&quot;:&quot;GPT-J&quot;,&quot;id&quot;:&quot;model_doc/gptj&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptj&quot;},{&quot;title&quot;:&quot;GPT2&quot;,&quot;id&quot;:&quot;model_doc/gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt2&quot;},{&quot;title&quot;:&quot;GPTBigCode&quot;,&quot;id&quot;:&quot;model_doc/gpt_bigcode&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode&quot;},{&quot;title&quot;:&quot;GPTSAN 
Japanese&quot;,&quot;id&quot;:&quot;model_doc/gptsan-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese&quot;},{&quot;title&quot;:&quot;GPTSw3&quot;,&quot;id&quot;:&quot;model_doc/gpt-sw3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt-sw3&quot;},{&quot;title&quot;:&quot;HerBERT&quot;,&quot;id&quot;:&quot;model_doc/herbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/herbert&quot;},{&quot;title&quot;:&quot;I-BERT&quot;,&quot;id&quot;:&quot;model_doc/ibert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ibert&quot;},{&quot;title&quot;:&quot;Jukebox&quot;,&quot;id&quot;:&quot;model_doc/jukebox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/jukebox&quot;},{&quot;title&quot;:&quot;LED&quot;,&quot;id&quot;:&quot;model_doc/led&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/led&quot;},{&quot;title&quot;:&quot;LLaMA&quot;,&quot;id&quot;:&quot;model_doc/llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama&quot;},{&quot;title&quot;:&quot;Llama2&quot;,&quot;id&quot;:&quot;model_doc/llama2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama2&quot;},{&quot;title&quot;:&quot;Longformer&quot;,&quot;id&quot;:&quot;model_doc/longformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longformer&quot;},{&quot;title&quot;:&quot;LongT5&quot;,&quot;id&quot;:&quot;model_doc/longt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longt5&quot;},{&quot;title&quot;:&quot;LUKE&quot;,&quot;id&quot;:&quot;model_doc/luke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/luke&quot;},{&quot;title&quot;:&quot;M2M100&quot;,&quot;id&quot;:&quot;model_doc/m2m_100&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/m2m_100&quot;},{&quot;title&quot;:&quot;MarianMT&quot;,&quot;id&quot;:&quot;model_doc/marian&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/marian&quot;},{&quot;title&quot;:&quot;MarkupLM&quot;,&quot;id&quot;:&quot;model_doc/markuplm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/markuplm&quot;},{&quot;title&quot;:&quot;MBart and 
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot;PLBart&quot;,&quot;id&quot;
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/
model_doc/wavlm&quot;},{&quot;title&quot;:&quot;Whisper&quot;,&quot;id&quot;:&quot;model_doc/whisper&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/whisper&quot;},{&quot;title&quot;:&quot;XLS-R&quot;,&quot;id&quot;:&quot;model_doc/xls_r&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xls_r&quot;},{&quot;title&quot;:&quot;XLSR-Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/xlsr_wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2&quot;}]},{&quot;title&quot;:&quot;Multimodal models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALIGN&quot;,&quot;id&quot;:&quot;model_doc/align&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/align&quot;},{&quot;title&quot;:&quot;AltCLIP&quot;,&quot;id&quot;:&quot;model_doc/altclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/altclip&quot;},{&quot;title&quot;:&quot;BLIP&quot;,&quot;id&quot;:&quot;model_doc/blip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip&quot;},{&quot;title&quot;:&quot;BLIP-2&quot;,&quot;id&quot;:&quot;model_doc/blip-2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip-2&quot;},{&quot;title&quot;:&quot;BridgeTower&quot;,&quot;id&quot;:&quot;model_doc/bridgetower&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bridgetower&quot;},{&quot;title&quot;:&quot;BROS&quot;,&quot;id&quot;:&quot;model_doc/bros&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bros&quot;},{&quot;title&quot;:&quot;Chinese-CLIP&quot;,&quot;id&quot;:&quot;model_doc/chinese_clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/chinese_clip&quot;},{&quot;title&quot;:&quot;CLIP&quot;,&quot;id&quot;:&quot;model_doc/clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clip&quot;},{&quot;title&quot;:&quot;CLIPSeg&quot;,&quot;id&quot;:&quot;model_doc/clipseg&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clipseg&quot;},{&quot;title&quot;:&quot;Data2Vec&quot;,&quot;id&quot;:&quot;model_doc/data2vec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/data2vec&quot;},{&quot;title&quot;:&quot;DePlot&quot;,&quot;id&quot;:&quot;model_doc/deplot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deplot&quot;},{&quot;title&quot;:&quot;Donut&quot;,&quot;id&quot;:&quot;model_doc/donut&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/donut&quot;},{&quot;title&quot;:&quot;FLAVA&quot;,&quot;id&quot;:&quot;model_doc/flava&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flava&quot;},{&quot;title&quot;:&quot;GIT&quot;,&quot;id&quot;:&quot;model_doc/git&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/git&quot;},{&quot;title&quot;:&quot;GroupViT&quot;,&quot;id&quot;:&quot;model_doc/groupvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/groupvit&quot;},{&quot;title&quot;:&quot;IDEFICS&quot;,&quot;id&quot;:&quot;model_doc/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/idefics&quot;},{&quot;title&quot;:&quot;InstructBLIP&quot;,&quot;id&quot;:&quot;model_doc/instructblip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/instructblip&quot;},{&quot;title&quot;:&quot;LayoutLM&quot;,&quot;id&quot;:&quot;model_doc/layoutlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlm&quot;},{&quot;title&quot;:&quot;LayoutLMV2&quot;,&quot;i
height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-vcsbqx">Causal language modeling</span></h1> <div class="flex space-x-1 absolute z-10 right-0 top-0"> <div class="relative colab-dropdown "><button class=" " type="button"><img alt="Open In Colab" class="!m-0" src="https://colab.research.google.com/assets/colab-badge.svg"></button> </div> <div class="relative colab-dropdown "><button class=" " type="button"><img alt="Open In Studio Lab" class="!m-0" src="https://studiolab.sagemaker.aws/studiolab.svg"></button> </div></div> <p data-svelte-h="svelte-1j29ona">There are two types of language modeling, causal and masked. This guide illustrates causal language modeling. Causal language models are frequently used for text generation. You can use these models for creative applications like choosing your own text adventure or an intelligent coding assistant like Copilot or CodeParrot.</p> <iframe class="w-full xl:w-4/6 h-80" src="https://www.youtube-nocookie.com/embed/Vpjb1lu0MDk" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe> <p data-svelte-h="svelte-nw6q3g">Causal language modeling predicts the next token in a sequence of tokens, and the model can only attend to tokens on the left. This means the model cannot see future tokens. GPT-2 is an example of a causal language model.</p> <p data-svelte-h="svelte-1aff4p7">This guide will show you how to:</p> <ol data-svelte-h="svelte-1h2f171"><li>Finetune <a href="https://huggingface.co/distilgpt2" rel="nofollow">DistilGPT2</a> on the <a href="https://www.reddit.com/r/askscience/" rel="nofollow">r/askscience</a> subset of the <a href="https://huggingface.co/datasets/eli5" rel="nofollow">ELI5</a> dataset.</li> <li>Use your finetuned model for inference.</li></ol> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400">You can finetune other architectures for causal language modeling following the same steps in this guide. 

This guide will show you how to:

1. Finetune [DistilGPT2](https://huggingface.co/distilgpt2) on the [r/askscience](https://www.reddit.com/r/askscience/) subset of the [ELI5](https://huggingface.co/datasets/eli5) dataset.
2. Use your finetuned model for inference.

You can finetune other architectures for causal language modeling following the same steps in this guide. Choose one of the following architectures:

BART, BERT, Bert Generation, BigBird, BigBird-Pegasus, BioGpt, Blenderbot, BlenderbotSmall, BLOOM, CamemBERT, CodeLlama, CodeGen, CPM-Ant, CTRL, Data2VecText, ELECTRA, ERNIE, Falcon, GIT, GPT-Sw3, OpenAI GPT-2, GPTBigCode, GPT Neo, GPT NeoX, GPT NeoX Japanese, GPT-J, LLaMA, Marian, mBART, MEGA, Megatron-BERT, Mistral, MPT, MusicGen, MVP, OpenLlama, OpenAI GPT, OPT, Pegasus, Persimmon, PLBart, ProphetNet, QDQBert, Reformer, RemBERT, RoBERTa, RoBERTa-PreLayerNorm, RoCBert, RoFormer, RWKV, Speech2Text2, Transformer-XL, TrOCR, XGLM, XLM, XLM-ProphetNet, XLM-RoBERTa, XLM-RoBERTa-XL, XLNet, X-MOD

Before you begin, make sure you have all the necessary libraries installed:

```bash
pip install transformers datasets evaluate
```

We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:

```py
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```

## Load ELI5 dataset

Start by loading a smaller subset of the r/askscience subset of the ELI5 dataset from the 🤗 Datasets library. This'll give you a chance to experiment and make sure everything works before spending more time training on the full dataset.

```py
>>> from datasets import load_dataset

>>> eli5 = load_dataset("eli5", split="train_asks[:5000]")
```

Split the dataset's `train_asks` split into a train and test set with the [train_test_split](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.train_test_split) method:

```py
>>> eli5 = eli5.train_test_split(test_size=0.2)
```

Then take a look at an example:

```py
>>> eli5["train"][0]
{'answers': {'a_id': ['c3d1aib', 'c3d4lya'],
  'score': [6, 3],
  'text': ["The velocity needed to remain in orbit is equal to the square root of Newton's constant times the mass of earth divided by the distance from the center of the earth. I don't know the altitude of that specific mission, but they're usually around 300 km. That means he's going 7-8 km/s.\n\nIn space there are no other forces acting on either the shuttle or the guy, so they stay in the same position relative to each other. If he were to become unable to return to the ship, he would presumably run out of oxygen, or slowly fall into the atmosphere and burn up.",
   "Hope you don't mind me asking another question, but why aren't there any stars visible in this photo?"]},
 'answers_urls': {'url': []},
 'document': '',
 'q_id': 'nyxfp',
 'selftext': '_URL_0_\n\nThis was on the front page earlier and I have a few questions about it. Is it possible to calculate how fast the astronaut would be orbiting the earth? Also how does he stay close to the shuttle so that he can return safely, i.e is he orbiting at the same speed and can therefore stay next to it? And finally if his propulsion system failed, would he eventually re-enter the atmosphere and presumably die?',
 'selftext_urls': {'url': ['http://apod.nasa.gov/apod/image/1201/freeflyer_nasa_3000.jpg']},
 'subreddit': 'askscience',
 'title': 'Few questions about this space walk photograph.',
 'title_urls': {'url': []}}
```

While this may look like a lot, you're only really interested in the `text` field. What's cool about language modeling tasks is you don't need labels (also known as an unsupervised task) because the next word *is* the label.

## Preprocess

The next step is to load a DistilGPT2 tokenizer to process the `text` subfield:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
```

You'll notice from the example above that the `text` field is actually nested inside `answers`. This means you'll need to extract the `text` subfield from its nested structure with the [`flatten`](https://huggingface.co/docs/datasets/process.html#flatten) method:

```py
>>> eli5 = eli5.flatten()
>>> eli5["train"][0]
{'answers.a_id': ['c3d1aib', 'c3d4lya'],
 'answers.score': [6, 3],
 'answers.text': ["The velocity needed to remain in orbit is equal to the square root of Newton's constant times the mass of earth divided by the distance from the center of the earth. I don't know the altitude of that specific mission, but they're usually around 300 km. That means he's going 7-8 km/s.\n\nIn space there are no other forces acting on either the shuttle or the guy, so they stay in the same position relative to each other. If he were to become unable to return to the ship, he would presumably run out of oxygen, or slowly fall into the atmosphere and burn up.",
  "Hope you don't mind me asking another question, but why aren't there any stars visible in this photo?"],
 'answers_urls.url': [],
 'document': '',
 'q_id': 'nyxfp',
 'selftext': '_URL_0_\n\nThis was on the front page earlier and I have a few questions about it. Is it possible to calculate how fast the astronaut would be orbiting the earth? Also how does he stay close to the shuttle so that he can return safely, i.e is he orbiting at the same speed and can therefore stay next to it? And finally if his propulsion system failed, would he eventually re-enter the atmosphere and presumably die?',
 'selftext_urls.url': ['http://apod.nasa.gov/apod/image/1201/freeflyer_nasa_3000.jpg'],
 'subreddit': 'askscience',
 'title': 'Few questions about this space walk photograph.',
 'title_urls.url': []}
```

Each subfield is now a separate column as indicated by the `answers` prefix, and the `text` field is a list now. Instead of tokenizing each sentence separately, convert the list to a string so you can jointly tokenize them.

Here is a first preprocessing function to join the list of strings for each example and tokenize the result:

```py
>>> def preprocess_function(examples):
...     return tokenizer([" ".join(x) for x in examples["answers.text"]])
```

To apply this preprocessing function over the entire dataset, use the 🤗 Datasets [map](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.map) method. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once, and increasing the number of processes with `num_proc`. Remove any columns you don't need:

```py
>>> tokenized_eli5 = eli5.map(
...     preprocess_function,
...     batched=True,
...     num_proc=4,
...     remove_columns=eli5["train"].column_names,
... )
```

This dataset contains the token sequences, but some of these are longer than the maximum input length for the model.

You can now use a second preprocessing function to

- concatenate all the sequences
- split the concatenated sequences into shorter chunks defined by `block_size`, which should be both shorter than the maximum input length and short enough for your GPU RAM.

```py
>>> block_size = 128


>>> def group_texts(examples):
...     # Concatenate all texts.
...     concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
...     total_length = len(concatenated_examples[list(examples.keys())[0]])
...     # We drop the small remainder, we could add padding if the model supported it instead of this drop, you can
...     # customize this part to your needs.
...     if total_length >= block_size:
...         total_length = (total_length // block_size) * block_size
...     # Split by chunks of block_size.
...     result = {
...         k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
...         for k, t in concatenated_examples.items()
...     }
...     result["labels"] = result["input_ids"].copy()
...     return result
```

Apply the `group_texts` function over the entire dataset:

```py
>>> lm_dataset = tokenized_eli5.map(group_texts, batched=True, num_proc=4)
```

Now create a batch of examples using [DataCollatorForLanguageModeling](/docs/transformers/v4.34.0/en/main_classes/data_collator#transformers.DataCollatorForLanguageModeling). It's more efficient to *dynamically pad* the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length.

**Pytorch**

Use the end-of-sequence token as the padding token and set `mlm=False`. This will use the inputs as labels shifted to the right by one element:

```py
>>> from transformers import DataCollatorForLanguageModeling

>>> tokenizer.pad_token = tokenizer.eos_token
>>> data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
```
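
To see the dynamic padding in action, here is a small, purely illustrative sketch (not part of the original guide) that calls the PyTorch collator on two fake features of different lengths. Note that the collator itself copies `input_ids` into `labels` (masking padded positions with `-100`); the one-position shift happens inside the model when it computes the loss:

```py
>>> # Illustrative only: collate two fake examples of unequal length.
>>> batch = data_collator([{"input_ids": [10, 20, 30, 40]}, {"input_ids": [50, 60]}])
>>> batch["input_ids"]  # the shorter example is padded with the eos token id up to the batch maximum
tensor([[   10,    20,    30,    40],
        [   50,    60, 50256, 50256]])
>>> batch["labels"]  # labels mirror input_ids, with padded positions set to -100 so the loss ignores them
tensor([[  10,   20,   30,   40],
        [  50,   60, -100, -100]])
```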

**TensorFlow**

Use the end-of-sequence token as the padding token and set `mlm=False`. This will use the inputs as labels shifted to the right by one element:

```py
>>> from transformers import DataCollatorForLanguageModeling

>>> data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False, return_tensors="tf")
```

## Train

**Pytorch**

If you aren't familiar with finetuning a model with the [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer), take a look at the [basic tutorial](../training#train-with-pytorch-trainer)!

You're ready to start training your model now! Load DistilGPT2 with [AutoModelForCausalLM](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoModelForCausalLM):

```py
>>> from transformers import AutoModelForCausalLM, TrainingArguments, Trainer

>>> model = AutoModelForCausalLM.from_pretrained("distilgpt2")
```

At this point, only three steps remain:

1. Define your training hyperparameters in [TrainingArguments](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.TrainingArguments). The only required parameter is `output_dir` which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model).
2. Pass the training arguments to [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) along with the model, datasets, and data collator.
3. Call [train()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.train) to finetune your model.

```py
>>> training_args = TrainingArguments(
...     output_dir="my_awesome_eli5_clm-model",
...     evaluation_strategy="epoch",
...     learning_rate=2e-5,
...     weight_decay=0.01,
...     push_to_hub=True,
... )

>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=lm_dataset["train"],
...     eval_dataset=lm_dataset["test"],
...     data_collator=data_collator,
... )

>>> trainer.train()
```

Once training is completed, use the [evaluate()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.evaluate) method to evaluate your model and get its perplexity:

```py
>>> import math

>>> eval_results = trainer.evaluate()
>>> print(f"Perplexity: {math.exp(eval_results['eval_loss']):.2f}")
Perplexity: 49.61
```
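
As an aside (not part of the original guide): taking `math.exp` of the evaluation loss gives perplexity because the reported loss is the average per-token cross-entropy, and perplexity is defined as its exponential:

$$\mathrm{PPL} = \exp\left(-\frac{1}{N}\sum_{i=1}^{N}\log p_\theta\left(x_i \mid x_{<i}\right)\right) = \exp(\mathrm{eval\_loss})$$

A lower evaluation loss therefore directly corresponds to a lower (better) perplexity.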
px-4 relative"><div class="flex h-[22px] mt-[-12.5px] justify-between leading-none"><div class="flex px-1 items-center space-x-1 bg-white dark:bg-gray-950"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="0.94em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 274"><path d="M145.726 42.065v42.07l72.861 42.07v-42.07l-72.86-42.07zM0 84.135v42.07l36.43 21.03V105.17L0 84.135zm109.291 21.035l-36.43 21.034v126.2l36.43 21.035v-84.135l36.435 21.035v-42.07l-36.435-21.034V105.17z" fill="#E55B2D"></path><path d="M145.726 42.065L36.43 105.17v42.065l72.861-42.065v42.065l36.435-21.03v-84.14zM255.022 63.1l-36.435 21.035v42.07l36.435-21.035V63.1zm-72.865 84.135l-36.43 21.035v42.07l36.43-21.036v-42.07zm-36.43 63.104l-36.436-21.035v84.135l36.435-21.035V210.34z" fill="#ED8E24"></path><path d="M145.726 0L0 84.135l36.43 21.035l109.296-63.105l72.861 42.07L255.022 63.1L145.726 0zm0 126.204l-36.435 21.03l36.435 21.036l36.43-21.035l-36.43-21.03z" fill="#F8BF3C"></path></svg> <span>TensorFlow</span></div> <div class="cursor-pointer flex items-center justify-center space-x-1 text-sm px-2 bg-white dark:bg-gray-950 hover:underline leading-none"><svg class="" width="0.9em" height="0.9em" viewBox="0 0 10 9" fill="currentColor" xmlns="http://www.w3.org/2000/svg"><path d="M1.39125 1.9725L0.0883333 0.669997L0.677917 0.0804138L8.9275 8.33041L8.33792 8.91958L6.95875 7.54041C6.22592 8.00523 5.37572 8.25138 4.50792 8.25C2.26125 8.25 0.392083 6.63333 0 4.5C0.179179 3.52946 0.667345 2.64287 1.39167 1.9725H1.39125ZM5.65667 6.23833L5.04667 5.62833C4.81335 5.73996 4.55116 5.77647 4.29622 5.73282C4.04129 5.68918 3.80617 5.56752 3.62328 5.38463C3.44039 5.20175 3.31874 4.96663 3.27509 4.71169C3.23144 4.45676 3.26795 4.19456 3.37958 3.96125L2.76958 3.35125C2.50447 3.75187 2.38595 4.2318 2.4341 4.70978C2.48225 5.18777 2.6941 5.63442 3.0338 5.97411C3.37349 6.31381 3.82015 6.52567 4.29813 6.57382C4.77611 6.62197 5.25605 6.50345 5.65667 6.23833ZM2.83042 1.06666C3.35 0.862497 3.91625 0.749997 4.50792 0.749997C6.75458 0.749997 8.62375 2.36666 9.01583 4.5C8.88816 5.19404 8.60119 5.84899 8.1775 6.41333L6.56917 4.805C6.61694 4.48317 6.58868 4.15463 6.48664 3.84569C6.3846 3.53675 6.21162 3.256 5.98156 3.02594C5.7515 2.79588 5.47075 2.6229 5.16181 2.52086C4.85287 2.41882 4.52433 2.39056 4.2025 2.43833L2.83042 1.06708V1.06666Z" fill="currentColor"></path></svg> <span>Hide TensorFlow content</span></div></div> <div class="framework-content"><div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-1mqyx4y">If you aren’t familiar with finetuning a model with Keras, take a look at the <a href="../training#train-a-tensorflow-model-with-keras">basic tutorial</a>!</p></div> To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters: <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" 
preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> create_optimizer, AdamWeightDecay <span class="hljs-meta">&gt;&gt;&gt; </span>optimizer = AdamWeightDecay(learning_rate=<span class="hljs-number">2e-5</span>, weight_decay_rate=<span class="hljs-number">0.01</span>)</pre></div> <p data-svelte-h="svelte-tm6376">Then you can load DistilGPT2 with <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.TFAutoModelForCausalLM">TFAutoModelForCausalLM</a>:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> TFAutoModelForCausalLM <span class="hljs-meta">&gt;&gt;&gt; </span>model = TFAutoModelForCausalLM.from_pretrained(<span class="hljs-string">"distilgpt2"</span>)</pre></div> <p data-svelte-h="svelte-qmwuyd">Convert your datasets to the <code>tf.data.Dataset</code> format with <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel.prepare_tf_dataset">prepare_tf_dataset()</a>:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path 
d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>tf_train_set = model.prepare_tf_dataset( <span class="hljs-meta">... </span> lm_dataset[<span class="hljs-string">"train"</span>], <span class="hljs-meta">... </span> shuffle=<span class="hljs-literal">True</span>, <span class="hljs-meta">... </span> batch_size=<span class="hljs-number">16</span>, <span class="hljs-meta">... </span> collate_fn=data_collator, <span class="hljs-meta">... </span>) <span class="hljs-meta">&gt;&gt;&gt; </span>tf_test_set = model.prepare_tf_dataset( <span class="hljs-meta">... </span> lm_dataset[<span class="hljs-string">"test"</span>], <span class="hljs-meta">... </span> shuffle=<span class="hljs-literal">False</span>, <span class="hljs-meta">... </span> batch_size=<span class="hljs-number">16</span>, <span class="hljs-meta">... </span> collate_fn=data_collator, <span class="hljs-meta">... </span>)</pre></div> <p data-svelte-h="svelte-17cxx5e">Configure the model for training with <a href="https://keras.io/api/models/model_training_apis/#compile-method" rel="nofollow"><code>compile</code></a>. Note that Transformers models all have a default task-relevant loss function, so you don’t need to specify one unless you want to:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> tensorflow <span class="hljs-keyword">as</span> tf <span class="hljs-meta">&gt;&gt;&gt; </span>model.<span class="hljs-built_in">compile</span>(optimizer=optimizer) <span class="hljs-comment"># No loss argument!</span></pre></div> <p data-svelte-h="svelte-ufj5fr">This can be done by specifying where to push your model and tokenizer in the <a 
href="/docs/transformers/v4.34.0/en/main_classes/keras_callbacks#transformers.PushToHubCallback">PushToHubCallback</a>:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers.keras_callbacks <span class="hljs-keyword">import</span> PushToHubCallback <span class="hljs-meta">&gt;&gt;&gt; </span>callback = PushToHubCallback( <span class="hljs-meta">... </span> output_dir=<span class="hljs-string">"my_awesome_eli5_clm-model"</span>, <span class="hljs-meta">... </span> tokenizer=tokenizer, <span class="hljs-meta">... </span>)</pre></div> <p data-svelte-h="svelte-1pfsro2">Finally, you’re ready to start training your model! 
Call <a href="https://keras.io/api/models/model_training_apis/#fit-method" rel="nofollow"><code>fit</code></a> with your training and validation datasets, the number of epochs, and your callback to finetune the model:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=<span class="hljs-number">3</span>, callbacks=[callback])</pre></div> <p data-svelte-h="svelte-2s71om">Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!</p></div></div> </div> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-bbjpvr">For a more in-depth example of how to finetune a model for causal language modeling, take a look at the corresponding <a href="https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb" rel="nofollow">PyTorch notebook</a> or <a href="https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb" rel="nofollow">TensorFlow notebook</a>.</p></div> <h2 class="relative group"><a id="inference" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#inference"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-199uz7g">Inference</span></h2> <p data-svelte-h="svelte-633ppb">Great, now 
that you’ve finetuned a model, you can use it for inference!</p> <p data-svelte-h="svelte-12i1768">Come up with a prompt you’d like to generate text from:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>prompt = <span class="hljs-string">"Somatic hypermutation allows the immune system to"</span></pre></div> <p data-svelte-h="svelte-1532y41">The simplest way to try out your finetuned model for inference is to use it in a <a href="/docs/transformers/v4.34.0/en/main_classes/pipelines#transformers.pipeline">pipeline()</a>. Instantiate a <code>pipeline</code> for text generation with your model, and pass your text to it:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> pipeline <span class="hljs-meta">&gt;&gt;&gt; </span>generator = pipeline(<span class="hljs-string">"text-generation"</span>, model=<span class="hljs-string">"my_awesome_eli5_clm-model"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>generator(prompt) [{<span class="hljs-string">'generated_text'</span>: <span class="hljs-string">"Somatic hypermutation allows the immune system to be able to effectively reverse the 
damage caused by an infection.\n\n\nThe damage caused by an infection is caused by the immune system's ability to perform its own self-correcting tasks."</span>}]</pre></div> <div class="space-y-10 py-6 2xl:py-8 2xl:-mx-4"><div class="border border-gray-200 rounded-xl px-4 relative"><div class="flex h-[22px] mt-[-12.5px] justify-between leading-none"><div class="flex px-1 items-center space-x-1 bg-white dark:bg-gray-950"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><defs><clipPath id="a"><rect x="3.05" y="0.5" width="25.73" height="31" fill="none"></rect></clipPath></defs><g clip-path="url(#a)"><path d="M24.94,9.51a12.81,12.81,0,0,1,0,18.16,12.68,12.68,0,0,1-18,0,12.81,12.81,0,0,1,0-18.16l9-9V5l-.84.83-6,6a9.58,9.58,0,1,0,13.55,0ZM20.44,9a1.68,1.68,0,1,1,1.67-1.67A1.68,1.68,0,0,1,20.44,9Z" fill="#ee4c2c"></path></g></svg> <span>Pytorch</span></div> <div class="cursor-pointer flex items-center justify-center space-x-1 text-sm px-2 bg-white dark:bg-gray-950 hover:underline leading-none"><svg class="" width="0.9em" height="0.9em" viewBox="0 0 10 9" fill="currentColor" xmlns="http://www.w3.org/2000/svg"><path d="M1.39125 1.9725L0.0883333 0.669997L0.677917 0.0804138L8.9275 8.33041L8.33792 8.91958L6.95875 7.54041C6.22592 8.00523 5.37572 8.25138 4.50792 8.25C2.26125 8.25 0.392083 6.63333 0 4.5C0.179179 3.52946 0.667345 2.64287 1.39167 1.9725H1.39125ZM5.65667 6.23833L5.04667 5.62833C4.81335 5.73996 4.55116 5.77647 4.29622 5.73282C4.04129 5.68918 3.80617 5.56752 3.62328 5.38463C3.44039 5.20175 3.31874 4.96663 3.27509 4.71169C3.23144 4.45676 3.26795 4.19456 3.37958 3.96125L2.76958 3.35125C2.50447 3.75187 2.38595 4.2318 2.4341 4.70978C2.48225 5.18777 2.6941 5.63442 3.0338 5.97411C3.37349 6.31381 3.82015 6.52567 4.29813 6.57382C4.77611 6.62197 5.25605 6.50345 5.65667 6.23833ZM2.83042 1.06666C3.35 0.862497 3.91625 0.749997 4.50792 0.749997C6.75458 0.749997 8.62375 2.36666 9.01583 4.5C8.88816 5.19404 8.60119 5.84899 8.1775 6.41333L6.56917 4.805C6.61694 4.48317 6.58868 4.15463 6.48664 3.84569C6.3846 3.53675 6.21162 3.256 5.98156 3.02594C5.7515 2.79588 5.47075 2.6229 5.16181 2.52086C4.85287 2.41882 4.52433 2.39056 4.2025 2.43833L2.83042 1.06708V1.06666Z" fill="currentColor"></path></svg> <span>Hide Pytorch content</span></div></div> <div class="framework-content"><p data-svelte-h="svelte-1c2y1ia">Tokenize the text and return the <code>input_ids</code> as PyTorch tensors:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute 
bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"my_awesome_eli5_clm-model"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(prompt, return_tensors=<span class="hljs-string">"pt"</span>).input_ids</pre></div> <p data-svelte-h="svelte-13iao8h">Use the <a href="/docs/transformers/v4.34.0/en/main_classes/text_generation#transformers.GenerationMixin.generate">generate()</a> method to generate text. For more details about the different text generation strategies and parameters for controlling generation, check out the <a href="../generation_strategies">Text generation strategies</a> page.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoModelForCausalLM <span class="hljs-meta">&gt;&gt;&gt; </span>model = AutoModelForCausalLM.from_pretrained(<span class="hljs-string">"my_awesome_eli5_clm-model"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model.generate(inputs, max_new_tokens=<span class="hljs-number">100</span>, do_sample=<span class="hljs-literal">True</span>, top_k=<span class="hljs-number">50</span>, top_p=<span class="hljs-number">0.95</span>)</pre></div> <p data-svelte-h="svelte-1918fu9">Decode the generated token ids back into text:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path 
d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer.batch_decode(outputs, skip_special_tokens=<span class="hljs-literal">True</span>) [<span class="hljs-string">"Somatic hypermutation allows the immune system to react to drugs with the ability to adapt to a different environmental situation. In other words, a system of 'hypermutation' can help the immune system to adapt to a different environmental situation or in some cases even a single life. In contrast, researchers at the University of Massachusetts-Boston have found that 'hypermutation' is much stronger in mice than in humans but can be found in humans, and that it's not completely unknown to the immune system. A study on how the immune system"</span>]</pre></div></div></div> <div class="border border-gray-200 rounded-xl px-4 relative"><div class="flex h-[22px] mt-[-12.5px] justify-between leading-none"><div class="flex px-1 items-center space-x-1 bg-white dark:bg-gray-950"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="0.94em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 274"><path d="M145.726 42.065v42.07l72.861 42.07v-42.07l-72.86-42.07zM0 84.135v42.07l36.43 21.03V105.17L0 84.135zm109.291 21.035l-36.43 21.034v126.2l36.43 21.035v-84.135l36.435 21.035v-42.07l-36.435-21.034V105.17z" fill="#E55B2D"></path><path d="M145.726 42.065L36.43 105.17v42.065l72.861-42.065v42.065l36.435-21.03v-84.14zM255.022 63.1l-36.435 21.035v42.07l36.435-21.035V63.1zm-72.865 84.135l-36.43 21.035v42.07l36.43-21.036v-42.07zm-36.43 63.104l-36.436-21.035v84.135l36.435-21.035V210.34z" fill="#ED8E24"></path><path d="M145.726 0L0 84.135l36.43 21.035l109.296-63.105l72.861 42.07L255.022 63.1L145.726 0zm0 126.204l-36.435 21.03l36.435 21.036l36.43-21.035l-36.43-21.03z" fill="#F8BF3C"></path></svg> <span>TensorFlow</span></div> <div class="cursor-pointer flex items-center justify-center space-x-1 text-sm px-2 bg-white dark:bg-gray-950 hover:underline leading-none"><svg class="" width="0.9em" height="0.9em" viewBox="0 0 10 9" fill="currentColor" xmlns="http://www.w3.org/2000/svg"><path d="M1.39125 1.9725L0.0883333 0.669997L0.677917 0.0804138L8.9275 8.33041L8.33792 8.91958L6.95875 7.54041C6.22592 8.00523 5.37572 8.25138 4.50792 8.25C2.26125 8.25 0.392083 6.63333 0 4.5C0.179179 3.52946 0.667345 2.64287 1.39167 1.9725H1.39125ZM5.65667 6.23833L5.04667 5.62833C4.81335 5.73996 4.55116 5.77647 4.29622 5.73282C4.04129 5.68918 3.80617 5.56752 3.62328 5.38463C3.44039 5.20175 3.31874 4.96663 3.27509 4.71169C3.23144 4.45676 3.26795 4.19456 3.37958 3.96125L2.76958 3.35125C2.50447 3.75187 2.38595 4.2318 2.4341 4.70978C2.48225 5.18777 2.6941 5.63442 3.0338 5.97411C3.37349 6.31381 3.82015 6.52567 4.29813 6.57382C4.77611 6.62197 5.25605 6.50345 5.65667 6.23833ZM2.83042 1.06666C3.35 0.862497 3.91625 0.749997 4.50792 0.749997C6.75458 0.749997 8.62375 2.36666 9.01583 4.5C8.88816 5.19404 8.60119 5.84899 8.1775 
6.41333L6.56917 4.805C6.61694 4.48317 6.58868 4.15463 6.48664 3.84569C6.3846 3.53675 6.21162 3.256 5.98156 3.02594C5.7515 2.79588 5.47075 2.6229 5.16181 2.52086C4.85287 2.41882 4.52433 2.39056 4.2025 2.43833L2.83042 1.06708V1.06666Z" fill="currentColor"></path></svg> <span>Hide TensorFlow content</span></div></div> <div class="framework-content"><p data-svelte-h="svelte-hw2mu6">Tokenize the text and return the <code>input_ids</code> as TensorFlow tensors:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"my_awesome_eli5_clm-model"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(prompt, return_tensors=<span class="hljs-string">"tf"</span>).input_ids</pre></div> <p data-svelte-h="svelte-1yoge13">Use the <a href="/docs/transformers/v4.34.0/en/main_classes/text_generation#transformers.TFGenerationMixin.generate">generate()</a> method to create the summarization. 
For more details about the different text generation strategies and parameters for controlling generation, check out the <a href="../generation_strategies">Text generation strategies</a> page.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> TFAutoModelForCausalLM <span class="hljs-meta">&gt;&gt;&gt; </span>model = TFAutoModelForCausalLM.from_pretrained(<span class="hljs-string">"my_awesome_eli5_clm-model"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model.generate(input_ids=inputs, max_new_tokens=<span class="hljs-number">100</span>, do_sample=<span class="hljs-literal">True</span>, top_k=<span class="hljs-number">50</span>, top_p=<span class="hljs-number">0.95</span>)</pre></div> <p data-svelte-h="svelte-1918fu9">Decode the generated token ids back into text:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer.batch_decode(outputs, skip_special_tokens=<span class="hljs-literal">True</span>) [<span class="hljs-string">'Somatic hypermutation allows the immune system to detect the presence of other viruses as they become more prevalent. 
Therefore, researchers have identified a high proportion of human viruses. The proportion of virus-associated viruses in our study increases with age. Therefore, we propose a simple algorithm to detect the presence of these new viruses in our samples as a sign of improved immunity. A first study based on this algorithm, which will be published in Science on Friday, aims to show that this finding could translate into the development of a better vaccine that is more effective for'</span>]</pre></div></div></div> </div> <p></p> <div id="svelte-announcer" aria-live="assertive" aria-atomic="true" style="position: absolute; left: 0px; top: 0px; clip: rect(0px, 0px, 0px, 0px); clip-path: inset(50%); overflow: hidden; white-space: nowrap; width: 1px; height: 1px;"></div></div> <div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/tasks/question_answering" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>Question answering</a> <a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/tasks/masked_language_modeling" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">Masked language modeling<span class="ml-2 translate-y-px">→</span></a></div></div></div> <div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" data-props="{&quot;chapter&quot;:{&quot;title&quot;:&quot;Causal language modeling&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;causal-language-modeling&quot;,&quot;url&quot;:&quot;#causal-language-modeling&quot;,&quot;sections&quot;:[{&quot;title&quot;:&quot;Load ELI5 dataset&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;load-eli5-dataset&quot;,&quot;url&quot;:&quot;#load-eli5-dataset&quot;},{&quot;title&quot;:&quot;Preprocess&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;preprocess&quot;,&quot;url&quot;:&quot;#preprocess&quot;},{&quot;title&quot;:&quot;Train&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;train&quot;,&quot;url&quot;:&quot;#train&quot;},{&quot;title&quot;:&quot;Inference&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;inference&quot;,&quot;url&quot;:&quot;#inference&quot;}]}}" data-target="SubSideMenu"><nav class="hidden h-screen w-[270px] flex-none flex-col space-y-3 overflow-y-auto break-words border-l pt-24 pl-6 pr-10 pb-16 text-sm lg:flex 2xl:w-[305px]"><a href="#causal-language-modeling" class=" text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-causal-language-modeling"><wbr>Causal language modeling</a> <a href="#load-eli5-dataset" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-load-eli5-dataset"><wbr>Load EL<wbr>I5 dataset</a> <a href="#preprocess" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-preprocess"><wbr>Preprocess</a> <a href="#train" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-train"><wbr>Train</a> <a href="#inference" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-inference"><wbr>Inference</a> </nav></div></div></div> <div id="doc-footer"></div></main> </div> 
<script> import("/front/build/kube-5e23f38/index.js"); window.moonSha = "kube-5e23f38/"; window.hubConfig = JSON.parse(`{"features":{"signupDisabled":false},"sshGitUrl":"git@hf.co","moonHttpUrl":"https://huggingface.co","captchaApiKey":"bd5f2066-93dc-4bdd-a64b-a24646ca3859","stripePublicKey":"pk_live_x2tdjFXBCvXo2FFmMybezpeM00J6gPCAAc","environment":"production","userAgent":"HuggingFace (production)"}`); </script> <!-- Stripe --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://js.stripe.com/v3/"; script.async = true; document.head.appendChild(script); } </script> <!-- Google analytics v4 --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL"; script.async = true; document.head.appendChild(script); window.dataLayer = window.dataLayer || []; function gtag() { if (window.dataLayer !== undefined) { window.dataLayer.push(arguments); } } gtag("js", new Date()); gtag("config", "G-8Q63TH4CSL", { page_path: "/docs/transformers/v4.34.0/en/tasks/language_modeling" }); /// ^ See https://developers.google.com/analytics/devguides/collection/gtagjs/pages gtag("consent", "default", { ad_storage: "denied", analytics_storage: "denied" }); /// ^ See https://developers.google.com/tag-platform/gtagjs/reference#consent /// TODO: ask the user for their consent and update this with gtag('consent', 'update') } </script> <!-- Google Analytics v3 (deprecated) --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { (function (i, s, o, g, r, a, m) { i["GoogleAnalyticsObject"] = r; (i[r] = i[r] || function () { (i[r].q = i[r].q || []).push(arguments); }), (i[r].l = 1 * new Date()); (a = s.createElement(o)), (m = s.getElementsByTagName(o)[0]); a.async = 1; a.src = g; m.parentNode.insertBefore(a, m); })(window, document, "script", "https://www.google-analytics.com/analytics.js", "ganalytics"); ganalytics("create", "UA-83738774-2", "auto"); ganalytics("send", "pageview", "/docs/transformers/v4.34.0/en/tasks/language_modeling"); } </script> <iframe name="__privateStripeMetricsController4960" frameborder="0" allowtransparency="true" scrolling="no" role="presentation" allow="payment *" src="https://js.stripe.com/v3/m-outer-27c67c0d52761104439bb051c7856ab1.html#url=https%3A%2F%2Fhuggingface.co%2Fdocs%2Ftransformers%2Fv4.34.0%2Fen%2Ftasks%2Flanguage_modeling&amp;title=Causal%20language%20modeling&amp;referrer=&amp;muid=577a1d98-59a0-46fc-98a8-36ee316848488be1c3&amp;sid=95f156dd-eb84-4e70-95ef-3883996ebe1530e886&amp;version=6&amp;preview=false" aria-hidden="true" tabindex="-1" style="border: none !important; margin: 0px !important; padding: 0px !important; width: 1px !important; min-width: 100% !important; overflow: hidden !important; display: block !important; visibility: hidden !important; position: fixed !important; height: 1px !important; pointer-events: none !important; user-select: none !important;"></iframe></body></html>
2023-10-05T13:33:42.409Z
https://huggingface.co/docs/transformers/v4.34.0/en/main_classes/models
The documentation page MAIN\_CLASSES/MODELS doesn’t exist in v4.34.0, but exists on the main version. Click [here](/docs/transformers/main/en/main_classes/models) to redirect to the main version of the documentation.
2023-10-05T13:33:42.439Z
Image tasks with IDEFICS
https://huggingface.co/docs/transformers/v4.34.0/en/tasks/idefics
# Image tasks with IDEFICS

While individual tasks can be tackled by fine-tuning specialized models, an alternative approach that has recently emerged and gained popularity is to use large models for a diverse set of tasks without fine-tuning. For instance, large language models can handle NLP tasks such as summarization, translation, classification, and more. This approach is no longer limited to a single modality, such as text, and in this guide, we will illustrate how you can solve image-text tasks with a large multimodal model called IDEFICS.

[IDEFICS](../model_doc/idefics) is an open-access vision and language model based on [Flamingo](https://huggingface.co/papers/2204.14198), a state-of-the-art visual language model initially developed by DeepMind. The model accepts arbitrary sequences of image and text inputs and generates coherent text as output. It can answer questions about images, describe visual content, create stories grounded in multiple images, and so on. IDEFICS comes in two variants - [80 billion parameters](https://huggingface.co/HuggingFaceM4/idefics-80b) and [9 billion parameters](https://huggingface.co/HuggingFaceM4/idefics-9b), both of which are available on the 🤗 Hub. For each variant, you can also find fine-tuned instructed versions of the model adapted for conversational use cases.

This model is exceptionally versatile and can be used for a wide range of image and multimodal tasks. However, being a large model means it requires significant computational resources and infrastructure. It is up to you to decide whether this approach suits your use case better than fine-tuning specialized models for each individual task.

In this guide, you’ll learn how to:

- [Load IDEFICS](#loading-the-model) and [load the quantized version of the model](#quantized-model)
- Use IDEFICS for:
  - [Image captioning](#image-captioning)
  - [Prompted image captioning](#prompted-image-captioning)
  - [Few-shot prompting](#few-shot-prompting)
  - [Visual question answering](#visual-question-answering)
  - [Image classification](#image-classification)
  - [Image-guided text generation](#image-guided-text-generation)
- [Run inference in batch mode](#running-inference-in-batch-mode)
- [Run IDEFICS instruct for conversational use](#idefics-instruct-for-conversational-use)

Before you begin, make sure you have all the necessary libraries installed.

```
pip install -q bitsandbytes sentencepiece accelerate transformers
```

To run the following examples with a non-quantized version of the model checkpoint, you will need at least 20GB of GPU memory.

## Loading the model

Let’s start by loading the model’s 9 billion parameter checkpoint:

```
>>> checkpoint = "HuggingFaceM4/idefics-9b"
```

Just like for other Transformers models, you need to load a processor and the model itself from the checkpoint. The IDEFICS processor wraps a [LlamaTokenizer](/docs/transformers/v4.34.0/en/model_doc/llama2#transformers.LlamaTokenizer) and IDEFICS image processor into a single processor to take care of preparing text and image inputs for the model.

```
>>> import torch
>>> from transformers import IdeficsForVisionText2Text, AutoProcessor

>>> processor = AutoProcessor.from_pretrained(checkpoint)

>>> model = IdeficsForVisionText2Text.from_pretrained(checkpoint, torch_dtype=torch.bfloat16, device_map="auto")
```

Setting `device_map` to `"auto"` will automatically determine how to load and store the model weights in the most optimized manner given existing devices.
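If you are curious where each part of the model ended up, the resolved placement is recorded on the model itself. A minimal sketch (the exact map depends on your available devices):

```
>>> # Minimal sketch (assumes the model was loaded with device_map="auto" as above).
>>> # Keys are module names, values are the devices they were dispatched to.
>>> print(model.hf_device_map)
```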
### Quantized model

If high-memory GPU availability is an issue, you can load the quantized version of the model. To load the model and the processor in 4-bit precision, pass a `BitsAndBytesConfig` to the `from_pretrained` method and the model will be compressed on the fly while loading.

```
>>> import torch
>>> from transformers import IdeficsForVisionText2Text, AutoProcessor, BitsAndBytesConfig

>>> quantization_config = BitsAndBytesConfig(
...     load_in_4bit=True,
...     bnb_4bit_compute_dtype=torch.float16,
... )

>>> processor = AutoProcessor.from_pretrained(checkpoint)

>>> model = IdeficsForVisionText2Text.from_pretrained(
...     checkpoint,
...     quantization_config=quantization_config,
...     device_map="auto"
... )
```

Now that you have the model loaded in one of the suggested ways, let’s move on to exploring tasks that you can use IDEFICS for.

## Image captioning

Image captioning is the task of predicting a caption for a given image. A common application is to help visually impaired people navigate different situations, for instance, by exploring image content online.

To illustrate the task, get an image to be captioned, e.g.:

![Image of a puppy in a flower bed](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/idefics-im-captioning.jpg)

Photo by [Hendo Wang](https://unsplash.com/@hendoo).

IDEFICS accepts text and image prompts. However, to caption an image, you do not have to provide a text prompt to the model, only the preprocessed input image. Without a text prompt, the model will start generating text from the BOS (beginning-of-sequence) token, thus creating a caption. As image input to the model, you can use either an image object (`PIL.Image`) or a URL from which the image can be retrieved.

```
>>> prompt = [
...     "https://images.unsplash.com/photo-1583160247711-2191776b4b91?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3542&q=80",
... ]

>>> inputs = processor(prompt, return_tensors="pt").to("cuda")

>>> bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids

>>> generated_ids = model.generate(**inputs, max_new_tokens=10, bad_words_ids=bad_words_ids)
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)
>>> print(generated_text[0])
A puppy in a flower bed
```

It is a good idea to include `bad_words_ids` in the call to `generate` to avoid errors arising when increasing `max_new_tokens`: the model will want to generate a new `<image>` or `<fake_token_around_image>` token when there is no image being generated by the model. You can set it on the fly as in this guide, or store it in the `GenerationConfig` as described in the [Text generation strategies](../generation_strategies) guide.
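For instance, here is a minimal sketch of storing `bad_words_ids` on the model’s generation config so you don’t have to repeat it in every call (assuming `model`, `inputs`, and `bad_words_ids` are defined as above):

```
>>> # Minimal sketch: values stored on the model's generation config are used by every
>>> # subsequent `generate` call unless they are overridden explicitly.
>>> model.generation_config.bad_words_ids = bad_words_ids

>>> generated_ids = model.generate(**inputs, max_new_tokens=10)  # no need to repeat bad_words_ids
```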
"This is an image of ", ... ] >>> inputs = processor(prompt, return_tensors="pt").to("cuda") >>> bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids >>> generated_ids = model.generate(**inputs, max_new_tokens=10, bad_words_ids=bad_words_ids) >>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True) >>> print(generated_text[0]) This is an image of the Eiffel Tower in Paris, France. ``` ## Few-shot prompting While IDEFICS demonstrates great zero-shot results, your task may require a certain format of the caption, or come with other restrictions or requirements that increase task’s complexity. Few-shot prompting can be used to enable in-context learning. By providing examples in the prompt, you can steer the model to generate results that mimic the format of given examples. Let’s use the previous image of the Eiffel Tower as an example for the model and build a prompt that demonstrates to the model that in addition to learning what the object in an image is, we would also like to get some interesting information about it. Then, let’s see, if we can get the same response format for an image of the Statue of Liberty: ![Image of the Statue of Liberty](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/idefics-few-shot.jpg) Photo by [Juan Mayobre](https://unsplash.com/@jmayobres). ``` >>> prompt = ["User:", ... "https://images.unsplash.com/photo-1543349689-9a4d426bee8e?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3501&q=80", ... "Describe this image.\nAssistant: An image of the Eiffel Tower at night. Fun fact: the Eiffel Tower is the same height as an 81-storey building.\n", ... "User:", ... "https://images.unsplash.com/photo-1524099163253-32b7f0256868?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3387&q=80", ... "Describe this image.\nAssistant:" ... ] >>> inputs = processor(prompt, return_tensors="pt").to("cuda") >>> bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids >>> generated_ids = model.generate(**inputs, max_new_tokens=30, bad_words_ids=bad_words_ids) >>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True) >>> print(generated_text[0]) User: Describe this image. Assistant: An image of the Eiffel Tower at night. Fun fact: the Eiffel Tower is the same height as an 81-storey building. User: Describe this image. Assistant: An image of the Statue of Liberty. Fun fact: the Statue of Liberty is 151 feet tall. ``` Notice that just from a single example (i.e., 1-shot) the model has learned how to perform the task. For more complex tasks, feel free to experiment with a larger number of examples (e.g., 3-shot, 5-shot, etc.). ## Visual question answering Visual Question Answering (VQA) is the task of answering open-ended questions based on an image. Similar to image captioning it can be used in accessibility applications, but also in education (reasoning about visual materials), customer service (questions about products based on images), and image retrieval. Let’s get a new image for this task: ![Image of a couple having a picnic](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/idefics-vqa.jpg) Photo by [Jarritos Mexican Soda](https://unsplash.com/@jarritos). 
You can steer the model from image captioning to visual question answering by prompting it with appropriate instructions:

```
>>> prompt = [
...     "Instruction: Provide an answer to the question. Use the image to answer.\n",
...     "https://images.unsplash.com/photo-1623944889288-cd147dbb517c?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3540&q=80",
...     "Question: Where are these people and what's the weather like? Answer:"
... ]

>>> inputs = processor(prompt, return_tensors="pt").to("cuda")

>>> bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids

>>> generated_ids = model.generate(**inputs, max_new_tokens=20, bad_words_ids=bad_words_ids)
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)
>>> print(generated_text[0])
Instruction: Provide an answer to the question. Use the image to answer.
Question: Where are these people and what's the weather like? Answer: They're in a park in New York City, and it's a beautiful day.
```

## Image classification

IDEFICS is capable of classifying images into different categories without being explicitly trained on data containing labeled examples from those specific categories. Given a list of categories and using its image and text understanding capabilities, the model can infer which category the image likely belongs to.

Say we have this image of a vegetable stand:

![Image of a vegetable stand](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/idefics-classification.jpg)

Photo by [Peter Wendt](https://unsplash.com/@peterwendt).

We can instruct the model to classify the image into one of the categories that we have:

```
>>> categories = ['animals','vegetables', 'city landscape', 'cars', 'office']
>>> prompt = [f"Instruction: Classify the following image into a single category from the following list: {categories}.\n",
...     "https://images.unsplash.com/photo-1471193945509-9ad0617afabf?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3540&q=80",
...     "Category: "
... ]

>>> inputs = processor(prompt, return_tensors="pt").to("cuda")

>>> bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids

>>> generated_ids = model.generate(**inputs, max_new_tokens=4, bad_words_ids=bad_words_ids)
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)
>>> print(generated_text[0])
Instruction: Classify the following image into a single category from the following list: ['animals', 'vegetables', 'city landscape', 'cars', 'office'].
Category: Vegetables
```

In the example above, we instructed the model to classify the image into a single category; however, you can also prompt the model to do rank classification.

## Image-guided text generation

For more creative applications, you can use image-guided text generation to generate text based on an image. This can be useful for creating descriptions of products, ads, descriptions of a scene, etc.

Let’s prompt IDEFICS to write a story based on a simple image of a red door:

![Image of a red door with a pumpkin on the steps](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/idefics-story-generation.jpg)

Photo by [Craig Tidball](https://unsplash.com/@devonshiremedia).

```
>>> prompt = ["Instruction: Use the image to write a story. \n",
...     "https://images.unsplash.com/photo-1517086822157-2b0358e7684a?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=2203&q=80",
...     "Story: \n"]

>>> inputs = processor(prompt, return_tensors="pt").to("cuda")

>>> bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids

>>> generated_ids = model.generate(**inputs, num_beams=2, max_new_tokens=200, bad_words_ids=bad_words_ids)
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)
>>> print(generated_text[0])
Instruction: Use the image to write a story.
Story:
Once upon a time, there was a little girl who lived in a house with a red door. She loved her red door. It was the prettiest door in the whole world. One day, the little girl was playing in her yard when she noticed a man standing on her doorstep. He was wearing a long black coat and a top hat. The little girl ran inside and told her mother about the man. Her mother said, “Don’t worry, honey. He’s just a friendly ghost.” The little girl wasn’t sure if she believed her mother, but she went outside anyway. When she got to the door, the man was gone. The next day, the little girl was playing in her yard again when she noticed the man standing on her doorstep. He was wearing a long black coat and a top hat. The little girl ran
```

Looks like IDEFICS noticed the pumpkin on the doorstep and went with a spooky Halloween story about a ghost.

For longer outputs like this, you will greatly benefit from tweaking the text generation strategy. This can help you significantly improve the quality of the generated output. Check out [Text generation strategies](../generation_strategies) to learn more.
## Running inference in batch mode

All of the earlier sections illustrated using IDEFICS for a single example. In a very similar fashion, you can run inference for a batch of examples by passing a list of prompts:

```
>>> prompts = [
...     [   "https://images.unsplash.com/photo-1543349689-9a4d426bee8e?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3501&q=80",
...         "This is an image of ",
...     ],
...     [   "https://images.unsplash.com/photo-1623944889288-cd147dbb517c?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3540&q=80",
...         "This is an image of ",
...     ],
...     [   "https://images.unsplash.com/photo-1471193945509-9ad0617afabf?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3540&q=80",
...         "This is an image of ",
...     ],
... ]

>>> inputs = processor(prompts, return_tensors="pt")

>>> bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids

>>> generated_ids = model.generate(**inputs, max_new_tokens=10, bad_words_ids=bad_words_ids)
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)
>>> for i,t in enumerate(generated_text):
...     print(f"{i}:\n{t}\n")
0:
This is an image of the Eiffel Tower in Paris, France.

1:
This is an image of a couple on a picnic blanket.

2:
This is an image of a vegetable stand.
```

## IDEFICS instruct for conversational use

For conversational use cases, you can find fine-tuned instructed versions of the model on the 🤗 Hub: `HuggingFaceM4/idefics-80b-instruct` and `HuggingFaceM4/idefics-9b-instruct`.

These checkpoints are the result of fine-tuning the respective base models on a mixture of supervised and instruction fine-tuning datasets, which boosts the downstream performance while making the models more usable in conversational settings.

Usage and prompting for conversational use are very similar to using the base models:

```
>>> import torch
>>> from transformers import IdeficsForVisionText2Text, AutoProcessor

>>> device = "cuda" if torch.cuda.is_available() else "cpu"

>>> checkpoint = "HuggingFaceM4/idefics-9b-instruct"
>>> model = IdeficsForVisionText2Text.from_pretrained(checkpoint, torch_dtype=torch.bfloat16).to(device)
>>> processor = AutoProcessor.from_pretrained(checkpoint)

>>> prompts = [
...     [
...         "User: What is in this image?",
...         "https://upload.wikimedia.org/wikipedia/commons/8/86/Id%C3%A9fix.JPG",
...         "<end_of_utterance>",
...         "\nAssistant: This picture depicts Idefix, the dog of Obelix in Asterix and Obelix. Idefix is running on the ground.<end_of_utterance>",
...         "\nUser:",
...         "https://static.wikia.nocookie.net/asterix/images/2/25/R22b.gif/revision/latest?cb=20110815073052",
...         "And who is that?<end_of_utterance>",
...         "\nAssistant:",
...     ],
... ]

>>> # batched mode: the processor accepts a list of prompts
>>> inputs = processor(prompts, add_end_of_utterance_token=False, return_tensors="pt").to(device)

>>> # generation arguments
>>> exit_condition = processor.tokenizer("<end_of_utterance>", add_special_tokens=False).input_ids
>>> bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids

>>> generated_ids = model.generate(**inputs, eos_token_id=exit_condition, bad_words_ids=bad_words_ids, max_length=100)
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)
>>> for i, t in enumerate(generated_text):
...     print(f"{i}:\n{t}\n")
```
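The decoded text contains the whole conversation, prompt included. If you only want the model’s new reply, a minimal sketch (assuming `generated_text` from the snippet above) is to split on the final `Assistant:` marker:

```
>>> # Minimal sketch: everything after the last "Assistant:" marker is the newly generated reply.
>>> reply = generated_text[0].split("\nAssistant:")[-1].strip()
>>> print(reply)
```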
<!DOCTYPE html><html class=""><head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no"> <meta name="description" content="We’re on a journey to advance and democratize artificial intelligence through open source and open science."> <meta property="fb:app_id" content="1321688464574422"> <meta name="twitter:card" content="summary_large_image"> <meta name="twitter:site" content="@huggingface"> <meta property="og:title" content="Image tasks with IDEFICS"> <meta property="og:type" content="website"> <meta property="og:url" content="https://huggingface.co/docs/transformers/v4.34.0/en/tasks/idefics"> <meta property="og:image" content="https://huggingface.co/front/thumbnails/docs/transformers.png"> <link rel="stylesheet" href="/front/build/kube-b0520c1/style.css"> <link rel="preconnect" href="https://fonts.gstatic.com"> <link href="https://fonts.googleapis.com/css2?family=Source+Sans+Pro:ital,wght@0,200;0,300;0,400;0,600;0,700;0,900;1,200;1,300;1,400;1,600;1,700;1,900&amp;display=swap" rel="stylesheet"> <link href="https://fonts.googleapis.com/css2?family=IBM+Plex+Mono:wght@400;600;700&amp;display=swap" rel="stylesheet"> <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/KaTeX/0.12.0/katex.min.css" as="style" onload="this.onload=null;this.rel='stylesheet'"> <noscript> <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/KaTeX/0.12.0/katex.min.css" /> </noscript> <title>Image tasks with IDEFICS</title> <script async="" src="https://www.google-analytics.com/analytics.js"></script><script defer="" data-domain="huggingface.co" src="/js/script.js"></script> <script src="https://js.stripe.com/v3/" async=""></script><script src="https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL" async=""></script><link rel="stylesheet" href="/docs/transformers/v4.34.0/en/_app/immutable/assets/0.e3b0c442.css"><link rel="modulepreload" as="script" crossorigin="" href="/docs/transformers/v4.34.0/en/_app/immutable/nodes/1.38c5c2f6.js"><meta http-equiv="origin-trial" content="AymqwRC7u88Y4JPvfIF2F37QKylC04248hLCdJAsh8xgOfe/dVJPV3XS3wLFca1ZMVOtnBfVjaCMTVudWM//5g4AAAB7eyJvcmlnaW4iOiJodHRwczovL3d3dy5nb29nbGV0YWdtYW5hZ2VyLmNvbTo0NDMiLCJmZWF0dXJlIjoiUHJpdmFjeVNhbmRib3hBZHNBUElzIiwiZXhwaXJ5IjoxNjk1MTY3OTk5LCJpc1RoaXJkUGFydHkiOnRydWV9"><meta name="hf:doc:metadata" content="{&quot;local&quot;:&quot;image-tasks-with-idefics&quot;,&quot;sections&quot;:[{&quot;local&quot;:&quot;loading-the-model&quot;,&quot;sections&quot;:[{&quot;local&quot;:&quot;quantized-model&quot;,&quot;title&quot;:&quot;Quantized model&quot;}],&quot;title&quot;:&quot;Loading the model&quot;},{&quot;local&quot;:&quot;image-captioning&quot;,&quot;title&quot;:&quot;Image captioning&quot;},{&quot;local&quot;:&quot;prompted-image-captioning&quot;,&quot;title&quot;:&quot;Prompted image captioning&quot;},{&quot;local&quot;:&quot;fewshot-prompting&quot;,&quot;title&quot;:&quot;Few-shot prompting&quot;},{&quot;local&quot;:&quot;visual-question-answering&quot;,&quot;title&quot;:&quot;Visual question answering&quot;},{&quot;local&quot;:&quot;image-classification&quot;,&quot;title&quot;:&quot;Image classification&quot;},{&quot;local&quot;:&quot;imageguided-text-generation&quot;,&quot;title&quot;:&quot;Image-guided text generation&quot;},{&quot;local&quot;:&quot;running-inference-in-batch-mode&quot;,&quot;title&quot;:&quot;Running inference in batch mode&quot;},{&quot;local&quot;:&quot;idefics-instruct-for-conversational-use&quot;,&quot;title&quot;:&quot;IDEFICS instruct for 
conversational use&quot;}],&quot;title&quot;:&quot;Image tasks with IDEFICS&quot;}"></head> <body class="flex flex-col min-h-screen bg-white dark:bg-gray-950 text-black DocBuilderPage"> <div class="flex min-h-screen flex-col"> <div class="SVELTE_HYDRATER contents" data-props="{&quot;classNames&quot;:&quot;&quot;,&quot;isWide&quot;:true,&quot;isZh&quot;:false}" data-target="MainHeader"><header class="border-b border-gray-100 "><div class="w-full px-4 flex h-16 items-center"><div class="flex flex-1 items-center"><a class="mr-5 flex flex-none items-center lg:mr-6" href="/"><img alt="Hugging Face's logo" class="w-7 md:mr-2" src="/front/assets/huggingface_logo-noborder.svg"> <span class="hidden whitespace-nowrap text-lg font-bold md:block">Hugging Face</span></a> <div class="relative flex-1 lg:max-w-sm mr-2 sm:mr-4 lg:mr-6"><input autocomplete="off" class="w-full dark:bg-gray-950 pl-8 form-input-alt h-9 pr-3 focus:shadow-xl" name="" placeholder="Search models, datasets, users..." spellcheck="false" type="text"> <svg class="absolute left-2.5 text-gray-400 top-1/2 transform -translate-y-1/2" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg> </div> <div class="flex flex-none items-center justify-center p-0.5 place-self-stretch lg:hidden"><button class="relative z-40 flex h-6 w-8 items-center justify-center" type="button"><svg width="1em" height="1em" viewBox="0 0 10 10" class="text-xl" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" preserveAspectRatio="xMidYMid meet" fill="currentColor"><path fill-rule="evenodd" clip-rule="evenodd" d="M1.65039 2.9999C1.65039 2.8066 1.80709 2.6499 2.00039 2.6499H8.00039C8.19369 2.6499 8.35039 2.8066 8.35039 2.9999C8.35039 3.1932 8.19369 3.3499 8.00039 3.3499H2.00039C1.80709 3.3499 1.65039 3.1932 1.65039 2.9999ZM1.65039 4.9999C1.65039 4.8066 1.80709 4.6499 2.00039 4.6499H8.00039C8.19369 4.6499 8.35039 4.8066 8.35039 4.9999C8.35039 5.1932 8.19369 5.3499 8.00039 5.3499H2.00039C1.80709 5.3499 1.65039 5.1932 1.65039 4.9999ZM2.00039 6.6499C1.80709 6.6499 1.65039 6.8066 1.65039 6.9999C1.65039 7.1932 1.80709 7.3499 2.00039 7.3499H8.00039C8.19369 7.3499 8.35039 7.1932 8.35039 6.9999C8.35039 6.8066 8.19369 6.6499 8.00039 6.6499H2.00039Z"></path></svg> </button> </div></div> <nav aria-label="Main" class="ml-auto hidden lg:block"><ul class="flex items-center space-x-2"><li><a class="group flex items-center px-2 py-0.5 dark:hover:text-gray-400 hover:text-indigo-700" href="/models"><svg class="mr-1.5 text-gray-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 
3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg> Models</a></li><li><a class="group flex items-center px-2 py-0.5 dark:hover:text-gray-400 hover:text-red-700" href="/datasets"><svg class="mr-1.5 text-gray-400 group-hover:text-red-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 25 25"><ellipse cx="12.5" cy="5" fill="currentColor" fill-opacity="0.25" rx="7.5" ry="2"></ellipse><path d="M12.5 15C16.6421 15 20 14.1046 20 13V20C20 21.1046 16.6421 22 12.5 22C8.35786 22 5 21.1046 5 20V13C5 14.1046 8.35786 15 12.5 15Z" fill="currentColor" opacity="0.5"></path><path d="M12.5 7C16.6421 7 20 6.10457 20 5V11.5C20 12.6046 16.6421 13.5 12.5 13.5C8.35786 13.5 5 12.6046 5 11.5V5C5 6.10457 8.35786 7 12.5 7Z" fill="currentColor" opacity="0.5"></path><path d="M5.23628 12C5.08204 12.1598 5 12.8273 5 13C5 14.1046 8.35786 15 12.5 15C16.6421 15 20 14.1046 20 13C20 12.8273 19.918 12.1598 19.7637 12C18.9311 12.8626 15.9947 13.5 12.5 13.5C9.0053 13.5 6.06886 12.8626 5.23628 12Z" fill="currentColor"></path></svg> Datasets</a></li><li><a class="group flex items-center px-2 py-0.5 dark:hover:text-gray-400 hover:text-blue-700" href="/spaces"><svg class="mr-1.5 text-gray-400 group-hover:text-blue-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" viewBox="0 0 25 25"><path opacity=".5" d="M6.016 14.674v4.31h4.31v-4.31h-4.31ZM14.674 14.674v4.31h4.31v-4.31h-4.31ZM6.016 6.016v4.31h4.31v-4.31h-4.31Z" fill="currentColor"></path><path opacity=".75" fill-rule="evenodd" clip-rule="evenodd" d="M3 4.914C3 3.857 3.857 3 4.914 3h6.514c.884 0 1.628.6 1.848 1.414a5.171 5.171 0 0 1 7.31 7.31c.815.22 1.414.964 1.414 1.848v6.514A1.914 1.914 0 0 1 20.086 22H4.914A1.914 1.914 0 0 1 3 20.086V4.914Zm3.016 1.102v4.31h4.31v-4.31h-4.31Zm0 12.968v-4.31h4.31v4.31h-4.31Zm8.658 0v-4.31h4.31v4.31h-4.31Zm0-10.813a2.155 2.155 0 1 1 4.31 0 2.155 2.155 0 0 1-4.31 0Z" fill="currentColor"></path><path opacity=".25" d="M16.829 6.016a2.155 2.155 0 1 0 0 4.31 2.155 2.155 0 0 0 0-4.31Z" fill="currentColor"></path></svg> Spaces</a></li><li><a class="group flex items-center px-2 py-0.5 dark:hover:text-gray-400 hover:text-yellow-700" href="/docs"><svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" class="mr-1.5 text-gray-400 group-hover:text-yellow-500" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path opacity="0.5" d="M20.9022 5.10334L10.8012 10.8791L7.76318 9.11193C8.07741 8.56791 8.5256 8.11332 9.06512 7.7914L15.9336 3.73907C17.0868 3.08811 18.5002 3.26422 19.6534 3.91519L19.3859 3.73911C19.9253 4.06087 20.5879 4.56025 20.9022 5.10334Z" fill="currentColor"></path><path d="M10.7999 10.8792V28.5483C10.2136 28.5475 9.63494 28.4139 9.10745 28.1578C8.5429 27.8312 8.074 27.3621 7.74761 26.7975C7.42122 26.2327 7.24878 25.5923 7.24756 24.9402V10.9908C7.25062 10.3319 7.42358 9.68487 7.74973 9.1123L10.7999 10.8792Z" fill="currentColor" fill-opacity="0.75"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M21.3368 10.8499V6.918C21.3331 6.25959 21.16 5.61234 20.8346 5.03949L10.7971 10.8727L10.8046 10.874L21.3368 10.8499Z" fill="currentColor"></path><path opacity="0.5" d="M21.7937 10.8488L10.7825 10.8741V28.5486L21.7937 28.5234C23.3344 28.5234 24.5835 27.2743 
24.5835 25.7335V13.6387C24.5835 12.0979 23.4365 11.1233 21.7937 10.8488Z" fill="currentColor"></path></svg> Docs</a></li> <li><div class="relative "><button class="px-2 py-0.5 group hover:text-green-700 dark:hover:text-gray-400 flex items-center " type="button"><svg class="mr-1.5 text-gray-400 group-hover:text-green-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-tertiary" d="M19 6H5a3 3 0 0 0-3 3v2.72L8.837 14h6.326L22 11.72V9a3 3 0 0 0-3-3z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M10 6V5h4v1h2V5a2.002 2.002 0 0 0-2-2h-4a2.002 2.002 0 0 0-2 2v1h2zm-1.163 8L2 11.72V18a3.003 3.003 0 0 0 3 3h14a3.003 3.003 0 0 0 3-3v-6.28L15.163 14H8.837z" fill="currentColor"></path></svg> Solutions </button> </div></li> <li><a class="group flex items-center px-2 py-0.5 hover:text-gray-500 dark:hover:text-gray-400" href="/pricing">Pricing</a></li> <li><div class="relative group"><button class="px-2 py-0.5 hover:text-gray-500 dark:hover:text-gray-600 flex items-center " type="button"><svg class="mr-1.5 text-gray-500 w-5 group-hover:text-gray-400 dark:text-gray-300 dark:group-hover:text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" viewBox="0 0 32 18" preserveAspectRatio="xMidYMid meet"><path fill-rule="evenodd" clip-rule="evenodd" d="M14.4504 3.30221C14.4504 2.836 14.8284 2.45807 15.2946 2.45807H28.4933C28.9595 2.45807 29.3374 2.836 29.3374 3.30221C29.3374 3.76842 28.9595 4.14635 28.4933 4.14635H15.2946C14.8284 4.14635 14.4504 3.76842 14.4504 3.30221Z" fill="currentColor"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M14.4504 9.00002C14.4504 8.53382 14.8284 8.15588 15.2946 8.15588H28.4933C28.9595 8.15588 29.3374 8.53382 29.3374 9.00002C29.3374 9.46623 28.9595 9.84417 28.4933 9.84417H15.2946C14.8284 9.84417 14.4504 9.46623 14.4504 9.00002Z" fill="currentColor"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M14.4504 14.6978C14.4504 14.2316 14.8284 13.8537 15.2946 13.8537H28.4933C28.9595 13.8537 29.3374 14.2316 29.3374 14.6978C29.3374 15.164 28.9595 15.542 28.4933 15.542H15.2946C14.8284 15.542 14.4504 15.164 14.4504 14.6978Z" fill="currentColor"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M1.94549 6.87377C2.27514 6.54411 2.80962 6.54411 3.13928 6.87377L6.23458 9.96907L9.32988 6.87377C9.65954 6.54411 10.194 6.54411 10.5237 6.87377C10.8533 7.20343 10.8533 7.73791 10.5237 8.06756L6.23458 12.3567L1.94549 8.06756C1.61583 7.73791 1.61583 7.20343 1.94549 6.87377Z" fill="currentColor"></path></svg> </button> </div></li> <li><hr class="h-5 w-0.5 border-none bg-gray-100 dark:bg-gray-800"></li> <li><a class="block cursor-pointer px-2 py-0.5 hover:text-gray-500 dark:hover:text-gray-400" href="/login">Log In</a></li> <li><a class="rounded-full border border-transparent bg-gray-900 px-3 py-1 leading-none text-white hover:border-black hover:bg-white hover:text-black" href="/join">Sign Up</a></li></ul></nav></div></header></div> <div class="SVELTE_HYDRATER contents" data-props="{}" data-target="GoogleAnalyticsTracker"></div> <div class="SVELTE_HYDRATER contents" data-props="{}" data-target="SSOBanner"></div> <main class="flex flex-1 flex-col"><div class="relative lg:flex"><div class="sticky top-0 z-20 self-start"><div class="SVELTE_HYDRATER contents" 
data-props="{&quot;chapters&quot;:[{&quot;title&quot;:&quot;Get started&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;🤗 Transformers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;index&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/index&quot;},{&quot;title&quot;:&quot;Quick tour&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;quicktour&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/quicktour&quot;},{&quot;title&quot;:&quot;Installation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;installation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/installation&quot;}]},{&quot;title&quot;:&quot;Tutorials&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Run inference with pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pipeline_tutorial&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pipeline_tutorial&quot;},{&quot;title&quot;:&quot;Write portable code with AutoClass&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;autoclass_tutorial&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/autoclass_tutorial&quot;},{&quot;title&quot;:&quot;Preprocess data&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;preprocessing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/preprocessing&quot;},{&quot;title&quot;:&quot;Fine-tune a pretrained model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;training&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/training&quot;},{&quot;title&quot;:&quot;Train with a script&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;run_scripts&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/run_scripts&quot;},{&quot;title&quot;:&quot;Set up distributed training with 🤗 Accelerate&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;accelerate&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/accelerate&quot;},{&quot;title&quot;:&quot;Load and train adapters with 🤗 PEFT&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;peft&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/peft&quot;},{&quot;title&quot;:&quot;Share your model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_sharing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_sharing&quot;},{&quot;title&quot;:&quot;Agents&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers_agents&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/transformers_agents&quot;},{&quot;title&quot;:&quot;Generation with LLMs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;llm_tutorial&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/llm_tutorial&quot;}]},{&quot;title&quot;:&quot;Task Guides&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Natural Language Processing&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Text classification&quot;,&quot;id&quot;:&quot;tasks/sequence_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/sequence_classification&quot;},{&quot;title&quot;:&quot;Token classification&quot;,&quot;id&quot;:&quot;tasks/token_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/token_classification&quot;},{&quot;title&quot;:&quot;Question answering&quot;,&quot;id&quot;:&quot;tasks/question_answering&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/question_answering&quot;},{&quot;title&quot;:&quot;Causal language 
modeling&quot;,&quot;id&quot;:&quot;tasks/language_modeling&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/language_modeling&quot;},{&quot;title&quot;:&quot;Masked language modeling&quot;,&quot;id&quot;:&quot;tasks/masked_language_modeling&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/masked_language_modeling&quot;},{&quot;title&quot;:&quot;Translation&quot;,&quot;id&quot;:&quot;tasks/translation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/translation&quot;},{&quot;title&quot;:&quot;Summarization&quot;,&quot;id&quot;:&quot;tasks/summarization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/summarization&quot;},{&quot;title&quot;:&quot;Multiple choice&quot;,&quot;id&quot;:&quot;tasks/multiple_choice&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/multiple_choice&quot;}]},{&quot;title&quot;:&quot;Audio&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio classification&quot;,&quot;id&quot;:&quot;tasks/audio_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/audio_classification&quot;},{&quot;title&quot;:&quot;Automatic speech recognition&quot;,&quot;id&quot;:&quot;tasks/asr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/asr&quot;}]},{&quot;title&quot;:&quot;Computer Vision&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Image classification&quot;,&quot;id&quot;:&quot;tasks/image_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/image_classification&quot;},{&quot;title&quot;:&quot;Semantic segmentation&quot;,&quot;id&quot;:&quot;tasks/semantic_segmentation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/semantic_segmentation&quot;},{&quot;title&quot;:&quot;Video classification&quot;,&quot;id&quot;:&quot;tasks/video_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/video_classification&quot;},{&quot;title&quot;:&quot;Object detection&quot;,&quot;id&quot;:&quot;tasks/object_detection&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/object_detection&quot;},{&quot;title&quot;:&quot;Zero-shot object detection&quot;,&quot;id&quot;:&quot;tasks/zero_shot_object_detection&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/zero_shot_object_detection&quot;},{&quot;title&quot;:&quot;Zero-shot image classification&quot;,&quot;id&quot;:&quot;tasks/zero_shot_image_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/zero_shot_image_classification&quot;},{&quot;title&quot;:&quot;Depth estimation&quot;,&quot;id&quot;:&quot;tasks/monocular_depth_estimation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/monocular_depth_estimation&quot;}]},{&quot;title&quot;:&quot;Multimodal&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Image captioning&quot;,&quot;id&quot;:&quot;tasks/image_captioning&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/image_captioning&quot;},{&quot;title&quot;:&quot;Document Question Answering&quot;,&quot;id&quot;:&quot;tasks/document_question_answering&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/document_question_answering&quot;},{&quot;title&quot;:&quot;Visual Question Answering&quot;,&quot;id&quot;:&quot;tasks/visual_question_answering&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/visual_question_answering&quot;},{&quot;title&quot;:&quot;Text to 
speech&quot;,&quot;id&quot;:&quot;tasks/text-to-speech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/text-to-speech&quot;}]},{&quot;title&quot;:&quot;Generation&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Customize the generation strategy&quot;,&quot;id&quot;:&quot;generation_strategies&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/generation_strategies&quot;}]},{&quot;title&quot;:&quot;Prompting&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Image tasks with IDEFICS&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tasks/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/idefics&quot;}]}]},{&quot;title&quot;:&quot;Developer guides&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Use fast tokenizers from 🤗 Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;fast_tokenizers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/fast_tokenizers&quot;},{&quot;title&quot;:&quot;Run inference with multilingual models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;multilingual&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/multilingual&quot;},{&quot;title&quot;:&quot;Use model-specific APIs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;create_a_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/create_a_model&quot;},{&quot;title&quot;:&quot;Share a custom model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;custom_models&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/custom_models&quot;},{&quot;title&quot;:&quot;Templates for chat models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;chat_templating&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/chat_templating&quot;},{&quot;title&quot;:&quot;Run training on Amazon SageMaker&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;sagemaker&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/sagemaker&quot;},{&quot;title&quot;:&quot;Export to ONNX&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;serialization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/serialization&quot;},{&quot;title&quot;:&quot;Export to TFLite&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tflite&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tflite&quot;},{&quot;title&quot;:&quot;Export to TorchScript&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;torchscript&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/torchscript&quot;},{&quot;title&quot;:&quot;Benchmarks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;benchmarks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/benchmarks&quot;},{&quot;title&quot;:&quot;Notebooks with examples&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;notebooks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/notebooks&quot;},{&quot;title&quot;:&quot;Community resources&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;community&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/community&quot;},{&quot;title&quot;:&quot;Custom Tools and 
Prompts&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;custom_tools&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/custom_tools&quot;},{&quot;title&quot;:&quot;Troubleshoot&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;troubleshooting&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/troubleshooting&quot;}]},{&quot;title&quot;:&quot;Performance and scalability&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Overview&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;performance&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/performance&quot;},{&quot;title&quot;:&quot;Efficient training techniques&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Methods and tools for efficient training on a single GPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_gpu_one&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_gpu_one&quot;},{&quot;title&quot;:&quot;Multiple GPUs and parallelism&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_gpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_gpu_many&quot;},{&quot;title&quot;:&quot;Efficient training on CPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_cpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_cpu&quot;},{&quot;title&quot;:&quot;Distributed CPU training&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_cpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_cpu_many&quot;},{&quot;title&quot;:&quot;Training on TPUs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_tpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_tpu&quot;},{&quot;title&quot;:&quot;Training on TPU with TensorFlow&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_tpu_tf&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_tpu_tf&quot;},{&quot;title&quot;:&quot;Training on Specialized Hardware&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_special&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_special&quot;},{&quot;title&quot;:&quot;Custom hardware for training&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_hardware&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_hardware&quot;},{&quot;title&quot;:&quot;Hyperparameter Search using Trainer API&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;hpo_train&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/hpo_train&quot;}]},{&quot;title&quot;:&quot;Optimizing inference&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Inference on CPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_cpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_cpu&quot;},{&quot;title&quot;:&quot;Inference on one GPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_gpu_one&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_gpu_one&quot;},{&quot;title&quot;:&quot;Inference on many GPUs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_gpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_gpu_many&quot;},{&quot;title&quot;:&quot;Inference on Specialized 
Hardware&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_special&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_special&quot;}]},{&quot;title&quot;:&quot;Instantiating a big model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;big_models&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/big_models&quot;},{&quot;title&quot;:&quot;Troubleshooting&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;debugging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/debugging&quot;},{&quot;title&quot;:&quot;XLA Integration for TensorFlow Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tf_xla&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tf_xla&quot;},{&quot;title&quot;:&quot;Optimize inference using `torch.compile()`&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_torch_compile&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_torch_compile&quot;}]},{&quot;title&quot;:&quot;Contribute&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;How to contribute to transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;contributing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/contributing&quot;},{&quot;title&quot;:&quot;How to add a model to 🤗 Transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_new_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_new_model&quot;},{&quot;title&quot;:&quot;How to convert a 🤗 Transformers model to TensorFlow?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_tensorflow_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_tensorflow_model&quot;},{&quot;title&quot;:&quot;How to add a pipeline to 🤗 Transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_new_pipeline&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_new_pipeline&quot;},{&quot;title&quot;:&quot;Testing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;testing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/testing&quot;},{&quot;title&quot;:&quot;Checks on a Pull Request&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pr_checks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pr_checks&quot;}]},{&quot;title&quot;:&quot;Conceptual guides&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Philosophy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;philosophy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/philosophy&quot;},{&quot;title&quot;:&quot;Glossary&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;glossary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/glossary&quot;},{&quot;title&quot;:&quot;What 🤗 Transformers can do&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;task_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/task_summary&quot;},{&quot;title&quot;:&quot;How 🤗 Transformers solve tasks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tasks_explained&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks_explained&quot;},{&quot;title&quot;:&quot;The Transformer model family&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_summary&quot;},{&quot;title&quot;:&quot;Summary of the 
tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tokenizer_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tokenizer_summary&quot;},{&quot;title&quot;:&quot;Attention mechanisms&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;attention&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/attention&quot;},{&quot;title&quot;:&quot;Padding and truncation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pad_truncation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pad_truncation&quot;},{&quot;title&quot;:&quot;BERTology&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;bertology&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/bertology&quot;},{&quot;title&quot;:&quot;Perplexity of fixed-length models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perplexity&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perplexity&quot;},{&quot;title&quot;:&quot;Pipelines for webserver inference&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pipeline_webserver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pipeline_webserver&quot;},{&quot;title&quot;:&quot;Model training anatomy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_memory_anatomy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_memory_anatomy&quot;}]},{&quot;title&quot;:&quot;API&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Main Classes&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Agents and Tools&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/agent&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/agent&quot;},{&quot;title&quot;:&quot;Auto Classes&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/auto&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/auto&quot;},{&quot;title&quot;:&quot;Callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/callback&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/callback&quot;},{&quot;title&quot;:&quot;Configuration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/configuration&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/configuration&quot;},{&quot;title&quot;:&quot;Data Collator&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/data_collator&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/data_collator&quot;},{&quot;title&quot;:&quot;Keras callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/keras_callbacks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/keras_callbacks&quot;},{&quot;title&quot;:&quot;Logging&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/logging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/logging&quot;},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/model&quot;},{&quot;title&quot;:&quot;Text 
Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/text_generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/text_generation&quot;},{&quot;title&quot;:&quot;ONNX&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/onnx&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/onnx&quot;},{&quot;title&quot;:&quot;Optimization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/optimizer_schedules&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/optimizer_schedules&quot;},{&quot;title&quot;:&quot;Model outputs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/output&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/output&quot;},{&quot;title&quot;:&quot;Pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/pipelines&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/pipelines&quot;},{&quot;title&quot;:&quot;Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/processors&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/processors&quot;},{&quot;title&quot;:&quot;Quantization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/quantization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/quantization&quot;},{&quot;title&quot;:&quot;Tokenizer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/tokenizer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/tokenizer&quot;},{&quot;title&quot;:&quot;Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/trainer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/trainer&quot;},{&quot;title&quot;:&quot;DeepSpeed Integration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/deepspeed&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/deepspeed&quot;},{&quot;title&quot;:&quot;Feature Extractor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/feature_extractor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/feature_extractor&quot;},{&quot;title&quot;:&quot;Image Processor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/image_processor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/image_processor&quot;}]},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Text 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALBERT&quot;,&quot;id&quot;:&quot;model_doc/albert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/albert&quot;},{&quot;title&quot;:&quot;BART&quot;,&quot;id&quot;:&quot;model_doc/bart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bart&quot;},{&quot;title&quot;:&quot;BARThez&quot;,&quot;id&quot;:&quot;model_doc/barthez&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/barthez&quot;},{&quot;title&quot;:&quot;BARTpho&quot;,&quot;id&quot;:&quot;model_doc/bartpho&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bartpho&quot;},{&quot;title&quot;:&quot;BERT&quot;,&quot;id&quot;:&quot;model_doc/bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert&quot;},{&quot;title&quot;:&quot;BertGeneration&quot;,&quot;id&quot;:&quot;model_doc/bert-generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-generation&quot;},{&quot;title&quot;:&quot;BertJapanese&quot;,&quot;id&quot;:&quot;model_doc/bert-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-japanese&quot;},{&quot;title&quot;:&quot;Bertweet&quot;,&quot;id&quot;:&quot;model_doc/bertweet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bertweet&quot;},{&quot;title&quot;:&quot;BigBird&quot;,&quot;id&quot;:&quot;model_doc/big_bird&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/big_bird&quot;},{&quot;title&quot;:&quot;BigBirdPegasus&quot;,&quot;id&quot;:&quot;model_doc/bigbird_pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bigbird_pegasus&quot;},{&quot;title&quot;:&quot;BioGpt&quot;,&quot;id&quot;:&quot;model_doc/biogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/biogpt&quot;},{&quot;title&quot;:&quot;Blenderbot&quot;,&quot;id&quot;:&quot;model_doc/blenderbot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot&quot;},{&quot;title&quot;:&quot;Blenderbot 
Small&quot;,&quot;id&quot;:&quot;model_doc/blenderbot-small&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot-small&quot;},{&quot;title&quot;:&quot;BLOOM&quot;,&quot;id&quot;:&quot;model_doc/bloom&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bloom&quot;},{&quot;title&quot;:&quot;BORT&quot;,&quot;id&quot;:&quot;model_doc/bort&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bort&quot;},{&quot;title&quot;:&quot;ByT5&quot;,&quot;id&quot;:&quot;model_doc/byt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/byt5&quot;},{&quot;title&quot;:&quot;CamemBERT&quot;,&quot;id&quot;:&quot;model_doc/camembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/camembert&quot;},{&quot;title&quot;:&quot;CANINE&quot;,&quot;id&quot;:&quot;model_doc/canine&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/canine&quot;},{&quot;title&quot;:&quot;CodeGen&quot;,&quot;id&quot;:&quot;model_doc/codegen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/codegen&quot;},{&quot;title&quot;:&quot;CodeLlama&quot;,&quot;id&quot;:&quot;model_doc/code_llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/code_llama&quot;},{&quot;title&quot;:&quot;ConvBERT&quot;,&quot;id&quot;:&quot;model_doc/convbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convbert&quot;},{&quot;title&quot;:&quot;CPM&quot;,&quot;id&quot;:&quot;model_doc/cpm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpm&quot;},{&quot;title&quot;:&quot;CPMANT&quot;,&quot;id&quot;:&quot;model_doc/cpmant&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpmant&quot;},{&quot;title&quot;:&quot;CTRL&quot;,&quot;id&quot;:&quot;model_doc/ctrl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ctrl&quot;},{&quot;title&quot;:&quot;DeBERTa&quot;,&quot;id&quot;:&quot;model_doc/deberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta&quot;},{&quot;title&quot;:&quot;DeBERTa-v2&quot;,&quot;id&quot;:&quot;model_doc/deberta-v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta-v2&quot;},{&quot;title&quot;:&quot;DialoGPT&quot;,&quot;id&quot;:&quot;model_doc/dialogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dialogpt&quot;},{&quot;title&quot;:&quot;DistilBERT&quot;,&quot;id&quot;:&quot;model_doc/distilbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/distilbert&quot;},{&quot;title&quot;:&quot;DPR&quot;,&quot;id&quot;:&quot;model_doc/dpr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpr&quot;},{&quot;title&quot;:&quot;ELECTRA&quot;,&quot;id&quot;:&quot;model_doc/electra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/electra&quot;},{&quot;title&quot;:&quot;Encoder Decoder 
Models&quot;,&quot;id&quot;:&quot;model_doc/encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encoder-decoder&quot;},{&quot;title&quot;:&quot;ERNIE&quot;,&quot;id&quot;:&quot;model_doc/ernie&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie&quot;},{&quot;title&quot;:&quot;ErnieM&quot;,&quot;id&quot;:&quot;model_doc/ernie_m&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie_m&quot;},{&quot;title&quot;:&quot;ESM&quot;,&quot;id&quot;:&quot;model_doc/esm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/esm&quot;},{&quot;title&quot;:&quot;Falcon&quot;,&quot;id&quot;:&quot;model_doc/falcon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/falcon&quot;},{&quot;title&quot;:&quot;FLAN-T5&quot;,&quot;id&quot;:&quot;model_doc/flan-t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-t5&quot;},{&quot;title&quot;:&quot;FLAN-UL2&quot;,&quot;id&quot;:&quot;model_doc/flan-ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-ul2&quot;},{&quot;title&quot;:&quot;FlauBERT&quot;,&quot;id&quot;:&quot;model_doc/flaubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flaubert&quot;},{&quot;title&quot;:&quot;FNet&quot;,&quot;id&quot;:&quot;model_doc/fnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fnet&quot;},{&quot;title&quot;:&quot;FSMT&quot;,&quot;id&quot;:&quot;model_doc/fsmt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fsmt&quot;},{&quot;title&quot;:&quot;Funnel Transformer&quot;,&quot;id&quot;:&quot;model_doc/funnel&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/funnel&quot;},{&quot;title&quot;:&quot;GPT&quot;,&quot;id&quot;:&quot;model_doc/openai-gpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/openai-gpt&quot;},{&quot;title&quot;:&quot;GPT Neo&quot;,&quot;id&quot;:&quot;model_doc/gpt_neo&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neo&quot;},{&quot;title&quot;:&quot;GPT NeoX&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox&quot;},{&quot;title&quot;:&quot;GPT NeoX Japanese&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox_japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese&quot;},{&quot;title&quot;:&quot;GPT-J&quot;,&quot;id&quot;:&quot;model_doc/gptj&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptj&quot;},{&quot;title&quot;:&quot;GPT2&quot;,&quot;id&quot;:&quot;model_doc/gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt2&quot;},{&quot;title&quot;:&quot;GPTBigCode&quot;,&quot;id&quot;:&quot;model_doc/gpt_bigcode&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode&quot;},{&quot;title&quot;:&quot;GPTSAN 
Japanese&quot;,&quot;id&quot;:&quot;model_doc/gptsan-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese&quot;},{&quot;title&quot;:&quot;GPTSw3&quot;,&quot;id&quot;:&quot;model_doc/gpt-sw3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt-sw3&quot;},{&quot;title&quot;:&quot;HerBERT&quot;,&quot;id&quot;:&quot;model_doc/herbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/herbert&quot;},{&quot;title&quot;:&quot;I-BERT&quot;,&quot;id&quot;:&quot;model_doc/ibert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ibert&quot;},{&quot;title&quot;:&quot;Jukebox&quot;,&quot;id&quot;:&quot;model_doc/jukebox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/jukebox&quot;},{&quot;title&quot;:&quot;LED&quot;,&quot;id&quot;:&quot;model_doc/led&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/led&quot;},{&quot;title&quot;:&quot;LLaMA&quot;,&quot;id&quot;:&quot;model_doc/llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama&quot;},{&quot;title&quot;:&quot;Llama2&quot;,&quot;id&quot;:&quot;model_doc/llama2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama2&quot;},{&quot;title&quot;:&quot;Longformer&quot;,&quot;id&quot;:&quot;model_doc/longformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longformer&quot;},{&quot;title&quot;:&quot;LongT5&quot;,&quot;id&quot;:&quot;model_doc/longt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longt5&quot;},{&quot;title&quot;:&quot;LUKE&quot;,&quot;id&quot;:&quot;model_doc/luke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/luke&quot;},{&quot;title&quot;:&quot;M2M100&quot;,&quot;id&quot;:&quot;model_doc/m2m_100&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/m2m_100&quot;},{&quot;title&quot;:&quot;MarianMT&quot;,&quot;id&quot;:&quot;model_doc/marian&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/marian&quot;},{&quot;title&quot;:&quot;MarkupLM&quot;,&quot;id&quot;:&quot;model_doc/markuplm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/markuplm&quot;},{&quot;title&quot;:&quot;MBart and 
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot;PLBart&quot;,&quot;id&quot;
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
<div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-gray-500/10 to-gray-500/5"><svg class="text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M14.9804 3C14.9217 3.0002 14.8631 3.00555 14.8054 3.016C11.622 3.58252 8.76073 5.30669 6.77248 7.85653C4.78422 10.4064 3.80955 13.6016 4.03612 16.8271C4.26268 20.0525 5.67447 23.0801 7.99967 25.327C10.3249 27.5738 13.3991 28.8811 16.6304 28.997C16.7944 29.003 16.9584 28.997 17.1204 28.997C19.2193 28.9984 21.2877 28.4943 23.1507 27.5274C25.0137 26.5605 26.6164 25.1592 27.8234 23.442C27.9212 23.294 27.9783 23.1229 27.9889 22.9458C27.9995 22.7687 27.9633 22.592 27.884 22.4333C27.8046 22.2747 27.6848 22.1397 27.5367 22.0421C27.3887 21.9444 27.2175 21.8875 27.0404 21.877C25.0426 21.7017 23.112 21.0693 21.3976 20.0288C19.6832 18.9884 18.231 17.5676 17.1533 15.8764C16.0756 14.1852 15.4011 12.2688 15.1822 10.2754C14.9632 8.28193 15.2055 6.26484 15.8904 4.38C15.9486 4.22913 15.97 4.06652 15.9527 3.90572C15.9354 3.74492 15.8799 3.59059 15.7909 3.45557C15.7019 3.32055 15.5819 3.20877 15.4409 3.12952C15.2999 3.05028 15.142 3.00587 14.9804 3Z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Switch between documentation themes </div></div></div> <div class="flex items-center space-x-2.5"><a href="/join"><button class="rounded-lg bg-white bg-gradient-to-br from-gray-100/20 to-gray-200/60 py-1.5 px-5 font-semibold text-gray-700 shadow-sm ring-1 ring-gray-300/60 hover:to-gray-100/70 hover:ring-gray-300/30 active:shadow-inner">Sign Up</button></a> <p class="text-gray-500 dark:text-gray-300">to get started</p></div></div></div> <div class="prose-doc prose relative mx-auto max-w-4xl break-words"> <p></p> <h1 class="relative group"><a id="image-tasks-with-idefics" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#image-tasks-with-idefics"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1alwpo7">Image tasks with IDEFICS</span></h1> <div class="flex space-x-1 absolute z-10 right-0 top-0"> <div class="relative colab-dropdown "><button class=" " type="button"><img alt="Open In Colab" class="!m-0" src="https://colab.research.google.com/assets/colab-badge.svg"></button> </div> <div class="relative colab-dropdown "><button class=" " type="button"><img alt="Open In Studio Lab" class="!m-0" src="https://studiolab.sagemaker.aws/studiolab.svg"></button> </div></div> <p 
data-svelte-h="svelte-11wnsr">While individual tasks can be tackled by fine-tuning specialized models, an alternative approach that has recently emerged and gained popularity is to use large models for a diverse set of tasks without fine-tuning. For instance, large language models can handle such NLP tasks as summarization, translation, classification, and more. This approach is no longer limited to a single modality, such as text, and in this guide, we will illustrate how you can solve image-text tasks with a large multimodal model called IDEFICS.</p> <p data-svelte-h="svelte-1nt7nk6"><a href="../model_doc/idefics">IDEFICS</a> is an open-access vision and language model based on <a href="https://huggingface.co/papers/2204.14198" rel="nofollow">Flamingo</a>, a state-of-the-art visual language model initially developed by DeepMind. The model accepts arbitrary sequences of image and text inputs and generates coherent text as output. It can answer questions about images, describe visual content, create stories grounded in multiple images, and so on. IDEFICS comes in two variants - <a href="https://huggingface.co/HuggingFaceM4/idefics-80b" rel="nofollow">80 billion parameters</a> and <a href="https://huggingface.co/HuggingFaceM4/idefics-9b" rel="nofollow">9 billion parameters</a>, both of which are available on the 🤗 Hub. For each variant, you can also find fine-tuned instructed versions of the model adapted for conversational use cases.</p> <p data-svelte-h="svelte-1k5g1sg">This model is exceptionally versatile and can be used for a wide range of image and multimodal tasks. However, being a large model means it requires significant computational resources and infrastructure. It is up to you to decide whether this approach suits your use case better than fine-tuning specialized models for each individual task.</p> <p data-svelte-h="svelte-fp1hdi">In this guide, you’ll learn how to:</p> <ul data-svelte-h="svelte-187zpo0"><li><a href="#loading-the-model">Load IDEFICS</a> and <a href="#loading-the-quantized-version-of-the-model">load the quantized version of the model</a></li> <li>Use IDEFICS for: <ul><li><a href="#image-captioning">Image captioning</a></li> <li><a href="#prompted-image-captioning">Prompted image captioning</a></li> <li><a href="#few-shot-prompting">Few-shot prompting</a></li> <li><a href="#visual-question-answering">Visual question answering</a></li> <li><a href="#image-classification">Image classificaiton</a></li> <li><a href="#image-guided-text-generation">Image-guided text generation</a></li></ul></li> <li><a href="#running-inference-in-batch-mode">Run inference in batch mode</a></li> <li><a href="#idefics-instruct-for-conversational-use">Run IDEFICS instruct for conversational use</a></li></ul> <p data-svelte-h="svelte-qn4ey1">Before you begin, make sure you have all the necessary libraries installed.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" 
transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class="">pip install -q bitsandbytes sentencepiece accelerate transformers</pre></div> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400">To run the following examples with a non-quantized version of the model checkpoint you will need at least 20GB of GPU memory.</div> <h2 class="relative group"><a id="loading-the-model" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#loading-the-model"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-5o4h2t">Loading the model</span></h2> <p data-svelte-h="svelte-pbsmkm">Let’s start by loading the model’s 9 billion parameters checkpoint:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>checkpoint = <span class="hljs-string">"HuggingFaceM4/idefics-9b"</span></pre></div> <p data-svelte-h="svelte-g97fg1">Just like for other Transformers models, you need to 
Just like for other Transformers models, you need to load a processor and the model itself from the checkpoint. The IDEFICS processor wraps a [LlamaTokenizer](/docs/transformers/v4.34.0/en/model_doc/llama2#transformers.LlamaTokenizer) and IDEFICS image processor into a single processor to take care of preparing text and image inputs for the model.

```py
>>> import torch
>>> from transformers import IdeficsForVisionText2Text, AutoProcessor

>>> processor = AutoProcessor.from_pretrained(checkpoint)
>>> model = IdeficsForVisionText2Text.from_pretrained(checkpoint, torch_dtype=torch.bfloat16, device_map="auto")
```

Setting `device_map` to `"auto"` will automatically determine how to load and store the model weights in the most optimized manner given existing devices.

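If you are curious where the weights actually ended up, models dispatched with `device_map="auto"` typically expose the chosen placement through an `hf_device_map` attribute. A quick check, sketched under the assumption that the attribute is populated here as it is for other Accelerate-dispatched models:

```py
>>> # Inspect the device placement that device_map="auto" decided on
>>> print(model.hf_device_map)
```
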
### Quantized model

If high-memory GPU availability is an issue, you can load the quantized version of the model. To load the model and the processor in 4bit precision, pass a `BitsAndBytesConfig` to the `from_pretrained` method and the model will be compressed on the fly while loading.

```py
>>> import torch
>>> from transformers import IdeficsForVisionText2Text, AutoProcessor, BitsAndBytesConfig

>>> quantization_config = BitsAndBytesConfig(
...     load_in_4bit=True,
...     bnb_4bit_compute_dtype=torch.float16,
... )

>>> processor = AutoProcessor.from_pretrained(checkpoint)
>>> model = IdeficsForVisionText2Text.from_pretrained(
...     checkpoint,
...     quantization_config=quantization_config,
...     device_map="auto"
... )
```

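To get a rough sense of how much memory the 4-bit weights occupy, you can query the model's footprint. A minimal sketch using `get_memory_footprint()`, which Transformers models provide:

```py
>>> # Approximate size of the loaded (quantized) weights in GB
>>> print(f"{model.get_memory_footprint() / 1024**3:.2f} GB")
```
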
Now that you have the model loaded in one of the suggested ways, let's move on to exploring tasks that you can use IDEFICS for.

## Image captioning

Image captioning is the task of predicting a caption for a given image. A common application is to help visually impaired people navigate through different situations, for instance, by exploring image content online.

To illustrate the task, get an image to be captioned, e.g.:

![Image of a puppy in a flower bed](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/idefics-im-captioning.jpg)

Photo by [Hendo Wang](https://unsplash.com/@hendoo).

IDEFICS accepts text and image prompts. However, to caption an image, you do not have to provide a text prompt to the model, only the preprocessed input image. Without a text prompt, the model will start generating text from the BOS (beginning-of-sequence) token, thus creating a caption.

As image input to the model, you can use either an image object (`PIL.Image`) or a URL from which the image can be retrieved.

```py
>>> prompt = [
...     "https://images.unsplash.com/photo-1583160247711-2191776b4b91?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3542&q=80",
... ]

>>> inputs = processor(prompt, return_tensors="pt").to("cuda")
>>> bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids
>>> generated_ids = model.generate(**inputs, max_new_tokens=10, bad_words_ids=bad_words_ids)
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)
>>> print(generated_text[0])
A puppy in a flower bed
```

> It is a good idea to include the `bad_words_ids` in the call to `generate` to avoid errors arising when increasing the `max_new_tokens`: the model will want to generate a new `<image>` or `<fake_token_around_image>` token when there is no image being generated by the model. You can set it on-the-fly as in this guide, or store it in the `GenerationConfig` as described in the [Text generation strategies](../generation_strategies) guide.

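If you prefer not to pass `bad_words_ids` on every call, one option is to store it on the model's generation config instead. A minimal sketch; the attribute assignment below is one possible way to wire this up, not a required step:

```py
>>> # Persist the banned image tokens so later generate() calls pick them up automatically
>>> model.generation_config.bad_words_ids = bad_words_ids
>>> generated_ids = model.generate(**inputs, max_new_tokens=10)
```
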
## Prompted image captioning

You can extend image captioning by providing a text prompt, which the model will continue given the image. Let's take another image to illustrate:

![Image of the Eiffel Tower at night](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/idefics-prompted-im-captioning.jpg)

Photo by [Denys Nevozhai](https://unsplash.com/@dnevozhai).

Textual and image prompts can be passed to the model's processor as a single list to create appropriate inputs.

```py
>>> prompt = [
...     "https://images.unsplash.com/photo-1543349689-9a4d426bee8e?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3501&q=80",
...     "This is an image of ",
... ]

>>> inputs = processor(prompt, return_tensors="pt").to("cuda")
>>> bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids
>>> generated_ids = model.generate(**inputs, max_new_tokens=10, bad_words_ids=bad_words_ids)
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)
>>> print(generated_text[0])
This is an image of the Eiffel Tower in Paris, France.
```

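Both captioning examples pass the image as a URL, but as noted earlier a `PIL.Image` object works just as well. A minimal sketch that downloads the photo first and feeds the image object to the processor; the download step itself is only illustrative:

```py
>>> import requests
>>> from PIL import Image

>>> image_url = "https://images.unsplash.com/photo-1543349689-9a4d426bee8e?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3501&q=80"
>>> image = Image.open(requests.get(image_url, stream=True).raw)

>>> inputs = processor([image, "This is an image of "], return_tensors="pt").to("cuda")
>>> generated_ids = model.generate(**inputs, max_new_tokens=10, bad_words_ids=bad_words_ids)
>>> print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```
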
</span> <span class="hljs-string">"This is an image of "</span>, <span class="hljs-meta">... </span>] <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = processor(prompt, return_tensors=<span class="hljs-string">"pt"</span>).to(<span class="hljs-string">"cuda"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>bad_words_ids = processor.tokenizer([<span class="hljs-string">"&lt;image&gt;"</span>, <span class="hljs-string">"&lt;fake_token_around_image&gt;"</span>], add_special_tokens=<span class="hljs-literal">False</span>).input_ids <span class="hljs-meta">&gt;&gt;&gt; </span>generated_ids = model.generate(**inputs, max_new_tokens=<span class="hljs-number">10</span>, bad_words_ids=bad_words_ids) <span class="hljs-meta">&gt;&gt;&gt; </span>generated_text = processor.batch_decode(generated_ids, skip_special_tokens=<span class="hljs-literal">True</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-built_in">print</span>(generated_text[<span class="hljs-number">0</span>]) This <span class="hljs-keyword">is</span> an image of the Eiffel Tower <span class="hljs-keyword">in</span> Paris, France.</pre></div> <h2 class="relative group"><a id="fewshot-prompting" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#fewshot-prompting"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-5vdttc">Few-shot prompting</span></h2> <p data-svelte-h="svelte-fytdtn">While IDEFICS demonstrates great zero-shot results, your task may require a certain format of the caption, or come with other restrictions or requirements that increase task’s complexity. Few-shot prompting can be used to enable in-context learning. By providing examples in the prompt, you can steer the model to generate results that mimic the format of given examples.</p> <p data-svelte-h="svelte-148v07a">Let’s use the previous image of the Eiffel Tower as an example for the model and build a prompt that demonstrates to the model that in addition to learning what the object in an image is, we would also like to get some interesting information about it. 
![Image of the Statue of Liberty](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/idefics-few-shot.jpg)

Photo by [Juan Mayobre](https://unsplash.com/@jmayobres).

```py
>>> prompt = ["User:",
...     "https://images.unsplash.com/photo-1543349689-9a4d426bee8e?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3501&q=80",
...     "Describe this image.\nAssistant: An image of the Eiffel Tower at night. Fun fact: the Eiffel Tower is the same height as an 81-storey building.\n",
...     "User:",
...     "https://images.unsplash.com/photo-1524099163253-32b7f0256868?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3387&q=80",
...     "Describe this image.\nAssistant:"
... ]

>>> inputs = processor(prompt, return_tensors="pt").to("cuda")
>>> bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids
>>> generated_ids = model.generate(**inputs, max_new_tokens=30, bad_words_ids=bad_words_ids)
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)
>>> print(generated_text[0])
User: Describe this image.
Assistant: An image of the Eiffel Tower at night. Fun fact: the Eiffel Tower is the same height as an 81-storey building.
User: Describe this image.
Assistant: An image of the Statue of Liberty. Fun fact: the Statue of Liberty is 151 feet tall.
```

Notice that just from a single example (i.e., 1-shot) the model has learned how to perform the task. For more complex tasks, feel free to experiment with a larger number of examples (e.g., 3-shot, 5-shot, etc.).

## Visual question answering

Visual Question Answering (VQA) is the task of answering open-ended questions based on an image. Similar to image captioning, it can be used in accessibility applications, but also in education (reasoning about visual materials), customer service (questions about products based on images), and image retrieval.

Let's get a new image for this task:

![Image of a couple having a picnic](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/idefics-vqa.jpg)

Photo by [Jarritos Mexican Soda](https://unsplash.com/@jarritos).

You can steer the model from image captioning to visual question answering by prompting it with appropriate instructions:

```py
>>> prompt = [
...     "Instruction: Provide an answer to the question. Use the image to answer.\n",
...     "https://images.unsplash.com/photo-1623944889288-cd147dbb517c?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3540&q=80",
...     "Question: Where are these people and what's the weather like? Answer:"
... ]

>>> inputs = processor(prompt, return_tensors="pt").to("cuda")
>>> bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids
>>> generated_ids = model.generate(**inputs, max_new_tokens=20, bad_words_ids=bad_words_ids)
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)
>>> print(generated_text[0])
Instruction: Provide an answer to the question. Use the image to answer.
 Question: Where are these people and what's the weather like? Answer: They're in a park in New York City, and it's a beautiful day.
```

## Image classification

IDEFICS is capable of classifying images into different categories without being explicitly trained on data containing labeled examples from those specific categories. Given a list of categories and using its image and text understanding capabilities, the model can infer which category the image likely belongs to.

Say we have this image of a vegetable stand:

![Image of a vegetable stand](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/idefics-classification.jpg)

Photo by [Peter Wendt](https://unsplash.com/@peterwendt).

We can instruct the model to classify the image into one of the categories that we have:

```py
>>> categories = ['animals', 'vegetables', 'city landscape', 'cars', 'office']
>>> prompt = [f"Instruction: Classify the following image into a single category from the following list: {categories}.\n",
...     "https://images.unsplash.com/photo-1471193945509-9ad0617afabf?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3540&q=80",
...     "Category: "
... ]

>>> inputs = processor(prompt, return_tensors="pt").to("cuda")
>>> bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids
>>> generated_ids = model.generate(**inputs, max_new_tokens=4, bad_words_ids=bad_words_ids)
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)
>>> print(generated_text[0])
Instruction: Classify the following image into a single category from the following list: ['animals', 'vegetables', 'city landscape', 'cars', 'office'].
Category: Vegetables
```

In the example above we instruct the model to classify the image into a single category; however, you can also prompt the model to do rank classification.

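As a rough illustration of rank classification, you could reuse the same setup but ask the model to order the categories instead of picking one. The instruction wording below is only a sketch, not a prescribed prompt:

```py
>>> prompt = [f"Instruction: Rank the following categories from most to least relevant to the image: {categories}.\n",
...     "https://images.unsplash.com/photo-1471193945509-9ad0617afabf?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3540&q=80",
...     "Ranking: "
... ]
>>> inputs = processor(prompt, return_tensors="pt").to("cuda")
>>> generated_ids = model.generate(**inputs, max_new_tokens=20, bad_words_ids=bad_words_ids)
>>> print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```
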
## Image-guided text generation

For more creative applications, you can use image-guided text generation to generate text based on an image. This can be useful for creating descriptions of products, ads, scenes, etc.

Let's prompt IDEFICS to write a story based on a simple image of a red door:

![Image of a red door with a pumpkin on the steps](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/idefics-story-generation.jpg)

Photo by [Craig Tidball](https://unsplash.com/@devonshiremedia).

```py
>>> prompt = ["Instruction: Use the image to write a story. \n",
...     "https://images.unsplash.com/photo-1517086822157-2b0358e7684a?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=2203&q=80",
...     "Story: \n"]

>>> inputs = processor(prompt, return_tensors="pt").to("cuda")
>>> bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids
>>> generated_ids = model.generate(**inputs, num_beams=2, max_new_tokens=200, bad_words_ids=bad_words_ids)
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)
>>> print(generated_text[0])
Instruction: Use the image to write a story. 
 Story: 
Once upon a time, there was a little girl who lived in a house with a red door. She loved her red door. It was the prettiest door in the whole world.

One day, the little girl was playing in her yard when she noticed a man standing on her doorstep. He was wearing a long black coat and a top hat.

The little girl ran inside and told her mother about the man. Her mother said, “Don’t worry, honey. He’s just a friendly ghost.”

The little girl wasn’t sure if she believed her mother, but she went outside anyway. When she got to the door, the man was gone.

The next day, the little girl was playing in her yard again when she noticed the man standing on her doorstep. He was wearing a long black coat and a top hat.

The little girl ran
```

Looks like IDEFICS noticed the pumpkin on the doorstep and went with a spooky Halloween story about a ghost.

> For longer outputs like this, you will greatly benefit from tweaking the text generation strategy. This can help you significantly improve the quality of the generated output. Check out [Text generation strategies](../generation_strategies) to learn more.

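For example, switching from beam search to sampling only requires changing the arguments passed to `generate`. The values below are placeholders to experiment with, not tuned recommendations:

```py
>>> generated_ids = model.generate(
...     **inputs,
...     do_sample=True,      # sample from the distribution instead of using beam search
...     temperature=0.7,     # placeholder value
...     top_p=0.9,           # placeholder value
...     max_new_tokens=200,
...     bad_words_ids=bad_words_ids,
... )
```
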
## Running inference in batch mode

All of the earlier sections illustrated IDEFICS for a single example. In a very similar fashion, you can run inference for a batch of examples by passing a list of prompts:

```
>>> prompts = [
...     [   "https://images.unsplash.com/photo-1543349689-9a4d426bee8e?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3501&q=80",
...         "This is an image of ",
...     ],
...     [   "https://images.unsplash.com/photo-1623944889288-cd147dbb517c?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3540&q=80",
...         "This is an image of ",
...     ],
...     [   "https://images.unsplash.com/photo-1471193945509-9ad0617afabf?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3540&q=80",
...         "This is an image of ",
...     ],
... ]

>>> inputs = processor(prompts, return_tensors="pt")

>>> bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids

>>> generated_ids = model.generate(**inputs, max_new_tokens=10, bad_words_ids=bad_words_ids)
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)
>>> for i, t in enumerate(generated_text):
...     print(f"{i}:\n{t}\n")
0:
This is an image of the Eiffel Tower in Paris, France.

1:
This is an image of a couple on a picnic blanket.

2:
This is an image of a vegetable stand.
```

## IDEFICS instruct for conversational use

For conversational use cases, you can find fine-tuned instructed versions of the model on the 🤗 Hub: `HuggingFaceM4/idefics-80b-instruct` and `HuggingFaceM4/idefics-9b-instruct`.

These checkpoints are the result of fine-tuning the respective base models on a mixture of supervised and instruction fine-tuning datasets, which boosts the downstream performance while making the models more usable in conversational settings.

The use and prompting for the conversational use is very similar to using the base models:
class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> IdeficsForVisionText2Text, AutoProcessor <span class="hljs-meta">&gt;&gt;&gt; </span>device = <span class="hljs-string">"cuda"</span> <span class="hljs-keyword">if</span> torch.cuda.is_available() <span class="hljs-keyword">else</span> <span class="hljs-string">"cpu"</span> <span class="hljs-meta">&gt;&gt;&gt; </span>checkpoint = <span class="hljs-string">"HuggingFaceM4/idefics-9b-instruct"</span> <span class="hljs-meta">&gt;&gt;&gt; </span>model = IdeficsForVisionText2Text.from_pretrained(checkpoint, torch_dtype=torch.bfloat16).to(device) <span class="hljs-meta">&gt;&gt;&gt; </span>processor = AutoProcessor.from_pretrained(checkpoint) <span class="hljs-meta">&gt;&gt;&gt; </span>prompts = [ <span class="hljs-meta">... </span> [ <span class="hljs-meta">... </span> <span class="hljs-string">"User: What is in this image?"</span>, <span class="hljs-meta">... </span> <span class="hljs-string">"https://upload.wikimedia.org/wikipedia/commons/8/86/Id%C3%A9fix.JPG"</span>, <span class="hljs-meta">... </span> <span class="hljs-string">"&lt;end_of_utterance&gt;"</span>, <span class="hljs-meta">... </span> <span class="hljs-string">"\nAssistant: This picture depicts Idefix, the dog of Obelix in Asterix and Obelix. Idefix is running on the ground.&lt;end_of_utterance&gt;"</span>, <span class="hljs-meta">... </span> <span class="hljs-string">"\nUser:"</span>, <span class="hljs-meta">... </span> <span class="hljs-string">"https://static.wikia.nocookie.net/asterix/images/2/25/R22b.gif/revision/latest?cb=20110815073052"</span>, <span class="hljs-meta">... </span> <span class="hljs-string">"And who is that?&lt;end_of_utterance&gt;"</span>, <span class="hljs-meta">... </span> <span class="hljs-string">"\nAssistant:"</span>, <span class="hljs-meta">... </span> ], <span class="hljs-meta">... 
</span>] <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># --batched mode</span> <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = processor(prompts, add_end_of_utterance_token=<span class="hljs-literal">False</span>, return_tensors=<span class="hljs-string">"pt"</span>).to(device) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># --single sample mode</span> <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># inputs = processor(prompts[0], return_tensors="pt").to(device)</span> <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># Generation args</span> <span class="hljs-meta">&gt;&gt;&gt; </span>exit_condition = processor.tokenizer(<span class="hljs-string">"&lt;end_of_utterance&gt;"</span>, add_special_tokens=<span class="hljs-literal">False</span>).input_ids <span class="hljs-meta">&gt;&gt;&gt; </span>bad_words_ids = processor.tokenizer([<span class="hljs-string">"&lt;image&gt;"</span>, <span class="hljs-string">"&lt;fake_token_around_image&gt;"</span>], add_special_tokens=<span class="hljs-literal">False</span>).input_ids <span class="hljs-meta">&gt;&gt;&gt; </span>generated_ids = model.generate(**inputs, eos_token_id=exit_condition, bad_words_ids=bad_words_ids, max_length=<span class="hljs-number">100</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>generated_text = processor.batch_decode(generated_ids, skip_special_tokens=<span class="hljs-literal">True</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">for</span> i, t <span class="hljs-keyword">in</span> <span class="hljs-built_in">enumerate</span>(generated_text): <span class="hljs-meta">... </span> <span class="hljs-built_in">print</span>(<span class="hljs-string">f"<span class="hljs-subst">{i}</span>:\n<span class="hljs-subst">{t}</span>\n"</span>)</pre></div> <p></p> <div id="svelte-announcer" aria-live="assertive" aria-atomic="true" style="position: absolute; left: 0px; top: 0px; clip: rect(0px, 0px, 0px, 0px); clip-path: inset(50%); overflow: hidden; white-space: nowrap; width: 1px; height: 1px;"></div></div> <div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/generation_strategies" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>Customize the generation strategy</a> <a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/fast_tokenizers" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">Use fast tokenizers from 🤗 Tokenizers<span class="ml-2 translate-y-px">→</span></a></div></div></div> <div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" data-props="{&quot;chapter&quot;:{&quot;title&quot;:&quot;Image tasks with IDEFICS&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;image-tasks-with-idefics&quot;,&quot;url&quot;:&quot;#image-tasks-with-idefics&quot;,&quot;sections&quot;:[{&quot;title&quot;:&quot;Loading the model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;loading-the-model&quot;,&quot;url&quot;:&quot;#loading-the-model&quot;,&quot;sections&quot;:[{&quot;title&quot;:&quot;Quantized 
2023-10-05T13:33:42.598Z
Text classification
https://huggingface.co/docs/transformers/v4.34.0/en/tasks/sequence_classification
# Text classification

Text classification is a common NLP task that assigns a label or class to text. Some of the largest companies run text classification in production for a wide range of practical applications. One of the most popular forms of text classification is sentiment analysis, which assigns a label like 🙂 positive, 🙁 negative, or 😐 neutral to a sequence of text.

This guide will show you how to:

1. Finetune [DistilBERT](https://huggingface.co/distilbert-base-uncased) on the [IMDb](https://huggingface.co/datasets/imdb) dataset to determine whether a movie review is positive or negative.
2. Use your finetuned model for inference.

The task illustrated in this tutorial is supported by the following model architectures:

[ALBERT](../model_doc/albert), [BART](../model_doc/bart), [BERT](../model_doc/bert), [BigBird](../model_doc/big_bird), [BigBird-Pegasus](../model_doc/bigbird_pegasus), [BioGpt](../model_doc/biogpt), [BLOOM](../model_doc/bloom), [CamemBERT](../model_doc/camembert), [CANINE](../model_doc/canine), [CodeLlama](../model_doc/code_llama), [ConvBERT](../model_doc/convbert), [CTRL](../model_doc/ctrl), [Data2VecText](../model_doc/data2vec-text), [DeBERTa](../model_doc/deberta), [DeBERTa-v2](../model_doc/deberta-v2), [DistilBERT](../model_doc/distilbert), [ELECTRA](../model_doc/electra), [ERNIE](../model_doc/ernie), [ErnieM](../model_doc/ernie_m), [ESM](../model_doc/esm), [Falcon](../model_doc/falcon), [FlauBERT](../model_doc/flaubert), [FNet](../model_doc/fnet), [Funnel Transformer](../model_doc/funnel), [GPT-Sw3](../model_doc/gpt-sw3), [OpenAI GPT-2](../model_doc/gpt2), [GPTBigCode](../model_doc/gpt_bigcode), [GPT Neo](../model_doc/gpt_neo), [GPT NeoX](../model_doc/gpt_neox), [GPT-J](../model_doc/gptj), [I-BERT](../model_doc/ibert), [LayoutLM](../model_doc/layoutlm), [LayoutLMv2](../model_doc/layoutlmv2), [LayoutLMv3](../model_doc/layoutlmv3), [LED](../model_doc/led), [LiLT](../model_doc/lilt), [LLaMA](../model_doc/llama), [Longformer](../model_doc/longformer), [LUKE](../model_doc/luke), [MarkupLM](../model_doc/markuplm), [mBART](../model_doc/mbart), [MEGA](../model_doc/mega), [Megatron-BERT](../model_doc/megatron-bert), [Mistral](../model_doc/mistral), [MobileBERT](../model_doc/mobilebert), [MPNet](../model_doc/mpnet), [MPT](../model_doc/mpt), [MRA](../model_doc/mra), [MT5](../model_doc/mt5), [MVP](../model_doc/mvp), [Nezha](../model_doc/nezha), [Nyströmformer](../model_doc/nystromformer), [OpenLlama](../model_doc/open-llama), [OpenAI GPT](../model_doc/openai-gpt), [OPT](../model_doc/opt), [Perceiver](../model_doc/perceiver), [Persimmon](../model_doc/persimmon), [PLBart](../model_doc/plbart), [QDQBert](../model_doc/qdqbert), [Reformer](../model_doc/reformer), [RemBERT](../model_doc/rembert), [RoBERTa](../model_doc/roberta), [RoBERTa-PreLayerNorm](../model_doc/roberta-prelayernorm), [RoCBert](../model_doc/roc_bert), [RoFormer](../model_doc/roformer), [SqueezeBERT](../model_doc/squeezebert), [T5](../model_doc/t5), [TAPAS](../model_doc/tapas), [Transformer-XL](../model_doc/transfo-xl), [UMT5](../model_doc/umt5), [XLM](../model_doc/xlm), [XLM-RoBERTa](../model_doc/xlm-roberta), [XLM-RoBERTa-XL](../model_doc/xlm-roberta-xl), [XLNet](../model_doc/xlnet), [X-MOD](../model_doc/xmod), [YOSO](../model_doc/yoso)

Before you begin, make sure you have all the necessary libraries installed:

```
pip install transformers datasets evaluate
```

We encourage you to log in to your Hugging Face account so you can upload and share your model with the community.
When prompted, enter your token to log in:

```
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```

## Load IMDb dataset

Start by loading the IMDb dataset from the 🤗 Datasets library:

```
>>> from datasets import load_dataset

>>> imdb = load_dataset("imdb")
```

Then take a look at an example:

```
>>> imdb["test"][0]
{
    "label": 0,
    "text": "I love sci-fi and am willing to put up with a lot. Sci-fi movies/TV are usually underfunded, under-appreciated and misunderstood. I tried to like this, I really did, but it is to good TV sci-fi as Babylon 5 is to Star Trek (the original). Silly prosthetics, cheap cardboard sets, stilted dialogues, CG that doesn't match the background, and painfully one-dimensional characters cannot be overcome with a 'sci-fi' setting. (I'm sure there are those of you out there who think Babylon 5 is good sci-fi TV. It's not. It's clichéd and uninspiring.) While US viewers might like emotion and character development, sci-fi is a genre that does not take itself seriously (cf. Star Trek). It may treat important issues, yet not as a serious philosophy. It's really difficult to care about the characters here as they are not simply foolish, just missing a spark of life. Their actions and reactions are wooden and predictable, often painful to watch. The makers of Earth KNOW it's rubbish as they have to always say \"Gene Roddenberry's Earth...\" otherwise people would not continue watching. Roddenberry's ashes must be turning in their orbit as this dull, cheap, poorly edited (watching it without advert breaks really brings this home) trudging Trabant of a show lumbers into space. Spoiler. So, kill off a main character. And then bring him back as another actor. Jeeez! Dallas all over again.",
}
```

There are two fields in this dataset:

- `text`: the movie review text.
- `label`: a value that is either `0` for a negative review or `1` for a positive review.

## Preprocess

The next step is to load a DistilBERT tokenizer to preprocess the `text` field:

```
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
```

Create a preprocessing function to tokenize `text` and truncate sequences to be no longer than DistilBERT's maximum input length:

```
>>> def preprocess_function(examples):
...     return tokenizer(examples["text"], truncation=True)
```

To apply the preprocessing function over the entire dataset, use the 🤗 Datasets [map](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.map) function. You can speed up `map` by setting `batched=True` to process multiple elements of the dataset at once:

```
tokenized_imdb = imdb.map(preprocess_function, batched=True)
```
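As a quick optional sanity check (not part of the original guide), you can peek at one mapped example to confirm that the tokenizer outputs were added next to the original columns; for DistilBERT these are `input_ids` and `attention_mask`, so you should see something like:

```
>>> # The original "text" and "label" columns are kept; the tokenizer adds its outputs.
>>> tokenized_imdb["train"][0].keys()
dict_keys(['text', 'label', 'input_ids', 'attention_mask'])
```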
Now create a batch of examples using [DataCollatorWithPadding](/docs/transformers/v4.34.0/en/main_classes/data_collator#transformers.DataCollatorWithPadding). It's more efficient to _dynamically pad_ the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length.

```
>>> from transformers import DataCollatorWithPadding

>>> data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
```

In TensorFlow, return TensorFlow tensors instead:

```
>>> from transformers import DataCollatorWithPadding

>>> data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf")
```

## Evaluate

Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load the [accuracy](https://huggingface.co/spaces/evaluate-metric/accuracy) metric (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric):

```
>>> import evaluate

>>> accuracy = evaluate.load("accuracy")
```

Then create a function that passes your predictions and labels to [compute](https://huggingface.co/docs/evaluate/v0.4.0/en/package_reference/main_classes#evaluate.EvaluationModule.compute) to calculate the accuracy:

```
>>> import numpy as np

>>> def compute_metrics(eval_pred):
...     predictions, labels = eval_pred
...     predictions = np.argmax(predictions, axis=1)
...     return accuracy.compute(predictions=predictions, references=labels)
```

Your `compute_metrics` function is ready to go now, and you'll return to it when you set up your training.

## Train

Before you start training your model, create a map of the expected ids to their labels with `id2label` and `label2id`:

```
>>> id2label = {0: "NEGATIVE", 1: "POSITIVE"}
>>> label2id = {"NEGATIVE": 0, "POSITIVE": 1}
```

If you aren't familiar with finetuning a model with the [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer), take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!

You're ready to start training your model now! Load DistilBERT with [AutoModelForSequenceClassification](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoModelForSequenceClassification) along with the number of expected labels, and the label mappings:

```
>>> from transformers import AutoModelForSequenceClassification, TrainingArguments, Trainer

>>> model = AutoModelForSequenceClassification.from_pretrained(
...     "distilbert-base-uncased", num_labels=2, id2label=id2label, label2id=label2id
... )
```

At this point, only three steps remain:

1. Define your training hyperparameters in [TrainingArguments](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.TrainingArguments). The only required parameter is `output_dir` which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) will evaluate the accuracy and save the training checkpoint.
2. Pass the training arguments to [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) along with the model, dataset, tokenizer, data collator, and `compute_metrics` function.
3. Call [train()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.train) to finetune your model.

```
>>> training_args = TrainingArguments(
...     output_dir="my_awesome_model",
...     learning_rate=2e-5,
...     per_device_train_batch_size=16,
...     per_device_eval_batch_size=16,
...     num_train_epochs=2,
...     weight_decay=0.01,
...     evaluation_strategy="epoch",
...     save_strategy="epoch",
...     load_best_model_at_end=True,
...     push_to_hub=True,
... )

>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=tokenized_imdb["train"],
...     eval_dataset=tokenized_imdb["test"],
...     tokenizer=tokenizer,
...     data_collator=data_collator,
...     compute_metrics=compute_metrics,
... )

>>> trainer.train()
```

[Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) applies dynamic padding by default when you pass `tokenizer` to it. In this case, you don't need to specify a data collator explicitly.
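As an optional extra step (not part of the original recipe), you can double-check the final accuracy on the evaluation split before sharing the model. `Trainer.evaluate()` returns a dictionary whose metric keys are prefixed with `eval_`, so the accuracy computed by `compute_metrics` shows up as `eval_accuracy`:

```
>>> # Optional sanity check: run evaluation on the eval_dataset passed to the Trainer.
>>> metrics = trainer.evaluate()
>>> print(metrics["eval_accuracy"])
```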
Once training is completed, share your model to the Hub with the [push\_to\_hub()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.push_to_hub) method so everyone can use your model:

```
>>> trainer.push_to_hub()
```

If you aren't familiar with finetuning a model with Keras, take a look at the basic tutorial [here](../training#train-a-tensorflow-model-with-keras)!

To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:

```
>>> from transformers import create_optimizer
>>> import tensorflow as tf

>>> batch_size = 16
>>> num_epochs = 5
>>> batches_per_epoch = len(tokenized_imdb["train"]) // batch_size
>>> total_train_steps = int(batches_per_epoch * num_epochs)
>>> optimizer, schedule = create_optimizer(init_lr=2e-5, num_warmup_steps=0, num_train_steps=total_train_steps)
```

Then you can load DistilBERT with [TFAutoModelForSequenceClassification](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.TFAutoModelForSequenceClassification) along with the number of expected labels, and the label mappings:

```
>>> from transformers import TFAutoModelForSequenceClassification

>>> model = TFAutoModelForSequenceClassification.from_pretrained(
...     "distilbert-base-uncased", num_labels=2, id2label=id2label, label2id=label2id
... )
```

Convert your datasets to the `tf.data.Dataset` format with [prepare\_tf\_dataset()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel.prepare_tf_dataset):

```
>>> tf_train_set = model.prepare_tf_dataset(
...     tokenized_imdb["train"],
...     shuffle=True,
...     batch_size=16,
...     collate_fn=data_collator,
... )

>>> tf_validation_set = model.prepare_tf_dataset(
...     tokenized_imdb["test"],
...     shuffle=False,
...     batch_size=16,
...     collate_fn=data_collator,
... )
```

Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method). Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to:

```
>>> import tensorflow as tf

>>> model.compile(optimizer=optimizer)
```

The last two things to set up before you start training are to compute the accuracy from the predictions, and to provide a way to push your model to the Hub. Both are done by using [Keras callbacks](../main_classes/keras_callbacks).

Pass your `compute_metrics` function to [KerasMetricCallback](/docs/transformers/v4.34.0/en/main_classes/keras_callbacks#transformers.KerasMetricCallback):

```
>>> from transformers.keras_callbacks import KerasMetricCallback

>>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set)
```

Specify where to push your model and tokenizer in the [PushToHubCallback](/docs/transformers/v4.34.0/en/main_classes/keras_callbacks#transformers.PushToHubCallback):

```
>>> from transformers.keras_callbacks import PushToHubCallback

>>> push_to_hub_callback = PushToHubCallback(
...     output_dir="my_awesome_model",
...     tokenizer=tokenizer,
... )
```

Then bundle your callbacks together:

```
>>> callbacks = [metric_callback, push_to_hub_callback]
```

Finally, you're ready to start training your model!
Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training and validation datasets, the number of epochs, and your callbacks to finetune the model:

```
>>> model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=3, callbacks=callbacks)
```

Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!

For a more in-depth example of how to finetune a model for text classification, take a look at the corresponding [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb) or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb).

## Inference

Great, now that you've finetuned a model, you can use it for inference!

Grab some text you'd like to run inference on:

```
>>> text = "This was a masterpiece. Not completely faithful to the books, but enthralling from beginning to end. Might be my favorite of the three."
```

The simplest way to try out your finetuned model for inference is to use it in a [pipeline()](/docs/transformers/v4.34.0/en/main_classes/pipelines#transformers.pipeline). Instantiate a `pipeline` for sentiment analysis with your model, and pass your text to it:

```
>>> from transformers import pipeline

>>> classifier = pipeline("sentiment-analysis", model="stevhliu/my_awesome_model")
>>> classifier(text)
[{'label': 'POSITIVE', 'score': 0.9994940757751465}]
```

You can also manually replicate the results of the `pipeline` if you'd like:

Tokenize the text and return PyTorch tensors:

```
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_model")
>>> inputs = tokenizer(text, return_tensors="pt")
```

Pass your inputs to the model and return the `logits`:

```
>>> import torch
>>> from transformers import AutoModelForSequenceClassification

>>> model = AutoModelForSequenceClassification.from_pretrained("stevhliu/my_awesome_model")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
```

Get the class with the highest probability, and use the model's `id2label` mapping to convert it to a text label:

```
>>> predicted_class_id = logits.argmax().item()
>>> model.config.id2label[predicted_class_id]
'POSITIVE'
```

Tokenize the text and return TensorFlow tensors:

```
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_model")
>>> inputs = tokenizer(text, return_tensors="tf")
```

Pass your inputs to the model and return the `logits`:

```
>>> from transformers import TFAutoModelForSequenceClassification

>>> model = TFAutoModelForSequenceClassification.from_pretrained("stevhliu/my_awesome_model")
>>> logits = model(**inputs).logits
```

Get the class with the highest probability, and use the model's `id2label` mapping to convert it to a text label:

```
>>> predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
>>> model.config.id2label[predicted_class_id]
'POSITIVE'
```
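If you also want a confidence score comparable to the `score` returned by the pipeline, a small optional sketch (not part of the original guide) is to normalize the logits with a softmax. It is shown here for the TensorFlow path; `torch.softmax(logits, dim=-1)` is the PyTorch equivalent:

```
>>> import tensorflow as tf

>>> # Convert the raw logits to class probabilities; the probability of the predicted
>>> # class corresponds to the score reported by the pipeline.
>>> probabilities = tf.nn.softmax(logits, axis=-1)
>>> float(probabilities[0][predicted_class_id])
```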
model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;big_models&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/big_models&quot;},{&quot;title&quot;:&quot;Troubleshooting&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;debugging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/debugging&quot;},{&quot;title&quot;:&quot;XLA Integration for TensorFlow Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tf_xla&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tf_xla&quot;},{&quot;title&quot;:&quot;Optimize inference using `torch.compile()`&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_torch_compile&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_torch_compile&quot;}]},{&quot;title&quot;:&quot;Contribute&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;How to contribute to transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;contributing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/contributing&quot;},{&quot;title&quot;:&quot;How to add a model to 🤗 Transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_new_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_new_model&quot;},{&quot;title&quot;:&quot;How to convert a 🤗 Transformers model to TensorFlow?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_tensorflow_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_tensorflow_model&quot;},{&quot;title&quot;:&quot;How to add a pipeline to 🤗 Transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_new_pipeline&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_new_pipeline&quot;},{&quot;title&quot;:&quot;Testing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;testing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/testing&quot;},{&quot;title&quot;:&quot;Checks on a Pull Request&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pr_checks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pr_checks&quot;}]},{&quot;title&quot;:&quot;Conceptual guides&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Philosophy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;philosophy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/philosophy&quot;},{&quot;title&quot;:&quot;Glossary&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;glossary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/glossary&quot;},{&quot;title&quot;:&quot;What 🤗 Transformers can do&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;task_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/task_summary&quot;},{&quot;title&quot;:&quot;How 🤗 Transformers solve tasks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tasks_explained&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks_explained&quot;},{&quot;title&quot;:&quot;The Transformer model family&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_summary&quot;},{&quot;title&quot;:&quot;Summary of the tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tokenizer_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tokenizer_summary&quot;},{&quot;title&quot;:&quot;Attention 
mechanisms&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;attention&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/attention&quot;},{&quot;title&quot;:&quot;Padding and truncation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pad_truncation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pad_truncation&quot;},{&quot;title&quot;:&quot;BERTology&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;bertology&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/bertology&quot;},{&quot;title&quot;:&quot;Perplexity of fixed-length models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perplexity&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perplexity&quot;},{&quot;title&quot;:&quot;Pipelines for webserver inference&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pipeline_webserver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pipeline_webserver&quot;},{&quot;title&quot;:&quot;Model training anatomy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_memory_anatomy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_memory_anatomy&quot;}]},{&quot;title&quot;:&quot;API&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Main Classes&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Agents and Tools&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/agent&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/agent&quot;},{&quot;title&quot;:&quot;Auto Classes&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/auto&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/auto&quot;},{&quot;title&quot;:&quot;Callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/callback&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/callback&quot;},{&quot;title&quot;:&quot;Configuration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/configuration&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/configuration&quot;},{&quot;title&quot;:&quot;Data Collator&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/data_collator&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/data_collator&quot;},{&quot;title&quot;:&quot;Keras callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/keras_callbacks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/keras_callbacks&quot;},{&quot;title&quot;:&quot;Logging&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/logging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/logging&quot;},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/model&quot;},{&quot;title&quot;:&quot;Text 
Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/text_generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/text_generation&quot;},{&quot;title&quot;:&quot;ONNX&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/onnx&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/onnx&quot;},{&quot;title&quot;:&quot;Optimization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/optimizer_schedules&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/optimizer_schedules&quot;},{&quot;title&quot;:&quot;Model outputs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/output&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/output&quot;},{&quot;title&quot;:&quot;Pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/pipelines&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/pipelines&quot;},{&quot;title&quot;:&quot;Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/processors&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/processors&quot;},{&quot;title&quot;:&quot;Quantization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/quantization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/quantization&quot;},{&quot;title&quot;:&quot;Tokenizer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/tokenizer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/tokenizer&quot;},{&quot;title&quot;:&quot;Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/trainer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/trainer&quot;},{&quot;title&quot;:&quot;DeepSpeed Integration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/deepspeed&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/deepspeed&quot;},{&quot;title&quot;:&quot;Feature Extractor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/feature_extractor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/feature_extractor&quot;},{&quot;title&quot;:&quot;Image Processor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/image_processor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/image_processor&quot;}]},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Text 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALBERT&quot;,&quot;id&quot;:&quot;model_doc/albert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/albert&quot;},{&quot;title&quot;:&quot;BART&quot;,&quot;id&quot;:&quot;model_doc/bart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bart&quot;},{&quot;title&quot;:&quot;BARThez&quot;,&quot;id&quot;:&quot;model_doc/barthez&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/barthez&quot;},{&quot;title&quot;:&quot;BARTpho&quot;,&quot;id&quot;:&quot;model_doc/bartpho&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bartpho&quot;},{&quot;title&quot;:&quot;BERT&quot;,&quot;id&quot;:&quot;model_doc/bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert&quot;},{&quot;title&quot;:&quot;BertGeneration&quot;,&quot;id&quot;:&quot;model_doc/bert-generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-generation&quot;},{&quot;title&quot;:&quot;BertJapanese&quot;,&quot;id&quot;:&quot;model_doc/bert-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-japanese&quot;},{&quot;title&quot;:&quot;Bertweet&quot;,&quot;id&quot;:&quot;model_doc/bertweet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bertweet&quot;},{&quot;title&quot;:&quot;BigBird&quot;,&quot;id&quot;:&quot;model_doc/big_bird&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/big_bird&quot;},{&quot;title&quot;:&quot;BigBirdPegasus&quot;,&quot;id&quot;:&quot;model_doc/bigbird_pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bigbird_pegasus&quot;},{&quot;title&quot;:&quot;BioGpt&quot;,&quot;id&quot;:&quot;model_doc/biogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/biogpt&quot;},{&quot;title&quot;:&quot;Blenderbot&quot;,&quot;id&quot;:&quot;model_doc/blenderbot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot&quot;},{&quot;title&quot;:&quot;Blenderbot 
Small&quot;,&quot;id&quot;:&quot;model_doc/blenderbot-small&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot-small&quot;},{&quot;title&quot;:&quot;BLOOM&quot;,&quot;id&quot;:&quot;model_doc/bloom&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bloom&quot;},{&quot;title&quot;:&quot;BORT&quot;,&quot;id&quot;:&quot;model_doc/bort&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bort&quot;},{&quot;title&quot;:&quot;ByT5&quot;,&quot;id&quot;:&quot;model_doc/byt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/byt5&quot;},{&quot;title&quot;:&quot;CamemBERT&quot;,&quot;id&quot;:&quot;model_doc/camembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/camembert&quot;},{&quot;title&quot;:&quot;CANINE&quot;,&quot;id&quot;:&quot;model_doc/canine&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/canine&quot;},{&quot;title&quot;:&quot;CodeGen&quot;,&quot;id&quot;:&quot;model_doc/codegen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/codegen&quot;},{&quot;title&quot;:&quot;CodeLlama&quot;,&quot;id&quot;:&quot;model_doc/code_llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/code_llama&quot;},{&quot;title&quot;:&quot;ConvBERT&quot;,&quot;id&quot;:&quot;model_doc/convbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convbert&quot;},{&quot;title&quot;:&quot;CPM&quot;,&quot;id&quot;:&quot;model_doc/cpm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpm&quot;},{&quot;title&quot;:&quot;CPMANT&quot;,&quot;id&quot;:&quot;model_doc/cpmant&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpmant&quot;},{&quot;title&quot;:&quot;CTRL&quot;,&quot;id&quot;:&quot;model_doc/ctrl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ctrl&quot;},{&quot;title&quot;:&quot;DeBERTa&quot;,&quot;id&quot;:&quot;model_doc/deberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta&quot;},{&quot;title&quot;:&quot;DeBERTa-v2&quot;,&quot;id&quot;:&quot;model_doc/deberta-v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta-v2&quot;},{&quot;title&quot;:&quot;DialoGPT&quot;,&quot;id&quot;:&quot;model_doc/dialogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dialogpt&quot;},{&quot;title&quot;:&quot;DistilBERT&quot;,&quot;id&quot;:&quot;model_doc/distilbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/distilbert&quot;},{&quot;title&quot;:&quot;DPR&quot;,&quot;id&quot;:&quot;model_doc/dpr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpr&quot;},{&quot;title&quot;:&quot;ELECTRA&quot;,&quot;id&quot;:&quot;model_doc/electra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/electra&quot;},{&quot;title&quot;:&quot;Encoder Decoder 
Models&quot;,&quot;id&quot;:&quot;model_doc/encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encoder-decoder&quot;},{&quot;title&quot;:&quot;ERNIE&quot;,&quot;id&quot;:&quot;model_doc/ernie&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie&quot;},{&quot;title&quot;:&quot;ErnieM&quot;,&quot;id&quot;:&quot;model_doc/ernie_m&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie_m&quot;},{&quot;title&quot;:&quot;ESM&quot;,&quot;id&quot;:&quot;model_doc/esm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/esm&quot;},{&quot;title&quot;:&quot;Falcon&quot;,&quot;id&quot;:&quot;model_doc/falcon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/falcon&quot;},{&quot;title&quot;:&quot;FLAN-T5&quot;,&quot;id&quot;:&quot;model_doc/flan-t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-t5&quot;},{&quot;title&quot;:&quot;FLAN-UL2&quot;,&quot;id&quot;:&quot;model_doc/flan-ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-ul2&quot;},{&quot;title&quot;:&quot;FlauBERT&quot;,&quot;id&quot;:&quot;model_doc/flaubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flaubert&quot;},{&quot;title&quot;:&quot;FNet&quot;,&quot;id&quot;:&quot;model_doc/fnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fnet&quot;},{&quot;title&quot;:&quot;FSMT&quot;,&quot;id&quot;:&quot;model_doc/fsmt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fsmt&quot;},{&quot;title&quot;:&quot;Funnel Transformer&quot;,&quot;id&quot;:&quot;model_doc/funnel&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/funnel&quot;},{&quot;title&quot;:&quot;GPT&quot;,&quot;id&quot;:&quot;model_doc/openai-gpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/openai-gpt&quot;},{&quot;title&quot;:&quot;GPT Neo&quot;,&quot;id&quot;:&quot;model_doc/gpt_neo&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neo&quot;},{&quot;title&quot;:&quot;GPT NeoX&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox&quot;},{&quot;title&quot;:&quot;GPT NeoX Japanese&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox_japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese&quot;},{&quot;title&quot;:&quot;GPT-J&quot;,&quot;id&quot;:&quot;model_doc/gptj&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptj&quot;},{&quot;title&quot;:&quot;GPT2&quot;,&quot;id&quot;:&quot;model_doc/gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt2&quot;},{&quot;title&quot;:&quot;GPTBigCode&quot;,&quot;id&quot;:&quot;model_doc/gpt_bigcode&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode&quot;},{&quot;title&quot;:&quot;GPTSAN 
Japanese&quot;,&quot;id&quot;:&quot;model_doc/gptsan-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese&quot;},{&quot;title&quot;:&quot;GPTSw3&quot;,&quot;id&quot;:&quot;model_doc/gpt-sw3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt-sw3&quot;},{&quot;title&quot;:&quot;HerBERT&quot;,&quot;id&quot;:&quot;model_doc/herbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/herbert&quot;},{&quot;title&quot;:&quot;I-BERT&quot;,&quot;id&quot;:&quot;model_doc/ibert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ibert&quot;},{&quot;title&quot;:&quot;Jukebox&quot;,&quot;id&quot;:&quot;model_doc/jukebox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/jukebox&quot;},{&quot;title&quot;:&quot;LED&quot;,&quot;id&quot;:&quot;model_doc/led&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/led&quot;},{&quot;title&quot;:&quot;LLaMA&quot;,&quot;id&quot;:&quot;model_doc/llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama&quot;},{&quot;title&quot;:&quot;Llama2&quot;,&quot;id&quot;:&quot;model_doc/llama2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama2&quot;},{&quot;title&quot;:&quot;Longformer&quot;,&quot;id&quot;:&quot;model_doc/longformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longformer&quot;},{&quot;title&quot;:&quot;LongT5&quot;,&quot;id&quot;:&quot;model_doc/longt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longt5&quot;},{&quot;title&quot;:&quot;LUKE&quot;,&quot;id&quot;:&quot;model_doc/luke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/luke&quot;},{&quot;title&quot;:&quot;M2M100&quot;,&quot;id&quot;:&quot;model_doc/m2m_100&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/m2m_100&quot;},{&quot;title&quot;:&quot;MarianMT&quot;,&quot;id&quot;:&quot;model_doc/marian&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/marian&quot;},{&quot;title&quot;:&quot;MarkupLM&quot;,&quot;id&quot;:&quot;model_doc/markuplm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/markuplm&quot;},{&quot;title&quot;:&quot;MBart and 
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot;PLBart&quot;,&quot;id&quot;
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/
model_doc/wavlm&quot;},{&quot;title&quot;:&quot;Whisper&quot;,&quot;id&quot;:&quot;model_doc/whisper&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/whisper&quot;},{&quot;title&quot;:&quot;XLS-R&quot;,&quot;id&quot;:&quot;model_doc/xls_r&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xls_r&quot;},{&quot;title&quot;:&quot;XLSR-Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/xlsr_wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2&quot;}]},{&quot;title&quot;:&quot;Multimodal models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALIGN&quot;,&quot;id&quot;:&quot;model_doc/align&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/align&quot;},{&quot;title&quot;:&quot;AltCLIP&quot;,&quot;id&quot;:&quot;model_doc/altclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/altclip&quot;},{&quot;title&quot;:&quot;BLIP&quot;,&quot;id&quot;:&quot;model_doc/blip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip&quot;},{&quot;title&quot;:&quot;BLIP-2&quot;,&quot;id&quot;:&quot;model_doc/blip-2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip-2&quot;},{&quot;title&quot;:&quot;BridgeTower&quot;,&quot;id&quot;:&quot;model_doc/bridgetower&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bridgetower&quot;},{&quot;title&quot;:&quot;BROS&quot;,&quot;id&quot;:&quot;model_doc/bros&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bros&quot;},{&quot;title&quot;:&quot;Chinese-CLIP&quot;,&quot;id&quot;:&quot;model_doc/chinese_clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/chinese_clip&quot;},{&quot;title&quot;:&quot;CLIP&quot;,&quot;id&quot;:&quot;model_doc/clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clip&quot;},{&quot;title&quot;:&quot;CLIPSeg&quot;,&quot;id&quot;:&quot;model_doc/clipseg&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clipseg&quot;},{&quot;title&quot;:&quot;Data2Vec&quot;,&quot;id&quot;:&quot;model_doc/data2vec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/data2vec&quot;},{&quot;title&quot;:&quot;DePlot&quot;,&quot;id&quot;:&quot;model_doc/deplot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deplot&quot;},{&quot;title&quot;:&quot;Donut&quot;,&quot;id&quot;:&quot;model_doc/donut&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/donut&quot;},{&quot;title&quot;:&quot;FLAVA&quot;,&quot;id&quot;:&quot;model_doc/flava&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flava&quot;},{&quot;title&quot;:&quot;GIT&quot;,&quot;id&quot;:&quot;model_doc/git&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/git&quot;},{&quot;title&quot;:&quot;GroupViT&quot;,&quot;id&quot;:&quot;model_doc/groupvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/groupvit&quot;},{&quot;title&quot;:&quot;IDEFICS&quot;,&quot;id&quot;:&quot;model_doc/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/idefics&quot;},{&quot;title&quot;:&quot;InstructBLIP&quot;,&quot;id&quot;:&quot;model_doc/instructblip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/instructblip&quot;},{&quot;title&quot;:&quot;LayoutLM&quot;,&quot;id&quot;:&quot;model_doc/layoutlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlm&quot;},{&quot;title&quot;:&quot;LayoutLMV2&quot;,&quot;i
d&quot;:&quot;model_doc/layoutlmv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv2&quot;},{&quot;title&quot;:&quot;LayoutLMV3&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv3&quot;},{&quot;title&quot;:&quot;LayoutXLM&quot;,&quot;id&quot;:&quot;model_doc/layoutxlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutxlm&quot;},{&quot;title&quot;:&quot;LiLT&quot;,&quot;id&quot;:&quot;model_doc/lilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lilt&quot;},{&quot;title&quot;:&quot;LXMERT&quot;,&quot;id&quot;:&quot;model_doc/lxmert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lxmert&quot;},{&quot;title&quot;:&quot;MatCha&quot;,&quot;id&quot;:&quot;model_doc/matcha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/matcha&quot;},{&quot;title&quot;:&quot;MGP-STR&quot;,&quot;id&quot;:&quot;model_doc/mgp-str&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mgp-str&quot;},{&quot;title&quot;:&quot;Nougat&quot;,&quot;id&quot;:&quot;model_doc/nougat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nougat&quot;},{&quot;title&quot;:&quot;OneFormer&quot;,&quot;id&quot;:&quot;model_doc/oneformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/oneformer&quot;},{&quot;title&quot;:&quot;OWL-ViT&quot;,&quot;id&quot;:&quot;model_doc/owlvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/owlvit&quot;},{&quot;title&quot;:&quot;Perceiver&quot;,&quot;id&quot;:&quot;model_doc/perceiver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/perceiver&quot;},{&quot;title&quot;:&quot;Pix2Struct&quot;,&quot;id&quot;:&quot;model_doc/pix2struct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pix2struct&quot;},{&quot;title&quot;:&quot;Segment Anything&quot;,&quot;id&quot;:&quot;model_doc/sam&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sam&quot;},{&quot;title&quot;:&quot;Speech Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/speech-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder&quot;},{&quot;title&quot;:&quot;TAPAS&quot;,&quot;id&quot;:&quot;model_doc/tapas&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapas&quot;},{&quot;title&quot;:&quot;TrOCR&quot;,&quot;id&quot;:&quot;model_doc/trocr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trocr&quot;},{&quot;title&quot;:&quot;TVLT&quot;,&quot;id&quot;:&quot;model_doc/tvlt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tvlt&quot;},{&quot;title&quot;:&quot;ViLT&quot;,&quot;id&quot;:&quot;model_doc/vilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vilt&quot;},{&quot;title&quot;:&quot;Vision Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/vision-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder&quot;},{&quot;title&quot;:&quot;Vision Text Dual 
Encoder&quot;,&quot;id&quot;:&quot;model_doc/vision-text-dual-encoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder&quot;},{&quot;title&quot;:&quot;VisualBERT&quot;,&quot;id&quot;:&quot;model_doc/visual_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/visual_bert&quot;},{&quot;title&quot;:&quot;X-CLIP&quot;,&quot;id&quot;:&quot;model_doc/xclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xclip&quot;}]},{&quot;title&quot;:&quot;Reinforcement learning models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Decision Transformer&quot;,&quot;id&quot;:&quot;model_doc/decision_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/decision_transformer&quot;},{&quot;title&quot;:&quot;Trajectory Transformer&quot;,&quot;id&quot;:&quot;model_doc/trajectory_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer&quot;}]},{&quot;title&quot;:&quot;Time series models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Autoformer&quot;,&quot;id&quot;:&quot;model_doc/autoformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/autoformer&quot;},{&quot;title&quot;:&quot;Informer&quot;,&quot;id&quot;:&quot;model_doc/informer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/informer&quot;},{&quot;title&quot;:&quot;Time Series Transformer&quot;,&quot;id&quot;:&quot;model_doc/time_series_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/time_series_transformer&quot;}]},{&quot;title&quot;:&quot;Graph models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Graphormer&quot;,&quot;id&quot;:&quot;model_doc/graphormer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/graphormer&quot;}]}]},{&quot;title&quot;:&quot;Internal Helpers&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Custom Layers and Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/modeling_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/modeling_utils&quot;},{&quot;title&quot;:&quot;Utilities for pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/pipelines_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/pipelines_utils&quot;},{&quot;title&quot;:&quot;Utilities for Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/tokenization_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/tokenization_utils&quot;},{&quot;title&quot;:&quot;Utilities for Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/trainer_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/trainer_utils&quot;},{&quot;title&quot;:&quot;Utilities for Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/generation_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/generation_utils&quot;},{&quot;title&quot;:&quot;Utilities for Image Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/image_processing_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/image_processing_utils&quot;},{&quot;title&quot;:&quot;Utilities for Audio 
preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1olarbu">Text classification</span></h1> <div class="flex space-x-1 absolute z-10 right-0 top-0"> <div class="relative colab-dropdown "><button class=" " type="button"><img alt="Open In Colab" class="!m-0" src="https://colab.research.google.com/assets/colab-badge.svg"></button> </div> <div class="relative colab-dropdown "><button class=" " type="button"><img alt="Open In Studio Lab" class="!m-0" src="https://studiolab.sagemaker.aws/studiolab.svg"></button> </div></div> <iframe class="w-full xl:w-4/6 h-80" src="https://www.youtube-nocookie.com/embed/leNG9fN9FQU" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe> <p data-svelte-h="svelte-a44la3">Text classification is a common NLP task that assigns a label or class to text. Some of the largest companies run text classification in production for a wide range of practical applications. One of the most popular forms of text classification is sentiment analysis, which assigns a label like 🙂 positive, 🙁 negative, or 😐 neutral to a sequence of text.</p> <p data-svelte-h="svelte-1aff4p7">This guide will show you how to:</p> <ol data-svelte-h="svelte-15fvapo"><li>Finetune <a href="https://huggingface.co/distilbert-base-uncased" rel="nofollow">DistilBERT</a> on the <a href="https://huggingface.co/datasets/imdb" rel="nofollow">IMDb</a> dataset to determine whether a movie review is positive or negative.</li> <li>Use your finetuned model for inference.</li></ol> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400">The task illustrated in this tutorial is supported by the following model architectures: <p data-svelte-h="svelte-1f5s7ya"><a href="../model_doc/albert">ALBERT</a>, <a href="../model_doc/bart">BART</a>, <a href="../model_doc/bert">BERT</a>, <a href="../model_doc/big_bird">BigBird</a>, <a href="../model_doc/bigbird_pegasus">BigBird-Pegasus</a>, <a href="../model_doc/biogpt">BioGpt</a>, <a href="../model_doc/bloom">BLOOM</a>, <a href="../model_doc/camembert">CamemBERT</a>, <a href="../model_doc/canine">CANINE</a>, <a href="../model_doc/code_llama">CodeLlama</a>, <a href="../model_doc/convbert">ConvBERT</a>, <a href="../model_doc/ctrl">CTRL</a>, <a href="../model_doc/data2vec-text">Data2VecText</a>, <a href="../model_doc/deberta">DeBERTa</a>, <a href="../model_doc/deberta-v2">DeBERTa-v2</a>, <a href="../model_doc/distilbert">DistilBERT</a>, <a href="../model_doc/electra">ELECTRA</a>, <a href="../model_doc/ernie">ERNIE</a>, <a href="../model_doc/ernie_m">ErnieM</a>, <a href="../model_doc/esm">ESM</a>, <a href="../model_doc/falcon">Falcon</a>, <a href="../model_doc/flaubert">FlauBERT</a>, <a 
href="../model_doc/fnet">FNet</a>, <a href="../model_doc/funnel">Funnel Transformer</a>, <a href="../model_doc/gpt-sw3">GPT-Sw3</a>, <a href="../model_doc/gpt2">OpenAI GPT-2</a>, <a href="../model_doc/gpt_bigcode">GPTBigCode</a>, <a href="../model_doc/gpt_neo">GPT Neo</a>, <a href="../model_doc/gpt_neox">GPT NeoX</a>, <a href="../model_doc/gptj">GPT-J</a>, <a href="../model_doc/ibert">I-BERT</a>, <a href="../model_doc/layoutlm">LayoutLM</a>, <a href="../model_doc/layoutlmv2">LayoutLMv2</a>, <a href="../model_doc/layoutlmv3">LayoutLMv3</a>, <a href="../model_doc/led">LED</a>, <a href="../model_doc/lilt">LiLT</a>, <a href="../model_doc/llama">LLaMA</a>, <a href="../model_doc/longformer">Longformer</a>, <a href="../model_doc/luke">LUKE</a>, <a href="../model_doc/markuplm">MarkupLM</a>, <a href="../model_doc/mbart">mBART</a>, <a href="../model_doc/mega">MEGA</a>, <a href="../model_doc/megatron-bert">Megatron-BERT</a>, <a href="../model_doc/mistral">Mistral</a>, <a href="../model_doc/mobilebert">MobileBERT</a>, <a href="../model_doc/mpnet">MPNet</a>, <a href="../model_doc/mpt">MPT</a>, <a href="../model_doc/mra">MRA</a>, <a href="../model_doc/mt5">MT5</a>, <a href="../model_doc/mvp">MVP</a>, <a href="../model_doc/nezha">Nezha</a>, <a href="../model_doc/nystromformer">Nyströmformer</a>, <a href="../model_doc/open-llama">OpenLlama</a>, <a href="../model_doc/openai-gpt">OpenAI GPT</a>, <a href="../model_doc/opt">OPT</a>, <a href="../model_doc/perceiver">Perceiver</a>, <a href="../model_doc/persimmon">Persimmon</a>, <a href="../model_doc/plbart">PLBart</a>, <a href="../model_doc/qdqbert">QDQBert</a>, <a href="../model_doc/reformer">Reformer</a>, <a href="../model_doc/rembert">RemBERT</a>, <a href="../model_doc/roberta">RoBERTa</a>, <a href="../model_doc/roberta-prelayernorm">RoBERTa-PreLayerNorm</a>, <a href="../model_doc/roc_bert">RoCBert</a>, <a href="../model_doc/roformer">RoFormer</a>, <a href="../model_doc/squeezebert">SqueezeBERT</a>, <a href="../model_doc/t5">T5</a>, <a href="../model_doc/tapas">TAPAS</a>, <a href="../model_doc/transfo-xl">Transformer-XL</a>, <a href="../model_doc/umt5">UMT5</a>, <a href="../model_doc/xlm">XLM</a>, <a href="../model_doc/xlm-roberta">XLM-RoBERTa</a>, <a href="../model_doc/xlm-roberta-xl">XLM-RoBERTa-XL</a>, <a href="../model_doc/xlnet">XLNet</a>, <a href="../model_doc/xmod">X-MOD</a>, <a href="../model_doc/yoso">YOSO</a></p></div> <p data-svelte-h="svelte-1c9nexd">Before you begin, make sure you have all the necessary libraries installed:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 
```bash
pip install transformers datasets evaluate
```

We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:

```py
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```
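If you aren't working in a notebook, the `login()` helper from `huggingface_hub` does the same thing from a plain Python session (a small aside, not part of the original guide; it assumes you already have an access token):

```py
>>> from huggingface_hub import login

>>> login()  # paste your Hugging Face access token when prompted
```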
d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> datasets <span class="hljs-keyword">import</span> load_dataset <span class="hljs-meta">&gt;&gt;&gt; </span>imdb = load_dataset(<span class="hljs-string">"imdb"</span>)</pre></div> <p data-svelte-h="svelte-1m91ua0">Then take a look at an example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>imdb[<span class="hljs-string">"test"</span>][<span class="hljs-number">0</span>] { <span class="hljs-string">"label"</span>: <span class="hljs-number">0</span>, <span class="hljs-string">"text"</span>: <span class="hljs-string">"I love sci-fi and am willing to put up with a lot. Sci-fi movies/TV are usually underfunded, under-appreciated and misunderstood. I tried to like this, I really did, but it is to good TV sci-fi as Babylon 5 is to Star Trek (the original). Silly prosthetics, cheap cardboard sets, stilted dialogues, CG that doesn't match the background, and painfully one-dimensional characters cannot be overcome with a 'sci-fi' setting. (I'm sure there are those of you out there who think Babylon 5 is good sci-fi TV. It's not. It's clichéd and uninspiring.) While US viewers might like emotion and character development, sci-fi is a genre that does not take itself seriously (cf. Star Trek). It may treat important issues, yet not as a serious philosophy. It's really difficult to care about the characters here as they are not simply foolish, just missing a spark of life. Their actions and reactions are wooden and predictable, often painful to watch. 
```py
>>> imdb["test"][0]
{
    "label": 0,
    "text": "I love sci-fi and am willing to put up with a lot. Sci-fi movies/TV are usually underfunded, under-appreciated and misunderstood. I tried to like this, I really did, but it is to good TV sci-fi as Babylon 5 is to Star Trek (the original). Silly prosthetics, cheap cardboard sets, stilted dialogues, CG that doesn't match the background, and painfully one-dimensional characters cannot be overcome with a 'sci-fi' setting. (I'm sure there are those of you out there who think Babylon 5 is good sci-fi TV. It's not. It's clichéd and uninspiring.) While US viewers might like emotion and character development, sci-fi is a genre that does not take itself seriously (cf. Star Trek). It may treat important issues, yet not as a serious philosophy. It's really difficult to care about the characters here as they are not simply foolish, just missing a spark of life. Their actions and reactions are wooden and predictable, often painful to watch. The makers of Earth KNOW it's rubbish as they have to always say \"Gene Roddenberry's Earth...\" otherwise people would not continue watching. Roddenberry's ashes must be turning in their orbit as this dull, cheap, poorly edited (watching it without advert breaks really brings this home) trudging Trabant of a show lumbers into space. Spoiler. So, kill off a main character. And then bring him back as another actor. Jeeez! Dallas all over again.",
}
```

There are two fields in this dataset:

- `text`: the movie review text.
- `label`: a value that is either `0` for a negative review or `1` for a positive review.
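You can also check how many examples each split contains; the counts below are those of the standard IMDb release (a quick orientation check, not part of the original guide):

```py
>>> {split: imdb[split].num_rows for split in imdb}
{'train': 25000, 'test': 25000, 'unsupervised': 50000}
```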
## Preprocess

The next step is to load a DistilBERT tokenizer to preprocess the `text` field:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
```

Create a preprocessing function to tokenize `text` and truncate sequences to be no longer than DistilBERT's maximum input length:

```py
>>> def preprocess_function(examples):
...     return tokenizer(examples["text"], truncation=True)
```
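As a quick sanity check (not part of the original guide), you can run the function on a couple of raw examples; DistilBERT's tokenizer returns `input_ids` and `attention_mask`, and truncation keeps each sequence within the model's maximum length:

```py
>>> sample = preprocess_function(imdb["test"][:2])  # tokenize the first two test reviews
>>> list(sample.keys())
['input_ids', 'attention_mask']
>>> len(sample["input_ids"][0]) <= tokenizer.model_max_length
True
```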
To apply the preprocessing function over the entire dataset, use 🤗 Datasets [map](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.map) function. You can speed up `map` by setting `batched=True` to process multiple elements of the dataset at once:

```py
tokenized_imdb = imdb.map(preprocess_function, batched=True)
```

Now create a batch of examples using [DataCollatorWithPadding](/docs/transformers/v4.34.0/en/main_classes/data_collator#transformers.DataCollatorWithPadding). It's more efficient to *dynamically pad* the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length.
**Pytorch**

```py
>>> from transformers import DataCollatorWithPadding

>>> data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
```
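To see the dynamic padding in action, you can call the collator on a handful of tokenized examples and inspect the resulting batch; the tensors are padded only to the longest sequence in that batch, not to the model maximum (an illustrative sketch, not part of the original guide):

```py
>>> features = [
...     {
...         "input_ids": tokenized_imdb["train"][i]["input_ids"],
...         "attention_mask": tokenized_imdb["train"][i]["attention_mask"],
...     }
...     for i in range(4)
... ]
>>> batch = data_collator(features)
>>> batch["input_ids"].shape  # (4, length of the longest review in this small batch)
```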
**TensorFlow**

```py
>>> from transformers import DataCollatorWithPadding

>>> data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf")
```
56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-sh8s6s">Evaluate</span></h2> <p data-svelte-h="svelte-j1ipe9">Including a metric during training is often helpful for evaluating your model’s performance. You can quickly load a evaluation method with the 🤗 <a href="https://huggingface.co/docs/evaluate/index" rel="nofollow">Evaluate</a> library. For this task, load the <a href="https://huggingface.co/spaces/evaluate-metric/accuracy" rel="nofollow">accuracy</a> metric (see the 🤗 Evaluate <a href="https://huggingface.co/docs/evaluate/a_quick_tour" rel="nofollow">quick tour</a> to learn more about how to load and compute a metric):</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> evaluate <span class="hljs-meta">&gt;&gt;&gt; </span>accuracy = evaluate.load(<span class="hljs-string">"accuracy"</span>)</pre></div> <p data-svelte-h="svelte-14oy2j6">Then create a function that passes your predictions and labels to <a href="https://huggingface.co/docs/evaluate/v0.4.0/en/package_reference/main_classes#evaluate.EvaluationModule.compute" rel="nofollow">compute</a> to calculate the accuracy:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre 
class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">def</span> <span class="hljs-title function_">compute_metrics</span>(<span class="hljs-params">eval_pred</span>): <span class="hljs-meta">... </span> predictions, labels = eval_pred <span class="hljs-meta">... </span> predictions = np.argmax(predictions, axis=<span class="hljs-number">1</span>) <span class="hljs-meta">... </span> <span class="hljs-keyword">return</span> accuracy.compute(predictions=predictions, references=labels)</pre></div> <p data-svelte-h="svelte-183aynn">Your <code>compute_metrics</code> function is ready to go now, and you’ll return to it when you setup your training.</p> <h2 class="relative group"><a id="train" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#train"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-5arm0l">Train</span></h2> <p data-svelte-h="svelte-18c6io4">Before you start training your model, create a map of the expected ids to their labels with <code>id2label</code> and <code>label2id</code>:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>id2label = {<span class="hljs-number">0</span>: <span class="hljs-string">"NEGATIVE"</span>, <span class="hljs-number">1</span>: <span class="hljs-string">"POSITIVE"</span>} <span class="hljs-meta">&gt;&gt;&gt; </span>label2id = {<span class="hljs-string">"NEGATIVE"</span>: <span 
class="hljs-number">0</span>, <span class="hljs-string">"POSITIVE"</span>: <span class="hljs-number">1</span>}</pre></div> <div class="space-y-10 py-6 2xl:py-8 2xl:-mx-4"><div class="border border-gray-200 rounded-xl px-4 relative"><div class="flex h-[22px] mt-[-12.5px] justify-between leading-none"><div class="flex px-1 items-center space-x-1 bg-white dark:bg-gray-950"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><defs><clipPath id="a"><rect x="3.05" y="0.5" width="25.73" height="31" fill="none"></rect></clipPath></defs><g clip-path="url(#a)"><path d="M24.94,9.51a12.81,12.81,0,0,1,0,18.16,12.68,12.68,0,0,1-18,0,12.81,12.81,0,0,1,0-18.16l9-9V5l-.84.83-6,6a9.58,9.58,0,1,0,13.55,0ZM20.44,9a1.68,1.68,0,1,1,1.67-1.67A1.68,1.68,0,0,1,20.44,9Z" fill="#ee4c2c"></path></g></svg> <span>Pytorch</span></div> <div class="cursor-pointer flex items-center justify-center space-x-1 text-sm px-2 bg-white dark:bg-gray-950 hover:underline leading-none"><svg class="" width="0.9em" height="0.9em" viewBox="0 0 10 9" fill="currentColor" xmlns="http://www.w3.org/2000/svg"><path d="M1.39125 1.9725L0.0883333 0.669997L0.677917 0.0804138L8.9275 8.33041L8.33792 8.91958L6.95875 7.54041C6.22592 8.00523 5.37572 8.25138 4.50792 8.25C2.26125 8.25 0.392083 6.63333 0 4.5C0.179179 3.52946 0.667345 2.64287 1.39167 1.9725H1.39125ZM5.65667 6.23833L5.04667 5.62833C4.81335 5.73996 4.55116 5.77647 4.29622 5.73282C4.04129 5.68918 3.80617 5.56752 3.62328 5.38463C3.44039 5.20175 3.31874 4.96663 3.27509 4.71169C3.23144 4.45676 3.26795 4.19456 3.37958 3.96125L2.76958 3.35125C2.50447 3.75187 2.38595 4.2318 2.4341 4.70978C2.48225 5.18777 2.6941 5.63442 3.0338 5.97411C3.37349 6.31381 3.82015 6.52567 4.29813 6.57382C4.77611 6.62197 5.25605 6.50345 5.65667 6.23833ZM2.83042 1.06666C3.35 0.862497 3.91625 0.749997 4.50792 0.749997C6.75458 0.749997 8.62375 2.36666 9.01583 4.5C8.88816 5.19404 8.60119 5.84899 8.1775 6.41333L6.56917 4.805C6.61694 4.48317 6.58868 4.15463 6.48664 3.84569C6.3846 3.53675 6.21162 3.256 5.98156 3.02594C5.7515 2.79588 5.47075 2.6229 5.16181 2.52086C4.85287 2.41882 4.52433 2.39056 4.2025 2.43833L2.83042 1.06708V1.06666Z" fill="currentColor"></path></svg> <span>Hide Pytorch content</span></div></div> <div class="framework-content"><div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-1ufp0ay">If you aren’t familiar with finetuning a model with the <a href="/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer">Trainer</a>, take a look at the basic tutorial <a href="../training#train-with-pytorch-trainer">here</a>!</p></div> <p data-svelte-h="svelte-iwtp6q">You’re ready to start training your model now! 
You're ready to start training your model now! Load DistilBERT with [AutoModelForSequenceClassification](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoModelForSequenceClassification) along with the number of expected labels, and the label mappings:

```py
>>> from transformers import AutoModelForSequenceClassification, TrainingArguments, Trainer

>>> model = AutoModelForSequenceClassification.from_pretrained(
...     "distilbert-base-uncased", num_labels=2, id2label=id2label, label2id=label2id
... )
```

At this point, only three steps remain:
1. Define your training hyperparameters in [TrainingArguments](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.TrainingArguments). The only required parameter is `output_dir` which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) will evaluate the accuracy and save the training checkpoint.
2. Pass the training arguments to [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) along with the model, dataset, tokenizer, data collator, and `compute_metrics` function.
3. Call [train()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.train) to finetune your model.

```py
>>> training_args = TrainingArguments(
...     output_dir="my_awesome_model",
...     learning_rate=2e-5,
...     per_device_train_batch_size=16,
...     per_device_eval_batch_size=16,
...     num_train_epochs=2,
...     weight_decay=0.01,
...     evaluation_strategy="epoch",
...     save_strategy="epoch",
...     load_best_model_at_end=True,
...     push_to_hub=True,
... )

>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=tokenized_imdb["train"],
...     eval_dataset=tokenized_imdb["test"],
...     tokenizer=tokenizer,
...     data_collator=data_collator,
...     compute_metrics=compute_metrics,
... )

>>> trainer.train()
```
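Once training has finished, you can optionally re-score the evaluation split; `evaluate()` returns the metrics produced by your `compute_metrics` function with an `eval_` prefix (a brief aside, not part of the original guide):

```py
>>> metrics = trainer.evaluate()
>>> metrics["eval_accuracy"]  # accuracy on tokenized_imdb["test"], computed by compute_metrics
```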
</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>trainer.train()</pre></div> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-8kgkso"><a href="/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer">Trainer</a> applies dynamic padding by default when you pass <code>tokenizer</code> to it. In this case, you don’t need to specify a data collator explicitly.</p></div> <p data-svelte-h="svelte-cv8z08">Once training is completed, share your model to the Hub with the <a href="/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.push_to_hub">push_to_hub()</a> method so everyone can use your model:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>trainer.push_to_hub()</pre></div></div></div> <div class="border border-gray-200 rounded-xl px-4 relative"><div class="flex h-[22px] mt-[-12.5px] justify-between leading-none"><div class="flex px-1 items-center space-x-1 bg-white dark:bg-gray-950"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="0.94em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 274"><path d="M145.726 42.065v42.07l72.861 42.07v-42.07l-72.86-42.07zM0 84.135v42.07l36.43 21.03V105.17L0 84.135zm109.291 21.035l-36.43 21.034v126.2l36.43 21.035v-84.135l36.435 21.035v-42.07l-36.435-21.034V105.17z" fill="#E55B2D"></path><path d="M145.726 42.065L36.43 105.17v42.065l72.861-42.065v42.065l36.435-21.03v-84.14zM255.022 63.1l-36.435 21.035v42.07l36.435-21.035V63.1zm-72.865 84.135l-36.43 21.035v42.07l36.43-21.036v-42.07zm-36.43 63.104l-36.436-21.035v84.135l36.435-21.035V210.34z" fill="#ED8E24"></path><path d="M145.726 0L0 84.135l36.43 21.035l109.296-63.105l72.861 42.07L255.022 63.1L145.726 0zm0 126.204l-36.435 21.03l36.435 21.036l36.43-21.035l-36.43-21.03z" fill="#F8BF3C"></path></svg> <span>TensorFlow</span></div> <div class="cursor-pointer flex items-center justify-center space-x-1 text-sm px-2 bg-white dark:bg-gray-950 hover:underline leading-none"><svg class="" width="0.9em" height="0.9em" viewBox="0 0 10 9" fill="currentColor" xmlns="http://www.w3.org/2000/svg"><path d="M1.39125 
**TensorFlow**

If you aren't familiar with finetuning a model with Keras, take a look at the basic tutorial [here](../training#train-a-tensorflow-model-with-keras)!

To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:
```py
>>> from transformers import create_optimizer
>>> import tensorflow as tf

>>> batch_size = 16
>>> num_epochs = 5
>>> batches_per_epoch = len(tokenized_imdb["train"]) // batch_size
>>> total_train_steps = int(batches_per_epoch * num_epochs)
>>> optimizer, schedule = create_optimizer(init_lr=2e-5, num_warmup_steps=0, num_train_steps=total_train_steps)
```

Then you can load DistilBERT with [TFAutoModelForSequenceClassification](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.TFAutoModelForSequenceClassification) along with the number of expected labels, and the label mappings:
</span>)</pre></div> <p data-svelte-h="svelte-qmwuyd">Convert your datasets to the <code>tf.data.Dataset</code> format with <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel.prepare_tf_dataset">prepare_tf_dataset()</a>:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>tf_train_set = model.prepare_tf_dataset( <span class="hljs-meta">... </span> tokenized_imdb[<span class="hljs-string">"train"</span>], <span class="hljs-meta">... </span> shuffle=<span class="hljs-literal">True</span>, <span class="hljs-meta">... </span> batch_size=<span class="hljs-number">16</span>, <span class="hljs-meta">... </span> collate_fn=data_collator, <span class="hljs-meta">... </span>) <span class="hljs-meta">&gt;&gt;&gt; </span>tf_validation_set = model.prepare_tf_dataset( <span class="hljs-meta">... </span> tokenized_imdb[<span class="hljs-string">"test"</span>], <span class="hljs-meta">... </span> shuffle=<span class="hljs-literal">False</span>, <span class="hljs-meta">... </span> batch_size=<span class="hljs-number">16</span>, <span class="hljs-meta">... </span> collate_fn=data_collator, <span class="hljs-meta">... </span>)</pre></div> <p data-svelte-h="svelte-17cxx5e">Configure the model for training with <a href="https://keras.io/api/models/model_training_apis/#compile-method" rel="nofollow"><code>compile</code></a>. 
Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method). Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to:

```py
>>> import tensorflow as tf

>>> model.compile(optimizer=optimizer)  # No loss argument!
```
The last two things to set up before you start training are to compute the accuracy from the predictions, and to provide a way to push your model to the Hub. Both are done by using [Keras callbacks](../main_classes/keras_callbacks).

Pass your `compute_metrics` function to [KerasMetricCallback](/docs/transformers/v4.34.0/en/main_classes/keras_callbacks#transformers.KerasMetricCallback):

```py
>>> from transformers.keras_callbacks import KerasMetricCallback

>>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set)
```

Specify where to push your model and tokenizer in the [PushToHubCallback](/docs/transformers/v4.34.0/en/main_classes/keras_callbacks#transformers.PushToHubCallback):
```py
>>> from transformers.keras_callbacks import PushToHubCallback

>>> push_to_hub_callback = PushToHubCallback(
...     output_dir="my_awesome_model",
...     tokenizer=tokenizer,
... )
```

Then bundle your callbacks together:

```py
>>> callbacks = [metric_callback, push_to_hub_callback]
```

Finally, you're ready to start training your model! Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training and validation datasets, the number of epochs, and your callbacks to finetune the model:

```py
>>> model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=3, callbacks=callbacks)
```

Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!
border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-m6bho8">For a more in-depth example of how to finetune a model for text classification, take a look at the corresponding <a href="https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb" rel="nofollow">PyTorch notebook</a> or <a href="https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb" rel="nofollow">TensorFlow notebook</a>.</p></div> <h2 class="relative group"><a id="inference" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#inference"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-199uz7g">Inference</span></h2> <p data-svelte-h="svelte-633ppb">Great, now that you’ve finetuned a model, you can use it for inference!</p> <p data-svelte-h="svelte-o1jbfg">Grab some text you’d like to run inference on:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>text = <span class="hljs-string">"This was a masterpiece. Not completely faithful to the books, but enthralling from beginning to end. Might be my favorite of the three."</span></pre></div> <p data-svelte-h="svelte-1kkp80l">The simplest way to try out your finetuned model for inference is to use it in a <a href="/docs/transformers/v4.34.0/en/main_classes/pipelines#transformers.pipeline">pipeline()</a>. 
Instantiate a <code>pipeline</code> for sentiment analysis with your model, and pass your text to it:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> pipeline <span class="hljs-meta">&gt;&gt;&gt; </span>classifier = pipeline(<span class="hljs-string">"sentiment-analysis"</span>, model=<span class="hljs-string">"stevhliu/my_awesome_model"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>classifier(text) [{<span class="hljs-string">'label'</span>: <span class="hljs-string">'POSITIVE'</span>, <span class="hljs-string">'score'</span>: <span class="hljs-number">0.9994940757751465</span>}]</pre></div> <p data-svelte-h="svelte-1njl8vm">You can also manually replicate the results of the <code>pipeline</code> if you’d like:</p> <div class="space-y-10 py-6 2xl:py-8 2xl:-mx-4"><div class="border border-gray-200 rounded-xl px-4 relative"><div class="flex h-[22px] mt-[-12.5px] justify-between leading-none"><div class="flex px-1 items-center space-x-1 bg-white dark:bg-gray-950"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><defs><clipPath id="a"><rect x="3.05" y="0.5" width="25.73" height="31" fill="none"></rect></clipPath></defs><g clip-path="url(#a)"><path d="M24.94,9.51a12.81,12.81,0,0,1,0,18.16,12.68,12.68,0,0,1-18,0,12.81,12.81,0,0,1,0-18.16l9-9V5l-.84.83-6,6a9.58,9.58,0,1,0,13.55,0ZM20.44,9a1.68,1.68,0,1,1,1.67-1.67A1.68,1.68,0,0,1,20.44,9Z" fill="#ee4c2c"></path></g></svg> <span>Pytorch</span></div> <div class="cursor-pointer flex items-center justify-center space-x-1 text-sm px-2 bg-white dark:bg-gray-950 hover:underline leading-none"><svg class="" width="0.9em" height="0.9em" viewBox="0 0 10 9" fill="currentColor" xmlns="http://www.w3.org/2000/svg"><path d="M1.39125 1.9725L0.0883333 0.669997L0.677917 0.0804138L8.9275 8.33041L8.33792 8.91958L6.95875 7.54041C6.22592 8.00523 5.37572 8.25138 4.50792 8.25C2.26125 8.25 0.392083 6.63333 0 4.5C0.179179 3.52946 0.667345 2.64287 1.39167 1.9725H1.39125ZM5.65667 6.23833L5.04667 5.62833C4.81335 5.73996 4.55116 5.77647 4.29622 5.73282C4.04129 5.68918 3.80617 5.56752 3.62328 5.38463C3.44039 5.20175 3.31874 4.96663 3.27509 
4.71169C3.23144 4.45676 3.26795 4.19456 3.37958 3.96125L2.76958 3.35125C2.50447 3.75187 2.38595 4.2318 2.4341 4.70978C2.48225 5.18777 2.6941 5.63442 3.0338 5.97411C3.37349 6.31381 3.82015 6.52567 4.29813 6.57382C4.77611 6.62197 5.25605 6.50345 5.65667 6.23833ZM2.83042 1.06666C3.35 0.862497 3.91625 0.749997 4.50792 0.749997C6.75458 0.749997 8.62375 2.36666 9.01583 4.5C8.88816 5.19404 8.60119 5.84899 8.1775 6.41333L6.56917 4.805C6.61694 4.48317 6.58868 4.15463 6.48664 3.84569C6.3846 3.53675 6.21162 3.256 5.98156 3.02594C5.7515 2.79588 5.47075 2.6229 5.16181 2.52086C4.85287 2.41882 4.52433 2.39056 4.2025 2.43833L2.83042 1.06708V1.06666Z" fill="currentColor"></path></svg> <span>Hide Pytorch content</span></div></div> <div class="framework-content"><p data-svelte-h="svelte-1qcz1wr">Tokenize the text and return PyTorch tensors:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"stevhliu/my_awesome_model"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(text, return_tensors=<span class="hljs-string">"pt"</span>)</pre></div> <p data-svelte-h="svelte-f3g043">Pass your inputs to the model and return the <code>logits</code>:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 
transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoModelForSequenceClassification <span class="hljs-meta">&gt;&gt;&gt; </span>model = AutoModelForSequenceClassification.from_pretrained(<span class="hljs-string">"stevhliu/my_awesome_model"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">with</span> torch.no_grad(): <span class="hljs-meta">... </span> logits = model(**inputs).logits</pre></div> <p data-svelte-h="svelte-6mgrol">Get the class with the highest probability, and use the model’s <code>id2label</code> mapping to convert it to a text label:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>predicted_class_id = logits.argmax().item() <span class="hljs-meta">&gt;&gt;&gt; </span>model.config.id2label[predicted_class_id] <span class="hljs-string">'POSITIVE'</span></pre></div></div></div> <div class="border border-gray-200 rounded-xl px-4 relative"><div class="flex h-[22px] mt-[-12.5px] justify-between leading-none"><div class="flex px-1 items-center space-x-1 bg-white dark:bg-gray-950"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="0.94em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 274"><path d="M145.726 42.065v42.07l72.861 42.07v-42.07l-72.86-42.07zM0 84.135v42.07l36.43 21.03V105.17L0 84.135zm109.291 21.035l-36.43 21.034v126.2l36.43 21.035v-84.135l36.435 21.035v-42.07l-36.435-21.034V105.17z" fill="#E55B2D"></path><path d="M145.726 42.065L36.43 105.17v42.065l72.861-42.065v42.065l36.435-21.03v-84.14zM255.022 63.1l-36.435 21.035v42.07l36.435-21.035V63.1zm-72.865 84.135l-36.43 21.035v42.07l36.43-21.036v-42.07zm-36.43 63.104l-36.436-21.035v84.135l36.435-21.035V210.34z" fill="#ED8E24"></path><path d="M145.726 0L0 84.135l36.43 21.035l109.296-63.105l72.861 42.07L255.022 63.1L145.726 0zm0 126.204l-36.435 21.03l36.435 21.036l36.43-21.035l-36.43-21.03z" fill="#F8BF3C"></path></svg> <span>TensorFlow</span></div> <div class="cursor-pointer flex items-center justify-center space-x-1 text-sm px-2 bg-white 
dark:bg-gray-950 hover:underline leading-none"><svg class="" width="0.9em" height="0.9em" viewBox="0 0 10 9" fill="currentColor" xmlns="http://www.w3.org/2000/svg"><path d="M1.39125 1.9725L0.0883333 0.669997L0.677917 0.0804138L8.9275 8.33041L8.33792 8.91958L6.95875 7.54041C6.22592 8.00523 5.37572 8.25138 4.50792 8.25C2.26125 8.25 0.392083 6.63333 0 4.5C0.179179 3.52946 0.667345 2.64287 1.39167 1.9725H1.39125ZM5.65667 6.23833L5.04667 5.62833C4.81335 5.73996 4.55116 5.77647 4.29622 5.73282C4.04129 5.68918 3.80617 5.56752 3.62328 5.38463C3.44039 5.20175 3.31874 4.96663 3.27509 4.71169C3.23144 4.45676 3.26795 4.19456 3.37958 3.96125L2.76958 3.35125C2.50447 3.75187 2.38595 4.2318 2.4341 4.70978C2.48225 5.18777 2.6941 5.63442 3.0338 5.97411C3.37349 6.31381 3.82015 6.52567 4.29813 6.57382C4.77611 6.62197 5.25605 6.50345 5.65667 6.23833ZM2.83042 1.06666C3.35 0.862497 3.91625 0.749997 4.50792 0.749997C6.75458 0.749997 8.62375 2.36666 9.01583 4.5C8.88816 5.19404 8.60119 5.84899 8.1775 6.41333L6.56917 4.805C6.61694 4.48317 6.58868 4.15463 6.48664 3.84569C6.3846 3.53675 6.21162 3.256 5.98156 3.02594C5.7515 2.79588 5.47075 2.6229 5.16181 2.52086C4.85287 2.41882 4.52433 2.39056 4.2025 2.43833L2.83042 1.06708V1.06666Z" fill="currentColor"></path></svg> <span>Hide TensorFlow content</span></div></div> <div class="framework-content"><p data-svelte-h="svelte-s1qr7b">Tokenize the text and return TensorFlow tensors:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"stevhliu/my_awesome_model"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(text, return_tensors=<span class="hljs-string">"tf"</span>)</pre></div> <p data-svelte-h="svelte-f3g043">Pass your inputs to the model and return the <code>logits</code>:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" 
width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> TFAutoModelForSequenceClassification <span class="hljs-meta">&gt;&gt;&gt; </span>model = TFAutoModelForSequenceClassification.from_pretrained(<span class="hljs-string">"stevhliu/my_awesome_model"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>logits = model(**inputs).logits</pre></div> <p data-svelte-h="svelte-6mgrol">Get the class with the highest probability, and use the model’s <code>id2label</code> mapping to convert it to a text label:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>predicted_class_id = <span class="hljs-built_in">int</span>(tf.math.argmax(logits, axis=-<span class="hljs-number">1</span>)[<span class="hljs-number">0</span>]) <span class="hljs-meta">&gt;&gt;&gt; </span>model.config.id2label[predicted_class_id] <span class="hljs-string">'POSITIVE'</span></pre></div></div></div> </div> <p></p> <div id="svelte-announcer" aria-live="assertive" aria-atomic="true" style="position: absolute; left: 0px; top: 0px; clip: rect(0px, 0px, 0px, 0px); clip-path: inset(50%); overflow: hidden; white-space: nowrap; width: 1px; height: 1px;"></div></div> <div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/llm_tutorial" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>Generation with 
2023-10-05T13:33:42.835Z
https://huggingface.co/docs/transformers/v4.34.0/en/migration
The documentation page MIGRATION doesn’t exist in v4.34.0, but exists on the main version. Click [here](/docs/transformers/main/en/migration) to redirect to the main version of the documentation.
2023-10-05T13:33:42.879Z
https://huggingface.co/docs/transformers/v4.34.0/en/hf.co/models
The documentation page HF.CO/MODELS doesn’t exist in v4.34.0, but exists on the main version. Click [here](/docs/transformers/main/en/hf.co/models) to redirect to the main version of the documentation.
2023-10-05T13:33:43.195Z
https://huggingface.co/docs/transformers/v4.34.0/en/writing-documentation
The documentation page WRITING-DOCUMENTATION doesn’t exist in v4.34.0, but exists on the main version. Click [here](/docs/transformers/main/en/writing-documentation) to redirect to the main version of the documentation.
2023-10-05T13:33:43.214Z
Audio classification
https://huggingface.co/docs/transformers/v4.34.0/en/tasks/audio_classification
# Audio classification

Audio classification - just like with text - assigns a class label output from the input data. The only difference is that instead of text inputs, you have raw audio waveforms. Some practical applications of audio classification include identifying speaker intent, language classification, and even animal species by their sounds.

This guide will show you how to:

1. Finetune [Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base) on the [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) dataset to classify speaker intent.
2. Use your finetuned model for inference.

The task illustrated in this tutorial is supported by the following model architectures: [Audio Spectrogram Transformer](../model_doc/audio-spectrogram-transformer), [Data2VecAudio](../model_doc/data2vec-audio), [Hubert](../model_doc/hubert), [SEW](../model_doc/sew), [SEW-D](../model_doc/sew-d), [UniSpeech](../model_doc/unispeech), [UniSpeechSat](../model_doc/unispeech-sat), [Wav2Vec2](../model_doc/wav2vec2), [Wav2Vec2-Conformer](../model_doc/wav2vec2-conformer), [WavLM](../model_doc/wavlm), [Whisper](../model_doc/whisper)

Before you begin, make sure you have all the necessary libraries installed:

```
pip install transformers datasets evaluate
```

We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:

```
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```

## Load MInDS-14 dataset

Start by loading the MInDS-14 dataset from the 🤗 Datasets library:

```
>>> from datasets import load_dataset, Audio

>>> minds = load_dataset("PolyAI/minds14", name="en-US", split="train")
```

Split the dataset's `train` split into a smaller train and test set with the [train\_test\_split](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.train_test_split) method. This'll give you a chance to experiment and make sure everything works before spending more time on the full dataset.

```
>>> minds = minds.train_test_split(test_size=0.2)
```

Then take a look at the dataset:

```
>>> minds
DatasetDict({
    train: Dataset({
        features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'],
        num_rows: 450
    })
    test: Dataset({
        features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'],
        num_rows: 113
    })
})
```

While the dataset contains a lot of useful information, like `lang_id` and `english_transcription`, you'll focus on the `audio` and `intent_class` in this guide. Remove the other columns with the [remove\_columns](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.remove_columns) method:

```
>>> minds = minds.remove_columns(["path", "transcription", "english_transcription", "lang_id"])
```

Take a look at an example now:

```
>>> minds["train"][0]
{'audio': {'array': array([ 0. , 0. , 0. , ..., -0.00048828, -0.00024414, -0.00024414], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602b9a5fbb1e6d0fbce91f52.wav', 'sampling_rate': 8000}, 'intent_class': 2}
```

There are two fields:

- `audio`: a 1-dimensional `array` of the speech signal that must be called to load and resample the audio file.
- `intent_class`: represents the class id of the speaker's intent.
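Since `audio` is just a 1-dimensional array paired with its `sampling_rate`, you can, for example, recover a clip's duration directly from these two fields. This is a small sketch based on the example above (the variable names are only illustrative):

```
>>> audio = minds["train"][0]["audio"]
>>> duration_seconds = len(audio["array"]) / audio["sampling_rate"]  # number of samples divided by samples per second
```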
To make it easier for the model to get the label name from the label id, create a dictionary that maps the label name to an integer and vice versa:

```
>>> labels = minds["train"].features["intent_class"].names
>>> label2id, id2label = dict(), dict()
>>> for i, label in enumerate(labels):
...     label2id[label] = str(i)
...     id2label[str(i)] = label
```

Now you can convert the label id to a label name:

```
>>> id2label[str(2)]
'app_error'
```

## Preprocess

The next step is to load a Wav2Vec2 feature extractor to process the audio signal:

```
>>> from transformers import AutoFeatureExtractor

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")
```

The MInDS-14 dataset has a sampling rate of 8kHz (you can find this information in its [dataset card](https://huggingface.co/datasets/PolyAI/minds14)), which means you'll need to resample the dataset to 16kHz to use the pretrained Wav2Vec2 model:

```
>>> minds = minds.cast_column("audio", Audio(sampling_rate=16_000))
>>> minds["train"][0]
{'audio': {'array': array([ 2.2098757e-05, 4.6582241e-05, -2.2803260e-05, ..., -2.8419291e-04, -2.3305941e-04, -1.1425107e-04], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602b9a5fbb1e6d0fbce91f52.wav', 'sampling_rate': 16000}, 'intent_class': 2}
```

Now create a preprocessing function that:

1. Calls the `audio` column to load, and if necessary, resample the audio file.
2. Checks if the sampling rate of the audio file matches the sampling rate of the audio data a model was pretrained with. You can find this information in the Wav2Vec2 [model card](https://huggingface.co/facebook/wav2vec2-base).
3. Sets a maximum input length to batch longer inputs without truncating them.

```
>>> def preprocess_function(examples):
...     audio_arrays = [x["array"] for x in examples["audio"]]
...     inputs = feature_extractor(
...         audio_arrays, sampling_rate=feature_extractor.sampling_rate, max_length=16000, truncation=True
...     )
...     return inputs
```

To apply the preprocessing function over the entire dataset, use the 🤗 Datasets [map](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.map) function. You can speed up `map` by setting `batched=True` to process multiple elements of the dataset at once. Remove the columns you don't need, and rename `intent_class` to `label` because that's the name the model expects:

```
>>> encoded_minds = minds.map(preprocess_function, remove_columns="audio", batched=True)
>>> encoded_minds = encoded_minds.rename_column("intent_class", "label")
```

## Evaluate

Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load the [accuracy](https://huggingface.co/spaces/evaluate-metric/accuracy) metric (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric):

```
>>> import evaluate

>>> accuracy = evaluate.load("accuracy")
```

Then create a function that passes your predictions and labels to [compute](https://huggingface.co/docs/evaluate/v0.4.0/en/package_reference/main_classes#evaluate.EvaluationModule.compute) to calculate the accuracy:

```
>>> import numpy as np

>>> def compute_metrics(eval_pred):
...     predictions = np.argmax(eval_pred.predictions, axis=1)
...     return accuracy.compute(predictions=predictions, references=eval_pred.label_ids)
```

Your `compute_metrics` function is ready to go now, and you'll return to it when you set up your training.

## Train

If you aren't familiar with finetuning a model with the [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer), take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!

You're ready to start training your model now! Load Wav2Vec2 with [AutoModelForAudioClassification](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoModelForAudioClassification) along with the number of expected labels, and the label mappings:

```
>>> from transformers import AutoModelForAudioClassification, TrainingArguments, Trainer

>>> num_labels = len(id2label)
>>> model = AutoModelForAudioClassification.from_pretrained(
...     "facebook/wav2vec2-base", num_labels=num_labels, label2id=label2id, id2label=id2label
... )
```

At this point, only three steps remain:

1. Define your training hyperparameters in [TrainingArguments](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.TrainingArguments). The only required parameter is `output_dir`, which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) will evaluate the accuracy and save the training checkpoint.
2. Pass the training arguments to [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) along with the model, dataset, tokenizer, data collator, and `compute_metrics` function.
3. Call [train()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.train) to finetune your model.

```
>>> training_args = TrainingArguments(
...     output_dir="my_awesome_mind_model",
...     evaluation_strategy="epoch",
...     save_strategy="epoch",
...     learning_rate=3e-5,
...     per_device_train_batch_size=32,
...     gradient_accumulation_steps=4,
...     per_device_eval_batch_size=32,
...     num_train_epochs=10,
...     warmup_ratio=0.1,
...     logging_steps=10,
...     load_best_model_at_end=True,
...     metric_for_best_model="accuracy",
...     push_to_hub=True,
... )

>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=encoded_minds["train"],
...     eval_dataset=encoded_minds["test"],
...     tokenizer=feature_extractor,
...     compute_metrics=compute_metrics,
... )

>>> trainer.train()
```

Once training is completed, share your model on the Hub with the [push\_to\_hub()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.push_to_hub) method so everyone can use your model:

```
>>> trainer.push_to_hub()
```

For a more in-depth example of how to finetune a model for audio classification, take a look at the corresponding [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/audio_classification.ipynb).

## Inference

Great, now that you've finetuned a model, you can use it for inference!

Load an audio file you'd like to run inference on. Remember to resample the sampling rate of the audio file to match the sampling rate of the model if you need to!
```
>>> from datasets import load_dataset, Audio

>>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")
>>> dataset = dataset.cast_column("audio", Audio(sampling_rate=16000))
>>> sampling_rate = dataset.features["audio"].sampling_rate
>>> audio_file = dataset[0]["audio"]["path"]
```

The simplest way to try out your finetuned model for inference is to use it in a [pipeline()](/docs/transformers/v4.34.0/en/main_classes/pipelines#transformers.pipeline). Instantiate a `pipeline` for audio classification with your model, and pass your audio file to it:

```
>>> from transformers import pipeline

>>> classifier = pipeline("audio-classification", model="stevhliu/my_awesome_minds_model")
>>> classifier(audio_file)
[
    {'score': 0.09766869246959686, 'label': 'cash_deposit'},
    {'score': 0.07998877018690109, 'label': 'app_error'},
    {'score': 0.0781070664525032, 'label': 'joint_account'},
    {'score': 0.07667109370231628, 'label': 'pay_bill'},
    {'score': 0.0755252093076706, 'label': 'balance'}
]
```

You can also manually replicate the results of the `pipeline` if you'd like:

Load a feature extractor to preprocess the audio file and return the `input` as PyTorch tensors:

```
>>> from transformers import AutoFeatureExtractor

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("stevhliu/my_awesome_minds_model")
>>> inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
```

Pass your inputs to the model and return the logits:

```
>>> import torch
>>> from transformers import AutoModelForAudioClassification

>>> model = AutoModelForAudioClassification.from_pretrained("stevhliu/my_awesome_minds_model")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
```

Get the class with the highest probability, and use the model's `id2label` mapping to convert it to a label:

```
>>> predicted_class_ids = torch.argmax(logits).item()
>>> predicted_label = model.config.id2label[predicted_class_ids]
>>> predicted_label
'cash_deposit'
```
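To mirror the ranked list the `pipeline` returns, you can also normalize the logits with a softmax and take the highest-scoring intents. This is a minimal sketch that reuses the `logits` and `model` from above (showing 5 labels is just an illustrative choice):

```
>>> probabilities = torch.softmax(logits, dim=-1)[0]  # class probabilities for the single example
>>> top5 = torch.topk(probabilities, k=5)
>>> [{"score": score.item(), "label": model.config.id2label[idx.item()]} for score, idx in zip(top5.values, top5.indices)]
```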
scalability&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Overview&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;performance&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/performance&quot;},{&quot;title&quot;:&quot;Efficient training techniques&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Methods and tools for efficient training on a single GPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_gpu_one&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_gpu_one&quot;},{&quot;title&quot;:&quot;Multiple GPUs and parallelism&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_gpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_gpu_many&quot;},{&quot;title&quot;:&quot;Efficient training on CPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_cpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_cpu&quot;},{&quot;title&quot;:&quot;Distributed CPU training&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_cpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_cpu_many&quot;},{&quot;title&quot;:&quot;Training on TPUs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_tpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_tpu&quot;},{&quot;title&quot;:&quot;Training on TPU with TensorFlow&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_tpu_tf&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_tpu_tf&quot;},{&quot;title&quot;:&quot;Training on Specialized Hardware&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_special&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_special&quot;},{&quot;title&quot;:&quot;Custom hardware for training&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_hardware&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_hardware&quot;},{&quot;title&quot;:&quot;Hyperparameter Search using Trainer API&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;hpo_train&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/hpo_train&quot;}]},{&quot;title&quot;:&quot;Optimizing inference&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Inference on CPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_cpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_cpu&quot;},{&quot;title&quot;:&quot;Inference on one GPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_gpu_one&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_gpu_one&quot;},{&quot;title&quot;:&quot;Inference on many GPUs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_gpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_gpu_many&quot;},{&quot;title&quot;:&quot;Inference on Specialized Hardware&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_special&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_special&quot;}]},{&quot;title&quot;:&quot;Instantiating a big 
model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;big_models&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/big_models&quot;},{&quot;title&quot;:&quot;Troubleshooting&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;debugging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/debugging&quot;},{&quot;title&quot;:&quot;XLA Integration for TensorFlow Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tf_xla&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tf_xla&quot;},{&quot;title&quot;:&quot;Optimize inference using `torch.compile()`&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_torch_compile&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_torch_compile&quot;}]},{&quot;title&quot;:&quot;Contribute&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;How to contribute to transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;contributing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/contributing&quot;},{&quot;title&quot;:&quot;How to add a model to 🤗 Transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_new_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_new_model&quot;},{&quot;title&quot;:&quot;How to convert a 🤗 Transformers model to TensorFlow?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_tensorflow_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_tensorflow_model&quot;},{&quot;title&quot;:&quot;How to add a pipeline to 🤗 Transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_new_pipeline&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_new_pipeline&quot;},{&quot;title&quot;:&quot;Testing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;testing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/testing&quot;},{&quot;title&quot;:&quot;Checks on a Pull Request&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pr_checks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pr_checks&quot;}]},{&quot;title&quot;:&quot;Conceptual guides&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Philosophy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;philosophy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/philosophy&quot;},{&quot;title&quot;:&quot;Glossary&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;glossary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/glossary&quot;},{&quot;title&quot;:&quot;What 🤗 Transformers can do&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;task_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/task_summary&quot;},{&quot;title&quot;:&quot;How 🤗 Transformers solve tasks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tasks_explained&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks_explained&quot;},{&quot;title&quot;:&quot;The Transformer model family&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_summary&quot;},{&quot;title&quot;:&quot;Summary of the tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tokenizer_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tokenizer_summary&quot;},{&quot;title&quot;:&quot;Attention 
mechanisms&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;attention&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/attention&quot;},{&quot;title&quot;:&quot;Padding and truncation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pad_truncation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pad_truncation&quot;},{&quot;title&quot;:&quot;BERTology&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;bertology&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/bertology&quot;},{&quot;title&quot;:&quot;Perplexity of fixed-length models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perplexity&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perplexity&quot;},{&quot;title&quot;:&quot;Pipelines for webserver inference&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pipeline_webserver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pipeline_webserver&quot;},{&quot;title&quot;:&quot;Model training anatomy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_memory_anatomy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_memory_anatomy&quot;}]},{&quot;title&quot;:&quot;API&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Main Classes&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Agents and Tools&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/agent&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/agent&quot;},{&quot;title&quot;:&quot;Auto Classes&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/auto&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/auto&quot;},{&quot;title&quot;:&quot;Callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/callback&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/callback&quot;},{&quot;title&quot;:&quot;Configuration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/configuration&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/configuration&quot;},{&quot;title&quot;:&quot;Data Collator&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/data_collator&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/data_collator&quot;},{&quot;title&quot;:&quot;Keras callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/keras_callbacks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/keras_callbacks&quot;},{&quot;title&quot;:&quot;Logging&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/logging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/logging&quot;},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/model&quot;},{&quot;title&quot;:&quot;Text 
Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/text_generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/text_generation&quot;},{&quot;title&quot;:&quot;ONNX&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/onnx&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/onnx&quot;},{&quot;title&quot;:&quot;Optimization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/optimizer_schedules&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/optimizer_schedules&quot;},{&quot;title&quot;:&quot;Model outputs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/output&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/output&quot;},{&quot;title&quot;:&quot;Pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/pipelines&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/pipelines&quot;},{&quot;title&quot;:&quot;Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/processors&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/processors&quot;},{&quot;title&quot;:&quot;Quantization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/quantization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/quantization&quot;},{&quot;title&quot;:&quot;Tokenizer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/tokenizer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/tokenizer&quot;},{&quot;title&quot;:&quot;Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/trainer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/trainer&quot;},{&quot;title&quot;:&quot;DeepSpeed Integration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/deepspeed&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/deepspeed&quot;},{&quot;title&quot;:&quot;Feature Extractor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/feature_extractor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/feature_extractor&quot;},{&quot;title&quot;:&quot;Image Processor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/image_processor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/image_processor&quot;}]},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Text 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALBERT&quot;,&quot;id&quot;:&quot;model_doc/albert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/albert&quot;},{&quot;title&quot;:&quot;BART&quot;,&quot;id&quot;:&quot;model_doc/bart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bart&quot;},{&quot;title&quot;:&quot;BARThez&quot;,&quot;id&quot;:&quot;model_doc/barthez&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/barthez&quot;},{&quot;title&quot;:&quot;BARTpho&quot;,&quot;id&quot;:&quot;model_doc/bartpho&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bartpho&quot;},{&quot;title&quot;:&quot;BERT&quot;,&quot;id&quot;:&quot;model_doc/bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert&quot;},{&quot;title&quot;:&quot;BertGeneration&quot;,&quot;id&quot;:&quot;model_doc/bert-generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-generation&quot;},{&quot;title&quot;:&quot;BertJapanese&quot;,&quot;id&quot;:&quot;model_doc/bert-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-japanese&quot;},{&quot;title&quot;:&quot;Bertweet&quot;,&quot;id&quot;:&quot;model_doc/bertweet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bertweet&quot;},{&quot;title&quot;:&quot;BigBird&quot;,&quot;id&quot;:&quot;model_doc/big_bird&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/big_bird&quot;},{&quot;title&quot;:&quot;BigBirdPegasus&quot;,&quot;id&quot;:&quot;model_doc/bigbird_pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bigbird_pegasus&quot;},{&quot;title&quot;:&quot;BioGpt&quot;,&quot;id&quot;:&quot;model_doc/biogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/biogpt&quot;},{&quot;title&quot;:&quot;Blenderbot&quot;,&quot;id&quot;:&quot;model_doc/blenderbot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot&quot;},{&quot;title&quot;:&quot;Blenderbot 
Small&quot;,&quot;id&quot;:&quot;model_doc/blenderbot-small&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot-small&quot;},{&quot;title&quot;:&quot;BLOOM&quot;,&quot;id&quot;:&quot;model_doc/bloom&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bloom&quot;},{&quot;title&quot;:&quot;BORT&quot;,&quot;id&quot;:&quot;model_doc/bort&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bort&quot;},{&quot;title&quot;:&quot;ByT5&quot;,&quot;id&quot;:&quot;model_doc/byt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/byt5&quot;},{&quot;title&quot;:&quot;CamemBERT&quot;,&quot;id&quot;:&quot;model_doc/camembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/camembert&quot;},{&quot;title&quot;:&quot;CANINE&quot;,&quot;id&quot;:&quot;model_doc/canine&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/canine&quot;},{&quot;title&quot;:&quot;CodeGen&quot;,&quot;id&quot;:&quot;model_doc/codegen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/codegen&quot;},{&quot;title&quot;:&quot;CodeLlama&quot;,&quot;id&quot;:&quot;model_doc/code_llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/code_llama&quot;},{&quot;title&quot;:&quot;ConvBERT&quot;,&quot;id&quot;:&quot;model_doc/convbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convbert&quot;},{&quot;title&quot;:&quot;CPM&quot;,&quot;id&quot;:&quot;model_doc/cpm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpm&quot;},{&quot;title&quot;:&quot;CPMANT&quot;,&quot;id&quot;:&quot;model_doc/cpmant&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpmant&quot;},{&quot;title&quot;:&quot;CTRL&quot;,&quot;id&quot;:&quot;model_doc/ctrl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ctrl&quot;},{&quot;title&quot;:&quot;DeBERTa&quot;,&quot;id&quot;:&quot;model_doc/deberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta&quot;},{&quot;title&quot;:&quot;DeBERTa-v2&quot;,&quot;id&quot;:&quot;model_doc/deberta-v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta-v2&quot;},{&quot;title&quot;:&quot;DialoGPT&quot;,&quot;id&quot;:&quot;model_doc/dialogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dialogpt&quot;},{&quot;title&quot;:&quot;DistilBERT&quot;,&quot;id&quot;:&quot;model_doc/distilbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/distilbert&quot;},{&quot;title&quot;:&quot;DPR&quot;,&quot;id&quot;:&quot;model_doc/dpr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpr&quot;},{&quot;title&quot;:&quot;ELECTRA&quot;,&quot;id&quot;:&quot;model_doc/electra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/electra&quot;},{&quot;title&quot;:&quot;Encoder Decoder 
Models&quot;,&quot;id&quot;:&quot;model_doc/encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encoder-decoder&quot;},{&quot;title&quot;:&quot;ERNIE&quot;,&quot;id&quot;:&quot;model_doc/ernie&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie&quot;},{&quot;title&quot;:&quot;ErnieM&quot;,&quot;id&quot;:&quot;model_doc/ernie_m&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie_m&quot;},{&quot;title&quot;:&quot;ESM&quot;,&quot;id&quot;:&quot;model_doc/esm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/esm&quot;},{&quot;title&quot;:&quot;Falcon&quot;,&quot;id&quot;:&quot;model_doc/falcon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/falcon&quot;},{&quot;title&quot;:&quot;FLAN-T5&quot;,&quot;id&quot;:&quot;model_doc/flan-t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-t5&quot;},{&quot;title&quot;:&quot;FLAN-UL2&quot;,&quot;id&quot;:&quot;model_doc/flan-ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-ul2&quot;},{&quot;title&quot;:&quot;FlauBERT&quot;,&quot;id&quot;:&quot;model_doc/flaubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flaubert&quot;},{&quot;title&quot;:&quot;FNet&quot;,&quot;id&quot;:&quot;model_doc/fnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fnet&quot;},{&quot;title&quot;:&quot;FSMT&quot;,&quot;id&quot;:&quot;model_doc/fsmt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fsmt&quot;},{&quot;title&quot;:&quot;Funnel Transformer&quot;,&quot;id&quot;:&quot;model_doc/funnel&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/funnel&quot;},{&quot;title&quot;:&quot;GPT&quot;,&quot;id&quot;:&quot;model_doc/openai-gpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/openai-gpt&quot;},{&quot;title&quot;:&quot;GPT Neo&quot;,&quot;id&quot;:&quot;model_doc/gpt_neo&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neo&quot;},{&quot;title&quot;:&quot;GPT NeoX&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox&quot;},{&quot;title&quot;:&quot;GPT NeoX Japanese&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox_japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese&quot;},{&quot;title&quot;:&quot;GPT-J&quot;,&quot;id&quot;:&quot;model_doc/gptj&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptj&quot;},{&quot;title&quot;:&quot;GPT2&quot;,&quot;id&quot;:&quot;model_doc/gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt2&quot;},{&quot;title&quot;:&quot;GPTBigCode&quot;,&quot;id&quot;:&quot;model_doc/gpt_bigcode&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode&quot;},{&quot;title&quot;:&quot;GPTSAN 
Japanese&quot;,&quot;id&quot;:&quot;model_doc/gptsan-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese&quot;},{&quot;title&quot;:&quot;GPTSw3&quot;,&quot;id&quot;:&quot;model_doc/gpt-sw3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt-sw3&quot;},{&quot;title&quot;:&quot;HerBERT&quot;,&quot;id&quot;:&quot;model_doc/herbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/herbert&quot;},{&quot;title&quot;:&quot;I-BERT&quot;,&quot;id&quot;:&quot;model_doc/ibert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ibert&quot;},{&quot;title&quot;:&quot;Jukebox&quot;,&quot;id&quot;:&quot;model_doc/jukebox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/jukebox&quot;},{&quot;title&quot;:&quot;LED&quot;,&quot;id&quot;:&quot;model_doc/led&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/led&quot;},{&quot;title&quot;:&quot;LLaMA&quot;,&quot;id&quot;:&quot;model_doc/llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama&quot;},{&quot;title&quot;:&quot;Llama2&quot;,&quot;id&quot;:&quot;model_doc/llama2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama2&quot;},{&quot;title&quot;:&quot;Longformer&quot;,&quot;id&quot;:&quot;model_doc/longformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longformer&quot;},{&quot;title&quot;:&quot;LongT5&quot;,&quot;id&quot;:&quot;model_doc/longt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longt5&quot;},{&quot;title&quot;:&quot;LUKE&quot;,&quot;id&quot;:&quot;model_doc/luke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/luke&quot;},{&quot;title&quot;:&quot;M2M100&quot;,&quot;id&quot;:&quot;model_doc/m2m_100&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/m2m_100&quot;},{&quot;title&quot;:&quot;MarianMT&quot;,&quot;id&quot;:&quot;model_doc/marian&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/marian&quot;},{&quot;title&quot;:&quot;MarkupLM&quot;,&quot;id&quot;:&quot;model_doc/markuplm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/markuplm&quot;},{&quot;title&quot;:&quot;MBart and 
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot;PLBart&quot;,&quot;id&quot;
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/
model_doc/wavlm&quot;},{&quot;title&quot;:&quot;Whisper&quot;,&quot;id&quot;:&quot;model_doc/whisper&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/whisper&quot;},{&quot;title&quot;:&quot;XLS-R&quot;,&quot;id&quot;:&quot;model_doc/xls_r&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xls_r&quot;},{&quot;title&quot;:&quot;XLSR-Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/xlsr_wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2&quot;}]},{&quot;title&quot;:&quot;Multimodal models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALIGN&quot;,&quot;id&quot;:&quot;model_doc/align&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/align&quot;},{&quot;title&quot;:&quot;AltCLIP&quot;,&quot;id&quot;:&quot;model_doc/altclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/altclip&quot;},{&quot;title&quot;:&quot;BLIP&quot;,&quot;id&quot;:&quot;model_doc/blip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip&quot;},{&quot;title&quot;:&quot;BLIP-2&quot;,&quot;id&quot;:&quot;model_doc/blip-2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip-2&quot;},{&quot;title&quot;:&quot;BridgeTower&quot;,&quot;id&quot;:&quot;model_doc/bridgetower&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bridgetower&quot;},{&quot;title&quot;:&quot;BROS&quot;,&quot;id&quot;:&quot;model_doc/bros&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bros&quot;},{&quot;title&quot;:&quot;Chinese-CLIP&quot;,&quot;id&quot;:&quot;model_doc/chinese_clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/chinese_clip&quot;},{&quot;title&quot;:&quot;CLIP&quot;,&quot;id&quot;:&quot;model_doc/clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clip&quot;},{&quot;title&quot;:&quot;CLIPSeg&quot;,&quot;id&quot;:&quot;model_doc/clipseg&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clipseg&quot;},{&quot;title&quot;:&quot;Data2Vec&quot;,&quot;id&quot;:&quot;model_doc/data2vec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/data2vec&quot;},{&quot;title&quot;:&quot;DePlot&quot;,&quot;id&quot;:&quot;model_doc/deplot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deplot&quot;},{&quot;title&quot;:&quot;Donut&quot;,&quot;id&quot;:&quot;model_doc/donut&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/donut&quot;},{&quot;title&quot;:&quot;FLAVA&quot;,&quot;id&quot;:&quot;model_doc/flava&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flava&quot;},{&quot;title&quot;:&quot;GIT&quot;,&quot;id&quot;:&quot;model_doc/git&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/git&quot;},{&quot;title&quot;:&quot;GroupViT&quot;,&quot;id&quot;:&quot;model_doc/groupvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/groupvit&quot;},{&quot;title&quot;:&quot;IDEFICS&quot;,&quot;id&quot;:&quot;model_doc/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/idefics&quot;},{&quot;title&quot;:&quot;InstructBLIP&quot;,&quot;id&quot;:&quot;model_doc/instructblip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/instructblip&quot;},{&quot;title&quot;:&quot;LayoutLM&quot;,&quot;id&quot;:&quot;model_doc/layoutlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlm&quot;},{&quot;title&quot;:&quot;LayoutLMV2&quot;,&quot;i
<div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-gray-500/10 to-gray-500/5"><svg class="text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M14.9804 3C14.9217 3.0002 14.8631 3.00555 14.8054 3.016C11.622 3.58252 8.76073 5.30669 6.77248 7.85653C4.78422 10.4064 3.80955 13.6016 4.03612 16.8271C4.26268 20.0525 5.67447 23.0801 7.99967 25.327C10.3249 27.5738 13.3991 28.8811 16.6304 28.997C16.7944 29.003 16.9584 28.997 17.1204 28.997C19.2193 28.9984 21.2877 28.4943 23.1507 27.5274C25.0137 26.5605 26.6164 25.1592 27.8234 23.442C27.9212 23.294 27.9783 23.1229 27.9889 22.9458C27.9995 22.7687 27.9633 22.592 27.884 22.4333C27.8046 22.2747 27.6848 22.1397 27.5367 22.0421C27.3887 21.9444 27.2175 21.8875 27.0404 21.877C25.0426 21.7017 23.112 21.0693 21.3976 20.0288C19.6832 18.9884 18.231 17.5676 17.1533 15.8764C16.0756 14.1852 15.4011 12.2688 15.1822 10.2754C14.9632 8.28193 15.2055 6.26484 15.8904 4.38C15.9486 4.22913 15.97 4.06652 15.9527 3.90572C15.9354 3.74492 15.8799 3.59059 15.7909 3.45557C15.7019 3.32055 15.5819 3.20877 15.4409 3.12952C15.2999 3.05028 15.142 3.00587 14.9804 3Z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Switch between documentation themes </div></div></div> <div class="flex items-center space-x-2.5"><a href="/join"><button class="rounded-lg bg-white bg-gradient-to-br from-gray-100/20 to-gray-200/60 py-1.5 px-5 font-semibold text-gray-700 shadow-sm ring-1 ring-gray-300/60 hover:to-gray-100/70 hover:ring-gray-300/30 active:shadow-inner">Sign Up</button></a> <p class="text-gray-500 dark:text-gray-300">to get started</p></div></div></div> <div class="prose-doc prose relative mx-auto max-w-4xl break-words"> <p></p> <h1 class="relative group"><a id="audio-classification" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#audio-classification"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-v57evn">Audio classification</span></h1> <div class="flex space-x-1 absolute z-10 right-0 top-0"> <div class="relative colab-dropdown "><button class=" " type="button"><img alt="Open In Colab" class="!m-0" src="https://colab.research.google.com/assets/colab-badge.svg"></button> </div> <div class="relative colab-dropdown "><button class=" " type="button"><img alt="Open In Studio Lab" class="!m-0" src="https://studiolab.sagemaker.aws/studiolab.svg"></button> </div></div> <iframe class="w-full 
xl:w-4/6 h-80" src="https://www.youtube-nocookie.com/embed/KWwzcmG98Ds" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe> <p data-svelte-h="svelte-n3q9of">Audio classification - just like with text - assigns a class label output from the input data. The only difference is instead of text inputs, you have raw audio waveforms. Some practical applications of audio classification include identifying speaker intent, language classification, and even animal species by their sounds.</p> <p data-svelte-h="svelte-1aff4p7">This guide will show you how to:</p> <ol data-svelte-h="svelte-3izmna"><li>Finetune <a href="https://huggingface.co/facebook/wav2vec2-base" rel="nofollow">Wav2Vec2</a> on the <a href="https://huggingface.co/datasets/PolyAI/minds14" rel="nofollow">MInDS-14</a> dataset to classify speaker intent.</li> <li>Use your finetuned model for inference.</li></ol> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400">The task illustrated in this tutorial is supported by the following model architectures: <p data-svelte-h="svelte-1x7pfyj"><a href="../model_doc/audio-spectrogram-transformer">Audio Spectrogram Transformer</a>, <a href="../model_doc/data2vec-audio">Data2VecAudio</a>, <a href="../model_doc/hubert">Hubert</a>, <a href="../model_doc/sew">SEW</a>, <a href="../model_doc/sew-d">SEW-D</a>, <a href="../model_doc/unispeech">UniSpeech</a>, <a href="../model_doc/unispeech-sat">UniSpeechSat</a>, <a href="../model_doc/wav2vec2">Wav2Vec2</a>, <a href="../model_doc/wav2vec2-conformer">Wav2Vec2-Conformer</a>, <a href="../model_doc/wavlm">WavLM</a>, <a href="../model_doc/whisper">Whisper</a></p></div> <p data-svelte-h="svelte-1c9nexd">Before you begin, make sure you have all the necessary libraries installed:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class="">pip install transformers datasets evaluate</pre></div> <p data-svelte-h="svelte-k76o1m">We encourage you to login to your Hugging Face account so you can upload and share your model with the community. 
We encourage you to login to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to login:

```py
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```

## Load MInDS-14 dataset

Start by loading the MInDS-14 dataset from the 🤗 Datasets library:
```py
>>> from datasets import load_dataset, Audio

>>> minds = load_dataset("PolyAI/minds14", name="en-US", split="train")
```

Split the dataset's `train` split into a smaller train and test set with the [train_test_split](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.train_test_split) method. This'll give you a chance to experiment and make sure everything works before spending more time on the full dataset.

```py
>>> minds = minds.train_test_split(test_size=0.2)
```

Then take a look at the dataset:
```py
>>> minds
DatasetDict({
    train: Dataset({
        features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'],
        num_rows: 450
    })
    test: Dataset({
        features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'],
        num_rows: 113
    })
})
```

While the dataset contains a lot of useful information, like `lang_id` and `english_transcription`, you'll focus on the `audio` and `intent_class` in this guide. Remove the other columns with the [remove_columns](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.remove_columns) method:

```py
>>> minds = minds.remove_columns(["path", "transcription", "english_transcription", "lang_id"])
```

Take a look at an example now:
transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>minds[<span class="hljs-string">"train"</span>][<span class="hljs-number">0</span>] {<span class="hljs-string">'audio'</span>: {<span class="hljs-string">'array'</span>: array([ <span class="hljs-number">0.</span> , <span class="hljs-number">0.</span> , <span class="hljs-number">0.</span> , ..., -<span class="hljs-number">0.00048828</span>, -<span class="hljs-number">0.00024414</span>, -<span class="hljs-number">0.00024414</span>], dtype=float32), <span class="hljs-string">'path'</span>: <span class="hljs-string">'/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602b9a5fbb1e6d0fbce91f52.wav'</span>, <span class="hljs-string">'sampling_rate'</span>: <span class="hljs-number">8000</span>}, <span class="hljs-string">'intent_class'</span>: <span class="hljs-number">2</span>}</pre></div> <p data-svelte-h="svelte-bf7elb">There are two fields:</p> <ul data-svelte-h="svelte-10l8u4b"><li><code>audio</code>: a 1-dimensional <code>array</code> of the speech signal that must be called to load and resample the audio file.</li> <li><code>intent_class</code>: represents the class id of the speaker’s intent.</li></ul> <p data-svelte-h="svelte-c16zyh">To make it easier for the model to get the label name from the label id, create a dictionary that maps the label name to an integer and vice versa:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>labels = minds[<span class="hljs-string">"train"</span>].features[<span class="hljs-string">"intent_class"</span>].names <span class="hljs-meta">&gt;&gt;&gt; </span>label2id, id2label = <span class="hljs-built_in">dict</span>(), <span class="hljs-built_in">dict</span>() <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">for</span> i, label <span 
class="hljs-keyword">in</span> <span class="hljs-built_in">enumerate</span>(labels): <span class="hljs-meta">... </span> label2id[label] = <span class="hljs-built_in">str</span>(i) <span class="hljs-meta">... </span> id2label[<span class="hljs-built_in">str</span>(i)] = label</pre></div> <p data-svelte-h="svelte-1e9n4a3">Now you can convert the label id to a label name:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>id2label[<span class="hljs-built_in">str</span>(<span class="hljs-number">2</span>)] <span class="hljs-string">'app_error'</span></pre></div> <h2 class="relative group"><a id="preprocess" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#preprocess"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1cg9qj">Preprocess</span></h2> <p data-svelte-h="svelte-f9h4ad">The next step is to load a Wav2Vec2 feature extractor to process the audio signal:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path 
d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoFeatureExtractor <span class="hljs-meta">&gt;&gt;&gt; </span>feature_extractor = AutoFeatureExtractor.from_pretrained(<span class="hljs-string">"facebook/wav2vec2-base"</span>)</pre></div> <p data-svelte-h="svelte-3ohz7q">The MInDS-14 dataset has a sampling rate of 8000khz (you can find this information in it’s <a href="https://huggingface.co/datasets/PolyAI/minds14" rel="nofollow">dataset card</a>), which means you’ll need to resample the dataset to 16000kHz to use the pretrained Wav2Vec2 model:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>minds = minds.cast_column(<span class="hljs-string">"audio"</span>, Audio(sampling_rate=<span class="hljs-number">16_000</span>)) <span class="hljs-meta">&gt;&gt;&gt; </span>minds[<span class="hljs-string">"train"</span>][<span class="hljs-number">0</span>] {<span class="hljs-string">'audio'</span>: {<span class="hljs-string">'array'</span>: array([ <span class="hljs-number">2.2098757e-05</span>, <span class="hljs-number">4.6582241e-05</span>, -<span class="hljs-number">2.2803260e-05</span>, ..., -<span class="hljs-number">2.8419291e-04</span>, -<span class="hljs-number">2.3305941e-04</span>, -<span class="hljs-number">1.1425107e-04</span>], dtype=float32), <span class="hljs-string">'path'</span>: <span class="hljs-string">'/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602b9a5fbb1e6d0fbce91f52.wav'</span>, <span class="hljs-string">'sampling_rate'</span>: <span class="hljs-number">16000</span>}, <span class="hljs-string">'intent_class'</span>: <span class="hljs-number">2</span>}</pre></div> <p 
data-svelte-h="svelte-8cflje">Now create a preprocessing function that:</p> <ol data-svelte-h="svelte-fvrg6s"><li>Calls the <code>audio</code> column to load, and if necessary, resample the audio file.</li> <li>Checks if the sampling rate of the audio file matches the sampling rate of the audio data a model was pretrained with. You can find this information in the Wav2Vec2 <a href="https://huggingface.co/facebook/wav2vec2-base" rel="nofollow">model card</a>.</li> <li>Set a maximum input length to batch longer inputs without truncating them.</li></ol> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">def</span> <span class="hljs-title function_">preprocess_function</span>(<span class="hljs-params">examples</span>): <span class="hljs-meta">... </span> audio_arrays = [x[<span class="hljs-string">"array"</span>] <span class="hljs-keyword">for</span> x <span class="hljs-keyword">in</span> examples[<span class="hljs-string">"audio"</span>]] <span class="hljs-meta">... </span> inputs = feature_extractor( <span class="hljs-meta">... </span> audio_arrays, sampling_rate=feature_extractor.sampling_rate, max_length=<span class="hljs-number">16000</span>, truncation=<span class="hljs-literal">True</span> <span class="hljs-meta">... </span> ) <span class="hljs-meta">... </span> <span class="hljs-keyword">return</span> inputs</pre></div> <p data-svelte-h="svelte-rgqbok">To apply the preprocessing function over the entire dataset, use 🤗 Datasets <a href="https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.map" rel="nofollow">map</a> function. You can speed up <code>map</code> by setting <code>batched=True</code> to process multiple elements of the dataset at once. 
Remove the columns you don't need, and rename `intent_class` to `label` because that's the name the model expects:

```py
>>> encoded_minds = minds.map(preprocess_function, remove_columns="audio", batched=True)
>>> encoded_minds = encoded_minds.rename_column("intent_class", "label")
```
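As a quick sanity check (a minimal sketch; the exact columns depend on the feature extractor's configuration), you can confirm the processed dataset now holds model-ready inputs and labels:

```py
>>> encoded_minds["train"].column_names  # typically ['input_values', 'label']; some extractors also add 'attention_mask'
```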
## Evaluate

Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load the [accuracy](https://huggingface.co/spaces/evaluate-metric/accuracy) metric (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric):

```py
>>> import evaluate

>>> accuracy = evaluate.load("accuracy")
```

Then create a function that passes your predictions and labels to [compute](https://huggingface.co/docs/evaluate/v0.4.0/en/package_reference/main_classes#evaluate.EvaluationModule.compute) to calculate the accuracy:
```py
>>> import numpy as np

>>> def compute_metrics(eval_pred):
...     predictions = np.argmax(eval_pred.predictions, axis=1)
...     return accuracy.compute(predictions=predictions, references=eval_pred.label_ids)
```

Your `compute_metrics` function is ready to go now, and you'll return to it when you set up your training.
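If you'd like to check it in isolation before training, you can call it on a couple of dummy logits wrapped in an `EvalPrediction` (a minimal sketch with made-up values):

```py
>>> import numpy as np
>>> from transformers import EvalPrediction

>>> dummy_logits = np.array([[0.1, 0.9], [0.8, 0.2]])  # fake logits for two examples
>>> dummy_labels = np.array([1, 1])
>>> compute_metrics(EvalPrediction(predictions=dummy_logits, label_ids=dummy_labels))
{'accuracy': 0.5}
```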
## Train

If you aren't familiar with finetuning a model with the [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer), take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!

You're ready to start training your model now! Load Wav2Vec2 with [AutoModelForAudioClassification](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoModelForAudioClassification) along with the number of expected labels, and the label mappings:

```py
>>> from transformers import AutoModelForAudioClassification, TrainingArguments, Trainer

>>> num_labels = len(id2label)
>>> model = AutoModelForAudioClassification.from_pretrained(
...     "facebook/wav2vec2-base", num_labels=num_labels, label2id=label2id, id2label=id2label
... )
```

At this point, only three steps remain:

1. Define your training hyperparameters in [TrainingArguments](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.TrainingArguments). The only required parameter is `output_dir` which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) will evaluate the accuracy and save the training checkpoint.
2. Pass the training arguments to [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) along with the model, dataset, tokenizer, data collator, and `compute_metrics` function.
3. Call [train()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.train) to finetune your model.
```py
>>> training_args = TrainingArguments(
...     output_dir="my_awesome_mind_model",
...     evaluation_strategy="epoch",
...     save_strategy="epoch",
...     learning_rate=3e-5,
...     per_device_train_batch_size=32,
...     gradient_accumulation_steps=4,
...     per_device_eval_batch_size=32,
...     num_train_epochs=10,
...     warmup_ratio=0.1,
...     logging_steps=10,
...     load_best_model_at_end=True,
...     metric_for_best_model="accuracy",
...     push_to_hub=True,
... )

>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=encoded_minds["train"],
...     eval_dataset=encoded_minds["test"],
...     tokenizer=feature_extractor,
...     compute_metrics=compute_metrics,
... )

>>> trainer.train()
```
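Because `load_best_model_at_end=True` leaves the best checkpoint loaded, you can re-check its accuracy on the held-out split before sharing it (a minimal sketch):

```py
>>> trainer.evaluate()  # returns a dict of metrics such as eval_loss and eval_accuracy on encoded_minds["test"]
```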
Once training is completed, share your model to the Hub with the [push_to_hub()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.push_to_hub) method so everyone can use your model:

```py
>>> trainer.push_to_hub()
```

For a more in-depth example of how to finetune a model for audio classification, take a look at the corresponding [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/audio_classification.ipynb).

## Inference

Great, now that you've finetuned a model, you can use it for inference!

Load an audio file you'd like to run inference on. Remember to resample the audio file so its sampling rate matches the model's sampling rate if you need to!

```py
>>> from datasets import load_dataset, Audio

>>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")
>>> dataset = dataset.cast_column("audio", Audio(sampling_rate=16000))
>>> sampling_rate = dataset.features["audio"].sampling_rate
>>> audio_file = dataset[0]["audio"]["path"]
```

The simplest way to try out your finetuned model for inference is to use it in a [pipeline()](/docs/transformers/v4.34.0/en/main_classes/pipelines#transformers.pipeline).
Instantiate a `pipeline` for audio classification with your model, and pass your audio file to it:

```py
>>> from transformers import pipeline

>>> classifier = pipeline("audio-classification", model="stevhliu/my_awesome_minds_model")
>>> classifier(audio_file)
[
    {'score': 0.09766869246959686, 'label': 'cash_deposit'},
    {'score': 0.07998877018690109, 'label': 'app_error'},
    {'score': 0.0781070664525032, 'label': 'joint_account'},
    {'score': 0.07667109370231628, 'label': 'pay_bill'},
    {'score': 0.0755252093076706, 'label': 'balance'}
]
```
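The pipeline also accepts a path or URL to your own audio file; note that decoding most audio formats requires `ffmpeg` to be installed (a sketch with a hypothetical filename):

```py
>>> classifier("path/to/your_recording.wav")  # hypothetical local file; returns the same list of label/score dicts
```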
d="M24.94,9.51a12.81,12.81,0,0,1,0,18.16,12.68,12.68,0,0,1-18,0,12.81,12.81,0,0,1,0-18.16l9-9V5l-.84.83-6,6a9.58,9.58,0,1,0,13.55,0ZM20.44,9a1.68,1.68,0,1,1,1.67-1.67A1.68,1.68,0,0,1,20.44,9Z" fill="#ee4c2c"></path></g></svg> <span>Pytorch</span></div> <div class="cursor-pointer flex items-center justify-center space-x-1 text-sm px-2 bg-white dark:bg-gray-950 hover:underline leading-none"><svg class="" width="0.9em" height="0.9em" viewBox="0 0 10 9" fill="currentColor" xmlns="http://www.w3.org/2000/svg"><path d="M1.39125 1.9725L0.0883333 0.669997L0.677917 0.0804138L8.9275 8.33041L8.33792 8.91958L6.95875 7.54041C6.22592 8.00523 5.37572 8.25138 4.50792 8.25C2.26125 8.25 0.392083 6.63333 0 4.5C0.179179 3.52946 0.667345 2.64287 1.39167 1.9725H1.39125ZM5.65667 6.23833L5.04667 5.62833C4.81335 5.73996 4.55116 5.77647 4.29622 5.73282C4.04129 5.68918 3.80617 5.56752 3.62328 5.38463C3.44039 5.20175 3.31874 4.96663 3.27509 4.71169C3.23144 4.45676 3.26795 4.19456 3.37958 3.96125L2.76958 3.35125C2.50447 3.75187 2.38595 4.2318 2.4341 4.70978C2.48225 5.18777 2.6941 5.63442 3.0338 5.97411C3.37349 6.31381 3.82015 6.52567 4.29813 6.57382C4.77611 6.62197 5.25605 6.50345 5.65667 6.23833ZM2.83042 1.06666C3.35 0.862497 3.91625 0.749997 4.50792 0.749997C6.75458 0.749997 8.62375 2.36666 9.01583 4.5C8.88816 5.19404 8.60119 5.84899 8.1775 6.41333L6.56917 4.805C6.61694 4.48317 6.58868 4.15463 6.48664 3.84569C6.3846 3.53675 6.21162 3.256 5.98156 3.02594C5.7515 2.79588 5.47075 2.6229 5.16181 2.52086C4.85287 2.41882 4.52433 2.39056 4.2025 2.43833L2.83042 1.06708V1.06666Z" fill="currentColor"></path></svg> <span>Hide Pytorch content</span></div></div> <div class="framework-content"><p data-svelte-h="svelte-eon9oh">Load a feature extractor to preprocess the audio file and return the <code>input</code> as PyTorch tensors:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoFeatureExtractor <span class="hljs-meta">&gt;&gt;&gt; </span>feature_extractor = AutoFeatureExtractor.from_pretrained(<span class="hljs-string">"stevhliu/my_awesome_minds_model"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = feature_extractor(dataset[<span class="hljs-number">0</span>][<span class="hljs-string">"audio"</span>][<span class="hljs-string">"array"</span>], 
```py
>>> from transformers import AutoFeatureExtractor

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("stevhliu/my_awesome_minds_model")
>>> inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
```

Pass your inputs to the model and return the logits:
```py
>>> import torch
>>> from transformers import AutoModelForAudioClassification

>>> model = AutoModelForAudioClassification.from_pretrained("stevhliu/my_awesome_minds_model")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
```

Get the class with the highest probability, and use the model's `id2label` mapping to convert it to a label:

```py
>>> predicted_class_ids = torch.argmax(logits).item()
>>> predicted_label = model.config.id2label[predicted_class_ids]
>>> predicted_label
'cash_deposit'
```
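If you also want scores alongside the labels, as in the pipeline output above, you can apply a softmax to the logits and take the top predictions (a minimal sketch):

```py
>>> probs = torch.nn.functional.softmax(logits, dim=-1)[0]
>>> top5 = torch.topk(probs, k=5)
>>> [(model.config.id2label[idx.item()], round(score.item(), 4)) for score, idx in zip(top5.values, top5.indices)]
```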
2023-10-05T13:33:43.982Z
DPT
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/dpt
# DPT

## Overview

The DPT model was proposed in [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by René Ranftl, Alexey Bochkovskiy, Vladlen Koltun. DPT is a model that leverages the [Vision Transformer (ViT)](vit) as a backbone for dense prediction tasks like semantic segmentation and depth estimation.

The abstract from the paper is the following:

_We introduce dense vision transformers, an architecture that leverages vision transformers in place of convolutional networks as a backbone for dense prediction tasks. We assemble tokens from various stages of the vision transformer into image-like representations at various resolutions and progressively combine them into full-resolution predictions using a convolutional decoder. The transformer backbone processes representations at a constant and relatively high resolution and has a global receptive field at every stage. These properties allow the dense vision transformer to provide finer-grained and more globally coherent predictions when compared to fully-convolutional networks. Our experiments show that this architecture yields substantial improvements on dense prediction tasks, especially when a large amount of training data is available. For monocular depth estimation, we observe an improvement of up to 28% in relative performance when compared to a state-of-the-art fully-convolutional network. When applied to semantic segmentation, dense vision transformers set a new state of the art on ADE20K with 49.02% mIoU. We further show that the architecture can be fine-tuned on smaller datasets such as NYUv2, KITTI, and Pascal Context where it also sets the new state of the art._

![drawing](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/dpt_architecture.jpg)

DPT architecture. Taken from the [original paper](https://arxiv.org/abs/2103.13413).

This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/isl-org/DPT).

## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DPT.

- Demo notebooks for [DPTForDepthEstimation](/docs/transformers/v4.34.0/en/model_doc/dpt#transformers.DPTForDepthEstimation) can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/DPT).
- [Semantic segmentation task guide](../tasks/semantic_segmentation)
- [Monocular depth estimation task guide](../tasks/monocular_depth_estimation)

If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
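For a quick check of a pretrained checkpoint, DPT can also be run through the `pipeline` API before reaching for the model classes documented below. This is a minimal sketch; the checkpoint and the test image URL are only examples:

```
>>> from transformers import pipeline

>>> # run monocular depth estimation with a pretrained DPT checkpoint
>>> depth_estimator = pipeline("depth-estimation", model="Intel/dpt-large")
>>> outputs = depth_estimator("http://images.cocodataset.org/val2017/000000039769.jpg")

>>> predicted_depth = outputs["predicted_depth"]  # raw depth tensor
>>> depth_image = outputs["depth"]  # PIL image visualizing the predicted depth map
```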
## DPTConfig

### class transformers.DPTConfig

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/dpt/configuration_dpt.py#L32)

( hidden\_size = 768, num\_hidden\_layers = 12, num\_attention\_heads = 12, intermediate\_size = 3072, hidden\_act = 'gelu', hidden\_dropout\_prob = 0.0, attention\_probs\_dropout\_prob = 0.0, initializer\_range = 0.02, layer\_norm\_eps = 1e-12, image\_size = 384, patch\_size = 16, num\_channels = 3, is\_hybrid = False, qkv\_bias = True, backbone\_out\_indices = \[2, 5, 8, 11\], readout\_type = 'project', reassemble\_factors = \[4, 2, 1, 0.5\], neck\_hidden\_sizes = \[96, 192, 384, 768\], fusion\_hidden\_size = 256, head\_in\_index = -1, use\_batch\_norm\_in\_fusion\_residual = False, use\_auxiliary\_head = True, auxiliary\_loss\_weight = 0.4, semantic\_loss\_ignore\_index = 255, semantic\_classifier\_dropout = 0.1, backbone\_featmap\_shape = \[1, 1024, 24, 24\], neck\_ignore\_stages = \[0, 1\], backbone\_config = None, \*\*kwargs )

This is the configuration class to store the configuration of a [DPTModel](/docs/transformers/v4.34.0/en/model_doc/dpt#transformers.DPTModel). It is used to instantiate a DPT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the DPT [Intel/dpt-large](https://huggingface.co/Intel/dpt-large) architecture.

Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information.

Example:

```
>>> from transformers import DPTModel, DPTConfig

>>> # Initializing a DPT dpt-large style configuration
>>> configuration = DPTConfig()

>>> # Initializing a model from the dpt-large style configuration
>>> model = DPTModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```

#### to\_dict

Serializes this instance to a Python dictionary. Overrides the default [to\_dict()](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig.to_dict).

Returns: `Dict[str, any]`: Dictionary of all the attributes that make up this configuration instance.

## DPTFeatureExtractor

Preprocess an image or a batch of images.

#### post\_process\_semantic\_segmentation

( outputs, target\_sizes: typing.List\[typing.Tuple\] = None ) → semantic\_segmentation

Parameters

- **outputs** ([DPTForSemanticSegmentation](/docs/transformers/v4.34.0/en/model_doc/dpt#transformers.DPTForSemanticSegmentation)) — Raw outputs of the model.
- **target\_sizes** (`List[Tuple]` of length `batch_size`, _optional_) — List of tuples corresponding to the requested final size (height, width) of each prediction. If unset, predictions will not be resized.

Returns semantic\_segmentation

`List[torch.Tensor]` of length `batch_size`, where each item is a semantic segmentation map of shape (height, width) corresponding to the target\_sizes entry (if `target_sizes` is specified). Each entry of each `torch.Tensor` corresponds to a semantic class id.

Converts the output of [DPTForSemanticSegmentation](/docs/transformers/v4.34.0/en/model_doc/dpt#transformers.DPTForSemanticSegmentation) into semantic segmentation maps. Only supports PyTorch.
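Beyond the defaults, the `DPTConfig` arguments listed above can be overridden to define a custom, randomly initialized DPT architecture. A minimal sketch; the values below are purely illustrative and not a recommended setup:

```
>>> from transformers import DPTConfig, DPTModel

>>> # a hypothetical narrower configuration (values chosen only for illustration)
>>> configuration = DPTConfig(hidden_size=384, num_attention_heads=6, intermediate_size=1536)

>>> # a randomly initialized model following this configuration
>>> model = DPTModel(configuration)
```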
## DPTImageProcessor

### class transformers.DPTImageProcessor

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/dpt/image_processing_dpt.py#L94)

( do\_resize: bool = True, size: typing.Dict\[str, int\] = None, resample: Resampling = <Resampling.BILINEAR: 2>, keep\_aspect\_ratio: bool = False, ensure\_multiple\_of: int = 1, do\_rescale: bool = True, rescale\_factor: typing.Union\[int, float\] = 0.00392156862745098, do\_normalize: bool = True, image\_mean: typing.Union\[float, typing.List\[float\], NoneType\] = None, image\_std: typing.Union\[float, typing.List\[float\], NoneType\] = None, \*\*kwargs )

Constructs a DPT image processor.

#### preprocess

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/dpt/image_processing_dpt.py#L211)

( images: typing.Union\[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List\[ForwardRef('PIL.Image.Image')\], typing.List\[numpy.ndarray\], typing.List\[ForwardRef('torch.Tensor')\]\], do\_resize: bool = None, size: int = None, keep\_aspect\_ratio: bool = None, ensure\_multiple\_of: int = None, resample: Resampling = None, do\_rescale: bool = None, rescale\_factor: float = None, do\_normalize: bool = None, image\_mean: typing.Union\[float, typing.List\[float\], NoneType\] = None, image\_std: typing.Union\[float, typing.List\[float\], NoneType\] = None, return\_tensors: typing.Union\[str, transformers.utils.generic.TensorType, NoneType\] = None, data\_format: ChannelDimension = <ChannelDimension.FIRST: 'channels\_first'>, input\_data\_format: typing.Union\[transformers.image\_utils.ChannelDimension, str, NoneType\] = None, \*\*kwargs )

Preprocess an image or batch of images.

#### post\_process\_semantic\_segmentation

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/dpt/image_processing_dpt.py#L346)

( outputs, target\_sizes: typing.List\[typing.Tuple\] = None ) → semantic\_segmentation

Parameters

- **outputs** ([DPTForSemanticSegmentation](/docs/transformers/v4.34.0/en/model_doc/dpt#transformers.DPTForSemanticSegmentation)) — Raw outputs of the model.
- **target\_sizes** (`List[Tuple]` of length `batch_size`, _optional_) — List of tuples corresponding to the requested final size (height, width) of each prediction. If unset, predictions will not be resized.

Returns semantic\_segmentation

`List[torch.Tensor]` of length `batch_size`, where each item is a semantic segmentation map of shape (height, width) corresponding to the target\_sizes entry (if `target_sizes` is specified). Each entry of each `torch.Tensor` corresponds to a semantic class id.

Converts the output of [DPTForSemanticSegmentation](/docs/transformers/v4.34.0/en/model_doc/dpt#transformers.DPTForSemanticSegmentation) into semantic segmentation maps. Only supports PyTorch.

## DPTModel

### class transformers.DPTModel

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/dpt/modeling_dpt.py#L864)

( config, add\_pooling\_layer = True )

Parameters

- **config** ([DPTConfig](/docs/transformers/v4.34.0/en/model_doc/dpt#transformers.DPTConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.
The bare DPT Model transformer outputting raw hidden-states without any specific head on top.

This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

#### forward

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/dpt/modeling_dpt.py#L896)

( pixel\_values: FloatTensor, head\_mask: typing.Optional\[torch.FloatTensor\] = None, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None ) → `transformers.models.dpt.modeling_dpt.BaseModelOutputWithPoolingAndIntermediateActivations` or `tuple(torch.FloatTensor)`

The [DPTModel](/docs/transformers/v4.34.0/en/model_doc/dpt#transformers.DPTModel) forward method overrides the `__call__` special method.

Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example:

```
>>> from transformers import AutoImageProcessor, DPTModel
>>> import torch
>>> from datasets import load_dataset

>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]

>>> image_processor = AutoImageProcessor.from_pretrained("Intel/dpt-large")
>>> model = DPTModel.from_pretrained("Intel/dpt-large")

>>> inputs = image_processor(image, return_tensors="pt")

>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> last_hidden_states = outputs.last_hidden_state
>>> list(last_hidden_states.shape)
[1, 577, 1024]
```

## DPTForDepthEstimation

### class transformers.DPTForDepthEstimation

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/dpt/modeling_dpt.py#L1052)

( config )

Parameters

- **config** ([DPTConfig](/docs/transformers/v4.34.0/en/model_doc/dpt#transformers.DPTConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

DPT Model with a depth estimation head on top (consisting of 3 convolutional layers), e.g. for KITTI, NYUv2.

This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

#### forward

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/dpt/modeling_dpt.py#L1067)

( pixel\_values: FloatTensor, head\_mask: typing.Optional\[torch.FloatTensor\] = None, labels: typing.Optional\[torch.LongTensor\] = None, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.DepthEstimatorOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.DepthEstimatorOutput) or `tuple(torch.FloatTensor)`

The [DPTForDepthEstimation](/docs/transformers/v4.34.0/en/model_doc/dpt#transformers.DPTForDepthEstimation) forward method overrides the `__call__` special method.
Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Examples:

```
>>> from transformers import AutoImageProcessor, DPTForDepthEstimation
>>> import torch
>>> import numpy as np
>>> from PIL import Image
>>> import requests

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image_processor = AutoImageProcessor.from_pretrained("Intel/dpt-large")
>>> model = DPTForDepthEstimation.from_pretrained("Intel/dpt-large")

>>> # prepare image for the model
>>> inputs = image_processor(images=image, return_tensors="pt")

>>> with torch.no_grad():
...     outputs = model(**inputs)
...     predicted_depth = outputs.predicted_depth

>>> # interpolate to original size
>>> prediction = torch.nn.functional.interpolate(
...     predicted_depth.unsqueeze(1),
...     size=image.size[::-1],
...     mode="bicubic",
...     align_corners=False,
... )

>>> # visualize the prediction
>>> output = prediction.squeeze().cpu().numpy()
>>> formatted = (output * 255 / np.max(output)).astype("uint8")
>>> depth = Image.fromarray(formatted)
```

## DPTForSemanticSegmentation

### class transformers.DPTForSemanticSegmentation

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/dpt/modeling_dpt.py#L1220)

( config )

Parameters

- **config** ([DPTConfig](/docs/transformers/v4.34.0/en/model_doc/dpt#transformers.DPTConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

DPT Model with a semantic segmentation head on top, e.g. for ADE20k, CityScapes.

This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

#### forward

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/dpt/modeling_dpt.py#L1236)

( pixel\_values: typing.Optional\[torch.FloatTensor\] = None, head\_mask: typing.Optional\[torch.FloatTensor\] = None, labels: typing.Optional\[torch.LongTensor\] = None, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.SemanticSegmenterOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.SemanticSegmenterOutput) or `tuple(torch.FloatTensor)`

The [DPTForSemanticSegmentation](/docs/transformers/v4.34.0/en/model_doc/dpt#transformers.DPTForSemanticSegmentation) forward method overrides the `__call__` special method.

Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Examples:

```
>>> from transformers import AutoImageProcessor, DPTForSemanticSegmentation
>>> from PIL import Image
>>> import requests

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image_processor = AutoImageProcessor.from_pretrained("Intel/dpt-large-ade")
>>> model = DPTForSemanticSegmentation.from_pretrained("Intel/dpt-large-ade")

>>> inputs = image_processor(images=image, return_tensors="pt")

>>> outputs = model(**inputs)
>>> logits = outputs.logits
```
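The logits above are produced at a reduced spatial resolution. A minimal sketch of turning them into a per-pixel class map at the original image size, continuing from the variables defined in the example and using the `post_process_semantic_segmentation` method documented above:

```
>>> # post-process into a (height, width) map of ADE20k class ids at the original image resolution
>>> segmentation = image_processor.post_process_semantic_segmentation(
...     outputs, target_sizes=[image.size[::-1]]
... )[0]

>>> # map an individual pixel's class id to its label, e.g. the top-left pixel
>>> model.config.id2label[int(segmentation[0, 0])]
```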
Models&quot;,&quot;id&quot;:&quot;model_doc/encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encoder-decoder&quot;},{&quot;title&quot;:&quot;ERNIE&quot;,&quot;id&quot;:&quot;model_doc/ernie&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie&quot;},{&quot;title&quot;:&quot;ErnieM&quot;,&quot;id&quot;:&quot;model_doc/ernie_m&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie_m&quot;},{&quot;title&quot;:&quot;ESM&quot;,&quot;id&quot;:&quot;model_doc/esm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/esm&quot;},{&quot;title&quot;:&quot;Falcon&quot;,&quot;id&quot;:&quot;model_doc/falcon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/falcon&quot;},{&quot;title&quot;:&quot;FLAN-T5&quot;,&quot;id&quot;:&quot;model_doc/flan-t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-t5&quot;},{&quot;title&quot;:&quot;FLAN-UL2&quot;,&quot;id&quot;:&quot;model_doc/flan-ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-ul2&quot;},{&quot;title&quot;:&quot;FlauBERT&quot;,&quot;id&quot;:&quot;model_doc/flaubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flaubert&quot;},{&quot;title&quot;:&quot;FNet&quot;,&quot;id&quot;:&quot;model_doc/fnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fnet&quot;},{&quot;title&quot;:&quot;FSMT&quot;,&quot;id&quot;:&quot;model_doc/fsmt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fsmt&quot;},{&quot;title&quot;:&quot;Funnel Transformer&quot;,&quot;id&quot;:&quot;model_doc/funnel&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/funnel&quot;},{&quot;title&quot;:&quot;GPT&quot;,&quot;id&quot;:&quot;model_doc/openai-gpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/openai-gpt&quot;},{&quot;title&quot;:&quot;GPT Neo&quot;,&quot;id&quot;:&quot;model_doc/gpt_neo&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neo&quot;},{&quot;title&quot;:&quot;GPT NeoX&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox&quot;},{&quot;title&quot;:&quot;GPT NeoX Japanese&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox_japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese&quot;},{&quot;title&quot;:&quot;GPT-J&quot;,&quot;id&quot;:&quot;model_doc/gptj&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptj&quot;},{&quot;title&quot;:&quot;GPT2&quot;,&quot;id&quot;:&quot;model_doc/gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt2&quot;},{&quot;title&quot;:&quot;GPTBigCode&quot;,&quot;id&quot;:&quot;model_doc/gpt_bigcode&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode&quot;},{&quot;title&quot;:&quot;GPTSAN 
Japanese&quot;,&quot;id&quot;:&quot;model_doc/gptsan-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese&quot;},{&quot;title&quot;:&quot;GPTSw3&quot;,&quot;id&quot;:&quot;model_doc/gpt-sw3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt-sw3&quot;},{&quot;title&quot;:&quot;HerBERT&quot;,&quot;id&quot;:&quot;model_doc/herbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/herbert&quot;},{&quot;title&quot;:&quot;I-BERT&quot;,&quot;id&quot;:&quot;model_doc/ibert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ibert&quot;},{&quot;title&quot;:&quot;Jukebox&quot;,&quot;id&quot;:&quot;model_doc/jukebox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/jukebox&quot;},{&quot;title&quot;:&quot;LED&quot;,&quot;id&quot;:&quot;model_doc/led&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/led&quot;},{&quot;title&quot;:&quot;LLaMA&quot;,&quot;id&quot;:&quot;model_doc/llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama&quot;},{&quot;title&quot;:&quot;Llama2&quot;,&quot;id&quot;:&quot;model_doc/llama2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama2&quot;},{&quot;title&quot;:&quot;Longformer&quot;,&quot;id&quot;:&quot;model_doc/longformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longformer&quot;},{&quot;title&quot;:&quot;LongT5&quot;,&quot;id&quot;:&quot;model_doc/longt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longt5&quot;},{&quot;title&quot;:&quot;LUKE&quot;,&quot;id&quot;:&quot;model_doc/luke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/luke&quot;},{&quot;title&quot;:&quot;M2M100&quot;,&quot;id&quot;:&quot;model_doc/m2m_100&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/m2m_100&quot;},{&quot;title&quot;:&quot;MarianMT&quot;,&quot;id&quot;:&quot;model_doc/marian&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/marian&quot;},{&quot;title&quot;:&quot;MarkupLM&quot;,&quot;id&quot;:&quot;model_doc/markuplm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/markuplm&quot;},{&quot;title&quot;:&quot;MBart and 
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot;PLBart&quot;,&quot;id&quot;
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v
4.34.0/en/model_doc/poolformer&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/
model_doc/wavlm&quot;},{&quot;title&quot;:&quot;Whisper&quot;,&quot;id&quot;:&quot;model_doc/whisper&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/whisper&quot;},{&quot;title&quot;:&quot;XLS-R&quot;,&quot;id&quot;:&quot;model_doc/xls_r&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xls_r&quot;},{&quot;title&quot;:&quot;XLSR-Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/xlsr_wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2&quot;}]},{&quot;title&quot;:&quot;Multimodal models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALIGN&quot;,&quot;id&quot;:&quot;model_doc/align&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/align&quot;},{&quot;title&quot;:&quot;AltCLIP&quot;,&quot;id&quot;:&quot;model_doc/altclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/altclip&quot;},{&quot;title&quot;:&quot;BLIP&quot;,&quot;id&quot;:&quot;model_doc/blip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip&quot;},{&quot;title&quot;:&quot;BLIP-2&quot;,&quot;id&quot;:&quot;model_doc/blip-2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip-2&quot;},{&quot;title&quot;:&quot;BridgeTower&quot;,&quot;id&quot;:&quot;model_doc/bridgetower&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bridgetower&quot;},{&quot;title&quot;:&quot;BROS&quot;,&quot;id&quot;:&quot;model_doc/bros&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bros&quot;},{&quot;title&quot;:&quot;Chinese-CLIP&quot;,&quot;id&quot;:&quot;model_doc/chinese_clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/chinese_clip&quot;},{&quot;title&quot;:&quot;CLIP&quot;,&quot;id&quot;:&quot;model_doc/clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clip&quot;},{&quot;title&quot;:&quot;CLIPSeg&quot;,&quot;id&quot;:&quot;model_doc/clipseg&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clipseg&quot;},{&quot;title&quot;:&quot;Data2Vec&quot;,&quot;id&quot;:&quot;model_doc/data2vec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/data2vec&quot;},{&quot;title&quot;:&quot;DePlot&quot;,&quot;id&quot;:&quot;model_doc/deplot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deplot&quot;},{&quot;title&quot;:&quot;Donut&quot;,&quot;id&quot;:&quot;model_doc/donut&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/donut&quot;},{&quot;title&quot;:&quot;FLAVA&quot;,&quot;id&quot;:&quot;model_doc/flava&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flava&quot;},{&quot;title&quot;:&quot;GIT&quot;,&quot;id&quot;:&quot;model_doc/git&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/git&quot;},{&quot;title&quot;:&quot;GroupViT&quot;,&quot;id&quot;:&quot;model_doc/groupvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/groupvit&quot;},{&quot;title&quot;:&quot;IDEFICS&quot;,&quot;id&quot;:&quot;model_doc/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/idefics&quot;},{&quot;title&quot;:&quot;InstructBLIP&quot;,&quot;id&quot;:&quot;model_doc/instructblip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/instructblip&quot;},{&quot;title&quot;:&quot;LayoutLM&quot;,&quot;id&quot;:&quot;model_doc/layoutlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlm&quot;},{&quot;title&quot;:&quot;LayoutLMV2&quot;,&quot;i
d&quot;:&quot;model_doc/layoutlmv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv2&quot;},{&quot;title&quot;:&quot;LayoutLMV3&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv3&quot;},{&quot;title&quot;:&quot;LayoutXLM&quot;,&quot;id&quot;:&quot;model_doc/layoutxlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutxlm&quot;},{&quot;title&quot;:&quot;LiLT&quot;,&quot;id&quot;:&quot;model_doc/lilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lilt&quot;},{&quot;title&quot;:&quot;LXMERT&quot;,&quot;id&quot;:&quot;model_doc/lxmert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lxmert&quot;},{&quot;title&quot;:&quot;MatCha&quot;,&quot;id&quot;:&quot;model_doc/matcha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/matcha&quot;},{&quot;title&quot;:&quot;MGP-STR&quot;,&quot;id&quot;:&quot;model_doc/mgp-str&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mgp-str&quot;},{&quot;title&quot;:&quot;Nougat&quot;,&quot;id&quot;:&quot;model_doc/nougat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nougat&quot;},{&quot;title&quot;:&quot;OneFormer&quot;,&quot;id&quot;:&quot;model_doc/oneformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/oneformer&quot;},{&quot;title&quot;:&quot;OWL-ViT&quot;,&quot;id&quot;:&quot;model_doc/owlvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/owlvit&quot;},{&quot;title&quot;:&quot;Perceiver&quot;,&quot;id&quot;:&quot;model_doc/perceiver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/perceiver&quot;},{&quot;title&quot;:&quot;Pix2Struct&quot;,&quot;id&quot;:&quot;model_doc/pix2struct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pix2struct&quot;},{&quot;title&quot;:&quot;Segment Anything&quot;,&quot;id&quot;:&quot;model_doc/sam&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sam&quot;},{&quot;title&quot;:&quot;Speech Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/speech-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder&quot;},{&quot;title&quot;:&quot;TAPAS&quot;,&quot;id&quot;:&quot;model_doc/tapas&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapas&quot;},{&quot;title&quot;:&quot;TrOCR&quot;,&quot;id&quot;:&quot;model_doc/trocr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trocr&quot;},{&quot;title&quot;:&quot;TVLT&quot;,&quot;id&quot;:&quot;model_doc/tvlt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tvlt&quot;},{&quot;title&quot;:&quot;ViLT&quot;,&quot;id&quot;:&quot;model_doc/vilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vilt&quot;},{&quot;title&quot;:&quot;Vision Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/vision-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder&quot;},{&quot;title&quot;:&quot;Vision Text Dual 
Encoder&quot;,&quot;id&quot;:&quot;model_doc/vision-text-dual-encoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder&quot;},{&quot;title&quot;:&quot;VisualBERT&quot;,&quot;id&quot;:&quot;model_doc/visual_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/visual_bert&quot;},{&quot;title&quot;:&quot;X-CLIP&quot;,&quot;id&quot;:&quot;model_doc/xclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xclip&quot;}]},{&quot;title&quot;:&quot;Reinforcement learning models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Decision Transformer&quot;,&quot;id&quot;:&quot;model_doc/decision_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/decision_transformer&quot;},{&quot;title&quot;:&quot;Trajectory Transformer&quot;,&quot;id&quot;:&quot;model_doc/trajectory_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer&quot;}]},{&quot;title&quot;:&quot;Time series models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Autoformer&quot;,&quot;id&quot;:&quot;model_doc/autoformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/autoformer&quot;},{&quot;title&quot;:&quot;Informer&quot;,&quot;id&quot;:&quot;model_doc/informer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/informer&quot;},{&quot;title&quot;:&quot;Time Series Transformer&quot;,&quot;id&quot;:&quot;model_doc/time_series_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/time_series_transformer&quot;}]},{&quot;title&quot;:&quot;Graph models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Graphormer&quot;,&quot;id&quot;:&quot;model_doc/graphormer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/graphormer&quot;}]}]},{&quot;title&quot;:&quot;Internal Helpers&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Custom Layers and Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/modeling_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/modeling_utils&quot;},{&quot;title&quot;:&quot;Utilities for pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/pipelines_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/pipelines_utils&quot;},{&quot;title&quot;:&quot;Utilities for Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/tokenization_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/tokenization_utils&quot;},{&quot;title&quot;:&quot;Utilities for Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/trainer_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/trainer_utils&quot;},{&quot;title&quot;:&quot;Utilities for Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/generation_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/generation_utils&quot;},{&quot;title&quot;:&quot;Utilities for Image Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/image_processing_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/image_processing_utils&quot;},{&quot;title&quot;:&quot;Utilities for Audio 
processing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/audio_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/audio_utils&quot;},{&quot;title&quot;:&quot;General Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/file_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/file_utils&quot;},{&quot;title&quot;:&quot;Utilities for Time Series&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/time_series_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/time_series_utils&quot;}]}]}],&quot;chapterId&quot;:&quot;model_doc/dpt&quot;,&quot;docType&quot;:&quot;docs&quot;,&quot;isLoggedIn&quot;:false,&quot;lang&quot;:&quot;en&quot;,&quot;langs&quot;:[&quot;de&quot;,&quot;en&quot;,&quot;es&quot;,&quot;fr&quot;,&quot;it&quot;,&quot;ko&quot;,&quot;pt&quot;,&quot;zh&quot;],&quot;library&quot;:&quot;transformers&quot;,&quot;theme&quot;:&quot;light&quot;,&quot;version&quot;:&quot;v4.34.0&quot;,&quot;versions&quot;:[{&quot;version&quot;:&quot;main&quot;},{&quot;version&quot;:&quot;v4.34.0&quot;},{&quot;version&quot;:&quot;v4.33.3&quot;},{&quot;version&quot;:&quot;v4.33.2&quot;},{&quot;version&quot;:&quot;v4.33.0&quot;},{&quot;version&quot;:&quot;v4.32.1&quot;},{&quot;version&quot;:&quot;v4.32.0&quot;},{&quot;version&quot;:&quot;v4.31.0&quot;},{&quot;version&quot;:&quot;v4.30.0&quot;},{&quot;version&quot;:&quot;v4.29.1&quot;},{&quot;version&quot;:&quot;v4.29.0&quot;},{&quot;version&quot;:&quot;v4.28.1&quot;},{&quot;version&quot;:&quot;v4.28.0&quot;},{&quot;version&quot;:&quot;v4.27.2&quot;},{&quot;version&quot;:&quot;v4.27.1&quot;},{&quot;version&quot;:&quot;v4.27.0&quot;},{&quot;version&quot;:&quot;v4.26.1&quot;},{&quot;version&quot;:&quot;v4.26.0&quot;},{&quot;version&quot;:&quot;v4.25.1&quot;},{&quot;version&quot;:&quot;v4.24.0&quot;},{&quot;version&quot;:&quot;v4.23.1&quot;},{&quot;version&quot;:&quot;v4.23.0&quot;},{&quot;version&quot;:&quot;v4.22.2&quot;},{&quot;version&quot;:&quot;v4.22.1&quot;},{&quot;version&quot;:&quot;v4.22.0&quot;},{&quot;version&quot;:&quot;v4.21.3&quot;},{&quot;version&quot;:&quot;v4.21.2&quot;},{&quot;version&quot;:&quot;v4.21.1&quot;},{&quot;version&quot;:&quot;v4.21.0&quot;},{&quot;version&quot;:&quot;v4.20.1&quot;},{&quot;version&quot;:&quot;v4.20.0&quot;},{&quot;version&quot;:&quot;v4.19.4&quot;},{&quot;version&quot;:&quot;v4.19.3&quot;},{&quot;version&quot;:&quot;v4.19.2&quot;},{&quot;version&quot;:&quot;v4.19.0&quot;},{&quot;version&quot;:&quot;v4.18.0&quot;},{&quot;version&quot;:&quot;v4.17.0&quot;},{&quot;version&quot;:&quot;v4.16.2&quot;},{&quot;version&quot;:&quot;v4.16.1&quot;},{&quot;version&quot;:&quot;v4.16.0&quot;},{&quot;version&quot;:&quot;v4.15.0&quot;},{&quot;version&quot;:&quot;v4.14.1&quot;},{&quot;version&quot;:&quot;v4.13.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.5&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.4&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.10.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.10.
0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.10.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.0.0&quot;},{&quot;version&quot;:&quot;doc-bui
lder-html&quot;}],&quot;title&quot;:&quot;DPT&quot;}" data-target="SideMenu"> <div class="z-2 w-full flex-none lg:block lg:h-screen lg:w-[270px] 2xl:w-[300px] false"><div class="shadow-alternate flex h-16 w-full items-center rounded-b-xl border-b bg-white text-lg leading-tight lg:hidden"><div class="flex flex-1 cursor-pointer flex-col justify-center self-stretch pl-6"><p class="text-sm text-gray-400 first-letter:capitalize">Transformers documentation</p> <div class="flex items-center"><p class="font-semibold">DPT</p> <svg class="text-xl false" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></div></div> <button class="hover:shadow-alternate group ml-auto mr-6 inline-flex flex-none cursor-pointer rounded-xl border p-2"><svg class="text-gray-500 group-hover:text-gray-700" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg></button></div> <div class="hidden h-32 flex-col justify-between border-r border-b bg-white bg-gradient-to-r p-4 lg:flex from-orange-50 to-white dark:from-gray-900 dark:to-gray-950"><div class="relative "><button class=" " type="button"><h1 class="flex items-center text-lg font-bold leading-tight first-letter:capitalize"><div class="mr-1.5 h-1.5 w-1.5 rounded-full bg-orange-500 flex-none"></div> Transformers <span><svg class="opacity-70 " xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></span></h1> </button> </div> <button class="shadow-alternate flex w-full items-center rounded-full border bg-white px-2 py-1 text-left text-sm text-gray-400 ring-indigo-200 hover:bg-indigo-50 hover:ring-2 dark:border-gray-700 dark:ring-yellow-600 dark:hover:bg-gray-900 dark:hover:text-yellow-500"><svg class="flex-none mr-1.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg> <div>Search documentation</div> <span class="ml-auto rounded border border-gray-200 bg-gray-100 px-0.5 text-xs dark:border-gray-800 dark:bg-gray-800"><kbd class="font-sans">⌘K</kbd></span></button> <div class="flex items-center"><select class="form-input mr-1 !mt-0 !w-20 rounded !border border-gray-200 p-1 text-xs uppercase dark:!text-gray-400"><option value="0">main</option><option value="1">v4.34.0</option><option value="2">v4.33.3</option><option value="3">v4.32.1</option><option value="4">v4.31.0</option><option value="5">v4.30.0</option><option value="6">v4.29.1</option><option value="7">v4.28.1</option><option value="8">v4.27.2</option><option value="9">v4.26.1</option><option value="10">v4.25.1</option><option value="11">v4.24.0</option><option 
value="12">v4.23.1</option><option value="13">v4.22.2</option><option value="14">v4.21.3</option><option value="15">v4.20.1</option><option value="16">v4.19.4</option><option value="17">v4.18.0</option><option value="18">v4.17.0</option><option value="19">v4.16.2</option><option value="20">v4.15.0</option><option value="21">v4.14.1</option><option value="22">v4.13.0</option><option value="23">v4.12.5</option><option value="24">v4.11.3</option><option value="25">v4.10.1</option><option value="26">v4.9.2</option><option value="27">v4.8.2</option><option value="28">v4.7.0</option><option value="29">v4.6.0</option><option value="30">v4.5.1</option><option value="31">v4.4.2</option><option value="32">v4.3.3</option><option value="33">v4.2.2</option><option value="34">v4.1.1</option><option value="35">v4.0.1</option><option value="36">v3.5.1</option><option value="37">v3.4.0</option><option value="38">v3.3.1</option><option value="39">v3.2.0</option><option value="40">v3.1.0</option><option value="41">v3.0.2</option><option value="42">v2.11.0</option><option value="43">v2.10.0</option><option value="44">v2.9.1</option><option value="45">v2.8.0</option><option value="46">v2.7.0</option><option value="47">v2.6.0</option><option value="48">v2.5.1</option><option value="49">v2.4.1</option><option value="50">v2.3.0</option><option value="51">v2.2.2</option><option value="52">v2.1.1</option><option value="53">v2.0.0</option><option value="54">v1.2.0</option><option value="55">v1.1.0</option><option value="56">v1.0.0</option><option value="57">doc-builder-html</option></select> <select class="form-input mr-1 rounded border-gray-200 p-1 text-xs dark:!text-gray-400 !w-12 !mt-0 !border"><option value="de">DE</option><option value="en">EN</option><option value="es">ES</option><option value="fr">FR</option><option value="it">IT</option><option value="ko">KO</option><option value="pt">PT</option><option value="zh">ZH</option></select> <div class="relative inline-block"><button class="rounded-full border border-gray-100 py-1 pl-2 pr-0.5 flex items-center text-sm text-gray-500 bg-white hover:bg-yellow-50 hover:border-yellow-200 dark:hover:bg-gray-800 dark:hover:border-gray-950 " type="button"><svg class="mr-1.5 text-yellow-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24" fill="currentColor"><path d="M6.05 4.14l-.39-.39a.993.993 0 0 0-1.4 0l-.01.01a.984.984 0 0 0 0 1.4l.39.39c.39.39 1.01.39 1.4 0l.01-.01a.984.984 0 0 0 0-1.4zM3.01 10.5H1.99c-.55 0-.99.44-.99.99v.01c0 .55.44.99.99.99H3c.56.01 1-.43 1-.98v-.01c0-.56-.44-1-.99-1zm9-9.95H12c-.56 0-1 .44-1 .99v.96c0 .55.44.99.99.99H12c.56.01 1-.43 1-.98v-.97c0-.55-.44-.99-.99-.99zm7.74 3.21c-.39-.39-1.02-.39-1.41-.01l-.39.39a.984.984 0 0 0 0 1.4l.01.01c.39.39 1.02.39 1.4 0l.39-.39a.984.984 0 0 0 0-1.4zm-1.81 15.1l.39.39a.996.996 0 1 0 1.41-1.41l-.39-.39a.993.993 0 0 0-1.4 0c-.4.4-.4 1.02-.01 1.41zM20 11.49v.01c0 .55.44.99.99.99H22c.55 0 .99-.44.99-.99v-.01c0-.55-.44-.99-.99-.99h-1.01c-.55 0-.99.44-.99.99zM12 5.5c-3.31 0-6 2.69-6 6s2.69 6 6 6s6-2.69 6-6s-2.69-6-6-6zm-.01 16.95H12c.55 0 .99-.44.99-.99v-.96c0-.55-.44-.99-.99-.99h-.01c-.55 0-.99.44-.99.99v.96c0 .55.44.99.99.99zm-7.74-3.21c.39.39 1.02.39 1.41 0l.39-.39a.993.993 0 0 0 0-1.4l-.01-.01a.996.996 0 0 0-1.41 0l-.39.39c-.38.4-.38 1.02.01 1.41z"></path></svg> </button> </div> <a href="https://github.com/huggingface/transformers" class="group ml-auto 
text-xs text-gray-500 hover:text-gray-700 hover:underline dark:hover:text-gray-300"><svg class="inline-block text-gray-500 group-hover:text-gray-700 dark:group-hover:text-gray-300 mr-1.5 -mt-1 w-4 h-4" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1.03em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 250"><path d="M128.001 0C57.317 0 0 57.307 0 128.001c0 56.554 36.676 104.535 87.535 121.46c6.397 1.185 8.746-2.777 8.746-6.158c0-3.052-.12-13.135-.174-23.83c-35.61 7.742-43.124-15.103-43.124-15.103c-5.823-14.795-14.213-18.73-14.213-18.73c-11.613-7.944.876-7.78.876-7.78c12.853.902 19.621 13.19 19.621 13.19c11.417 19.568 29.945 13.911 37.249 10.64c1.149-8.272 4.466-13.92 8.127-17.116c-28.431-3.236-58.318-14.212-58.318-63.258c0-13.975 5-25.394 13.188-34.358c-1.329-3.224-5.71-16.242 1.24-33.874c0 0 10.749-3.44 35.21 13.121c10.21-2.836 21.16-4.258 32.038-4.307c10.878.049 21.837 1.47 32.066 4.307c24.431-16.56 35.165-13.12 35.165-13.12c6.967 17.63 2.584 30.65 1.255 33.873c8.207 8.964 13.173 20.383 13.173 34.358c0 49.163-29.944 59.988-58.447 63.157c4.591 3.972 8.682 11.762 8.682 23.704c0 17.126-.148 30.91-.148 35.126c0 3.407 2.304 7.398 8.792 6.14C219.37 232.5 256 184.537 256 128.002C256 57.307 198.691 0 128.001 0zm-80.06 182.34c-.282.636-1.283.827-2.194.39c-.929-.417-1.45-1.284-1.15-1.922c.276-.655 1.279-.838 2.205-.399c.93.418 1.46 1.293 1.139 1.931zm6.296 5.618c-.61.566-1.804.303-2.614-.591c-.837-.892-.994-2.086-.375-2.66c.63-.566 1.787-.301 2.626.591c.838.903 1 2.088.363 2.66zm4.32 7.188c-.785.545-2.067.034-2.86-1.104c-.784-1.138-.784-2.503.017-3.05c.795-.547 2.058-.055 2.861 1.075c.782 1.157.782 2.522-.019 3.08zm7.304 8.325c-.701.774-2.196.566-3.29-.49c-1.119-1.032-1.43-2.496-.726-3.27c.71-.776 2.213-.558 3.315.49c1.11 1.03 1.45 2.505.701 3.27zm9.442 2.81c-.31 1.003-1.75 1.459-3.199 1.033c-1.448-.439-2.395-1.613-2.103-2.626c.301-1.01 1.747-1.484 3.207-1.028c1.446.436 2.396 1.602 2.095 2.622zm10.744 1.193c.036 1.055-1.193 1.93-2.715 1.95c-1.53.034-2.769-.82-2.786-1.86c0-1.065 1.202-1.932 2.733-1.958c1.522-.03 2.768.818 2.768 1.868zm10.555-.405c.182 1.03-.875 2.088-2.387 2.37c-1.485.271-2.861-.365-3.05-1.386c-.184-1.056.893-2.114 2.376-2.387c1.514-.263 2.868.356 3.061 1.403z" fill="currentColor"></path></svg> 112,792</a></div></div> <nav class="top-32 hidden lg:flex absolute bottom-0 left-0 w-full flex-col overflow-y-auto border-r px-4 pt-3 pb-16 text-[0.95rem] lg:w-[270px] 2xl:w-[300px]"> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Get started</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/index">🤗 Transformers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/quicktour">Quick tour </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black 
dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/installation">Installation </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Tutorials</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pipeline_tutorial">Run inference with pipelines </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/autoclass_tutorial">Write portable code with AutoClass </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/preprocessing">Preprocess data </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/training">Fine-tune a pretrained model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/run_scripts">Train with a script </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/accelerate">Set up distributed training with 🤗 Accelerate </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/peft">Load and train adapters with 🤗 PEFT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/model_sharing">Share your model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/transformers_agents">Agents </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/llm_tutorial">Generation with LLMs </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Task Guides</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div 
class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Natural Language Processing</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Computer Vision</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Generation</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Prompting</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Developer guides</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/fast_tokenizers">Use fast tokenizers from 🤗 Tokenizers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/multilingual">Run inference with multilingual models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/create_a_model">Use model-specific APIs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/custom_models">Share a custom model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 
last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/chat_templating">Templates for chat models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/sagemaker">Run training on Amazon SageMaker </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/serialization">Export to ONNX </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tflite">Export to TFLite </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/torchscript">Export to TorchScript </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/benchmarks">Benchmarks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/notebooks">Notebooks with examples </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/community">Community resources </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/custom_tools">Custom Tools and Prompts </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/troubleshooting">Troubleshoot </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Performance and scalability</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/performance">Overview </a><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Efficient training techniques</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 
ml-4" href="/docs/transformers/v4.34.0/en/perf_train_gpu_one">Methods and tools for efficient training on a single GPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_gpu_many">Multiple GPUs and parallelism </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_cpu">Efficient training on CPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_cpu_many">Distributed CPU training </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_tpu">Training on TPUs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_tpu_tf">Training on TPU with TensorFlow </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_special">Training on Specialized Hardware </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_hardware">Custom hardware for training </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/hpo_train">Hyperparameter Search using Trainer API </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Optimizing inference</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_cpu">Inference on CPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_gpu_one">Inference on one GPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_gpu_many">Inference on many GPUs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_special">Inference on Specialized Hardware </a> 
</div><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/big_models">Instantiating a big model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/debugging">Troubleshooting </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tf_xla">XLA Integration for TensorFlow Models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/perf_torch_compile">Optimize inference using `torch.compile()` </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Contribute</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/contributing">How to contribute to transformers? </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/add_new_model">How to add a model to 🤗 Transformers? </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/add_tensorflow_model">How to convert a 🤗 Transformers model to TensorFlow? </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/add_new_pipeline">How to add a pipeline to 🤗 Transformers? 
</a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/testing">Testing </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pr_checks">Checks on a Pull Request </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Conceptual guides</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/philosophy">Philosophy </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/glossary">Glossary </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/task_summary">What 🤗 Transformers can do </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tasks_explained">How 🤗 Transformers solve tasks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/model_summary">The Transformer model family </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tokenizer_summary">Summary of the tokenizers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/attention">Attention mechanisms </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pad_truncation">Padding and truncation </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/bertology">BERTology </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/perplexity">Perplexity of fixed-length models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pipeline_webserver">Pipelines for webserver inference </a><a 
data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/model_memory_anatomy">Model training anatomy </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>API</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Main Classes</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/agent">Agents and Tools </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/model_doc/auto">Auto Classes </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/callback">Callbacks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/configuration">Configuration </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/data_collator">Data Collator </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/keras_callbacks">Keras callbacks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/logging">Logging </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/model">Models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/text_generation">Text Generation </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/onnx">ONNX </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 
first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/optimizer_schedules">Optimization </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/output">Model outputs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/pipelines">Pipelines </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/processors">Processors </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/quantization">Quantization </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/tokenizer">Tokenizer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/trainer">Trainer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/deepspeed">DeepSpeed Integration </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/feature_extractor">Feature Extractor </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/image_processor">Image Processor </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Models</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Text models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Vision models</span> </span></span></div></div> <div 
class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/beit">BEiT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bit">BiT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/conditional_detr">Conditional DETR </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/convnext">ConvNeXT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/convnextv2">ConvNeXTV2 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/cvt">CvT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/deformable_detr">Deformable DETR </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/deit">DeiT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/deta">DETA </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/detr">DETR </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/dinat">DiNAT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/dinov2">DINO V2 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/dit">DiT </a><a data-sveltekit-reload="" class="rounded-xl bg-gradient-to-br from-black to-gray-900 py-1 pr-2 pl-2 text-white first:mt-1 last:mb-4 dark:from-gray-800 dark:to-gray-900 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/dpt">DPT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/efficientformer">EfficientFormer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 
pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/efficientnet">EfficientNet </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/focalnet">FocalNet </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/glpn">GLPN </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/imagegpt">ImageGPT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/levit">LeViT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mask2former">Mask2Former </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/maskformer">MaskFormer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1">MobileNetV1 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2">MobileNetV2 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mobilevit">MobileViT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mobilevitv2">MobileViTV2 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/nat">NAT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/poolformer">PoolFormer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/pvt">Pyramid Vision Transformer (PVT) </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/regnet">RegNet </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 
first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/resnet">ResNet </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/segformer">SegFormer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/swiftformer">SwiftFormer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/swin">Swin Transformer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/swinv2">Swin Transformer V2 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/swin2sr">Swin2SR </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/table-transformer">Table Transformer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/timesformer">TimeSformer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/upernet">UperNet </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/van">VAN </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/videomae">VideoMAE </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/vit">Vision Transformer (ViT) </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/vit_hybrid">ViT Hybrid </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/vitdet">ViTDet </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/vit_mae">ViTMAE </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 
hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/vitmatte">ViTMatte </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/vit_msn">ViTMSN </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/vivit">ViViT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/yolos">YOLOS </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Reinforcement learning models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Time series models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Graph models</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Internal Helpers</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/modeling_utils">Custom Layers and Utilities </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" 
href="/docs/transformers/v4.34.0/en/internal/pipelines_utils">Utilities for pipelines </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/tokenization_utils">Utilities for Tokenizers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/trainer_utils">Utilities for Trainer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/generation_utils">Utilities for Generation </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/image_processing_utils">Utilities for Image Processors </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/audio_utils">Utilities for Audio processing </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/file_utils">General Utilities </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/time_series_utils">Utilities for Time Series </a> </div> </div></nav></div></div></div> <div class="z-1 min-w-0 flex-1"> <div class="px-6 pt-6 md:px-12 md:pt-16 md:pb-16"><div class="max-w-4xl mx-auto mb-10"><div class="relative overflow-hidden rounded-xl bg-gradient-to-br from-orange-300/10 py-5 px-4 ring-1 ring-orange-100/70 md:px-6 md:py-8"><img alt="Hugging Face's logo" class="absolute -right-6 -bottom-6 w-28 -rotate-45 md:hidden" src="/front/assets/huggingface_logo-noborder.svg"> <div class="mb-2 text-2xl font-bold dark:text-gray-200 md:mb-0">Join the Hugging Face community</div> <p class="mb-4 text-lg text-gray-400 dark:text-gray-300 md:mb-8">and get access to the augmented documentation experience </p> <div class="mb-8 hidden space-y-4 md:block xl:flex xl:space-y-0 xl:space-x-6"><div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-indigo-100 to-indigo-100/20 dark:to-indigo-100"><svg class="text-indigo-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 
3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Collaborate on models, datasets and Spaces </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-orange-100 to-orange-100/20 dark:to-orange-50"><svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" class="text-xl text-yellow-400" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M11 15H6l7-14v8h5l-7 14v-8z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Faster examples with accelerated inference </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-gray-500/10 to-gray-500/5"><svg class="text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M14.9804 3C14.9217 3.0002 14.8631 3.00555 14.8054 3.016C11.622 3.58252 8.76073 5.30669 6.77248 7.85653C4.78422 10.4064 3.80955 13.6016 4.03612 16.8271C4.26268 20.0525 5.67447 23.0801 7.99967 25.327C10.3249 27.5738 13.3991 28.8811 16.6304 28.997C16.7944 29.003 16.9584 28.997 17.1204 28.997C19.2193 28.9984 21.2877 28.4943 23.1507 27.5274C25.0137 26.5605 26.6164 25.1592 27.8234 23.442C27.9212 23.294 27.9783 23.1229 27.9889 22.9458C27.9995 22.7687 27.9633 22.592 27.884 22.4333C27.8046 22.2747 27.6848 22.1397 27.5367 22.0421C27.3887 21.9444 27.2175 21.8875 27.0404 21.877C25.0426 21.7017 23.112 21.0693 21.3976 20.0288C19.6832 18.9884 18.231 17.5676 17.1533 15.8764C16.0756 14.1852 15.4011 12.2688 15.1822 10.2754C14.9632 8.28193 15.2055 6.26484 15.8904 4.38C15.9486 4.22913 15.97 4.06652 15.9527 3.90572C15.9354 3.74492 15.8799 3.59059 15.7909 3.45557C15.7019 3.32055 15.5819 3.20877 15.4409 3.12952C15.2999 3.05028 15.142 3.00587 14.9804 3Z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Switch between documentation themes </div></div></div> <div class="flex items-center space-x-2.5"><a href="/join"><button class="rounded-lg bg-white bg-gradient-to-br from-gray-100/20 to-gray-200/60 py-1.5 px-5 font-semibold text-gray-700 shadow-sm ring-1 ring-gray-300/60 hover:to-gray-100/70 hover:ring-gray-300/30 active:shadow-inner">Sign Up</button></a> <p class="text-gray-500 dark:text-gray-300">to get started</p></div></div></div> <div class="prose-doc prose relative mx-auto max-w-4xl break-words"> <p></p> <h1 class="relative group"><a id="dpt" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#dpt"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 
# DPT

## Overview

The DPT model was proposed in [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by René Ranftl, Alexey Bochkovskiy, Vladlen Koltun. DPT is a model that leverages the [Vision Transformer (ViT)](vit) as backbone for dense prediction tasks like semantic segmentation and depth estimation.

The abstract from the paper is the following:

_We introduce dense vision transformers, an architecture that leverages vision transformers in place of convolutional networks as a backbone for dense prediction tasks. We assemble tokens from various stages of the vision transformer into image-like representations at various resolutions and progressively combine them into full-resolution predictions using a convolutional decoder. The transformer backbone processes representations at a constant and relatively high resolution and has a global receptive field at every stage. These properties allow the dense vision transformer to provide finer-grained and more globally coherent predictions when compared to fully-convolutional networks. Our experiments show that this architecture yields substantial improvements on dense prediction tasks, especially when a large amount of training data is available. For monocular depth estimation, we observe an improvement of up to 28% in relative performance when compared to a state-of-the-art fully-convolutional network. When applied to semantic segmentation, dense vision transformers set a new state of the art on ADE20K with 49.02% mIoU. We further show that the architecture can be fine-tuned on smaller datasets such as NYUv2, KITTI, and Pascal Context where it also sets the new state of the art._

![drawing](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/dpt_architecture.jpg)

DPT architecture. Taken from the [original paper](https://arxiv.org/abs/2103.13413).

This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/isl-org/DPT).

## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DPT.

- Demo notebooks for [DPTForDepthEstimation](/docs/transformers/v4.34.0/en/model_doc/dpt#transformers.DPTForDepthEstimation) can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/DPT).
- [Semantic segmentation task guide](../tasks/semantic_segmentation)
- [Monocular depth estimation task guide](../tasks/monocular_depth_estimation)

If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
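As a quick orientation for the depth estimation use case mentioned above, the following is a minimal inference sketch. The `Intel/dpt-large` checkpoint and the test image URL are illustrative assumptions rather than anything prescribed by this page; substitute your own checkpoint and image as needed.

```python
import torch
import requests
from PIL import Image
from transformers import DPTImageProcessor, DPTForDepthEstimation

# Load an example image (any RGB image works; this COCO URL is only for illustration).
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Load the processor and a pretrained depth estimation checkpoint (assumed: Intel/dpt-large).
processor = DPTImageProcessor.from_pretrained("Intel/dpt-large")
model = DPTForDepthEstimation.from_pretrained("Intel/dpt-large")

# Preprocess, run inference, and read out the predicted depth map.
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
    predicted_depth = outputs.predicted_depth  # shape: (batch_size, height, width)

# Upsample the prediction back to the original image resolution for visualization.
prediction = torch.nn.functional.interpolate(
    predicted_depth.unsqueeze(1),
    size=image.size[::-1],
    mode="bicubic",
    align_corners=False,
).squeeze()
```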
## DPTConfig
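Before the full parameter reference below, here is a minimal sketch of how the configuration is typically used: instantiate a `DPTConfig` (the defaults are those shown in the signature that follows) and build a randomly initialized model from it. The use of `DPTModel` here is an assumption for illustration; any DPT head class accepts the same configuration.

```python
from transformers import DPTConfig, DPTModel

# Initializing a DPT-style configuration with the default values shown below
configuration = DPTConfig()

# Initializing a model (with random weights) from that configuration
model = DPTModel(configuration)

# Accessing the model configuration
configuration = model.config
```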
href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/dpt/configuration_dpt.py#L32" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">hidden_size<span class="opacity-60"> = 768</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">num_hidden_layers<span class="opacity-60"> = 12</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">num_attention_heads<span class="opacity-60"> = 12</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">intermediate_size<span class="opacity-60"> = 3072</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">hidden_act<span class="opacity-60"> = 'gelu'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">hidden_dropout_prob<span class="opacity-60"> = 0.0</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_probs_dropout_prob<span class="opacity-60"> = 0.0</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">initializer_range<span class="opacity-60"> = 0.02</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">layer_norm_eps<span class="opacity-60"> = 1e-12</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">image_size<span class="opacity-60"> = 384</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">patch_size<span class="opacity-60"> = 16</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">num_channels<span class="opacity-60"> = 3</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">is_hybrid<span class="opacity-60"> = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">qkv_bias<span class="opacity-60"> = True</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">backbone_out_indices<span class="opacity-60"> = [2, 5, 8, 11]</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white 
dark:hover:text-black">readout_type<span class="opacity-60"> = 'project'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">reassemble_factors<span class="opacity-60"> = [4, 2, 1, 0.5]</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">neck_hidden_sizes<span class="opacity-60"> = [96, 192, 384, 768]</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">fusion_hidden_size<span class="opacity-60"> = 256</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_in_index<span class="opacity-60"> = -1</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">use_batch_norm_in_fusion_residual<span class="opacity-60"> = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">use_auxiliary_head<span class="opacity-60"> = True</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">auxiliary_loss_weight<span class="opacity-60"> = 0.4</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">semantic_loss_ignore_index<span class="opacity-60"> = 255</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">semantic_classifier_dropout<span class="opacity-60"> = 0.1</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">backbone_featmap_shape<span class="opacity-60"> = [1, 1024, 24, 24]</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">neck_ignore_stages<span class="opacity-60"> = [0, 1]</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">backbone_config<span class="opacity-60"> = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 28 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a 
id="transformers.DPTConfig.hidden_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTConfig.hidden_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>hidden_size</strong> (<code>int</code>, <em>optional</em>, defaults to 768) — Dimensionality of the encoder layers and the pooler layer.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTConfig.num_hidden_layers" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTConfig.num_hidden_layers"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>num_hidden_layers</strong> (<code>int</code>, <em>optional</em>, defaults to 12) — Number of hidden layers in the Transformer encoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTConfig.num_attention_heads" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTConfig.num_attention_heads"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 
11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>num_attention_heads</strong> (<code>int</code>, <em>optional</em>, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTConfig.intermediate_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTConfig.intermediate_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>intermediate_size</strong> (<code>int</code>, <em>optional</em>, defaults to 3072) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTConfig.hidden_act" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTConfig.hidden_act"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>hidden_act</strong> (<code>str</code> or <code>function</code>, <em>optional</em>, defaults to <code>"gelu"</code>) — The non-linear activation function (function or string) in the encoder and pooler. 
If string, <code>"gelu"</code>, <code>"relu"</code>, <code>"selu"</code> and <code>"gelu_new"</code> are supported.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTConfig.hidden_dropout_prob" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTConfig.hidden_dropout_prob"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>hidden_dropout_prob</strong> (<code>float</code>, <em>optional</em>, defaults to 0.1) — The dropout probabilitiy for all fully connected layers in the embeddings, encoder, and pooler.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTConfig.attention_probs_dropout_prob" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTConfig.attention_probs_dropout_prob"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_probs_dropout_prob</strong> (<code>float</code>, <em>optional</em>, defaults to 0.1) — The dropout ratio for the attention probabilities.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTConfig.initializer_range" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTConfig.initializer_range"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 
84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>initializer_range</strong> (<code>float</code>, <em>optional</em>, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTConfig.layer_norm_eps" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTConfig.layer_norm_eps"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>layer_norm_eps</strong> (<code>float</code>, <em>optional</em>, defaults to 1e-12) — The epsilon used by the layer normalization layers.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTConfig.image_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTConfig.image_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>image_size</strong> (<code>int</code>, <em>optional</em>, defaults to 384) — The size (resolution) of each image.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTConfig.patch_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" 
href="#transformers.DPTConfig.patch_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>patch_size</strong> (<code>int</code>, <em>optional</em>, defaults to 16) — The size (resolution) of each patch.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTConfig.num_channels" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTConfig.num_channels"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>num_channels</strong> (<code>int</code>, <em>optional</em>, defaults to 3) — The number of input channels.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTConfig.qkv_bias" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTConfig.qkv_bias"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>qkv_bias</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether to add a bias to the queries, keys and values.</span></span> </li><li class="text-base !pl-4 
my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTConfig.backbone_out_indices" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTConfig.backbone_out_indices"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>backbone_out_indices</strong> (<code>List[int]</code>, <em>optional</em>, defaults to <code>[2, 5, 8, 11]</code>) — Indices of the intermediate hidden states to use from backbone.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTConfig.readout_type" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTConfig.readout_type"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>readout_type</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"project"</code>) — The readout type to use when processing the readout token (CLS token) of the intermediate hidden states of the ViT backbone. 
Can be one of [<code>"ignore"</code>, <code>"add"</code>, <code>"project"</code>].<p></p> <ul> <li>“ignore” simply ignores the CLS token.</li> <li>“add” passes the information from the CLS token to all other tokens by adding the representations.</li> <li>“project” passes information to the other tokens by concatenating the readout to all other tokens before projecting the representation to the original feature dimension D using a linear layer followed by a GELU non-linearity.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTConfig.is_hybrid" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTConfig.is_hybrid"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>is_hybrid</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether to use a hybrid backbone. Useful in the context of loading DPT-Hybrid models.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTConfig.reassemble_factors" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTConfig.reassemble_factors"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>reassemble_factors</strong> (<code>List[int]</code>, <em>optional</em>, defaults to <code>[4, 2, 1, 0.5]</code>) — The up/downsampling factors of the reassemble layers.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTConfig.neck_hidden_sizes" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" 
href="#transformers.DPTConfig.neck_hidden_sizes"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>neck_hidden_sizes</strong> (<code>List[str]</code>, <em>optional</em>, defaults to [96, 192, 384, 768]) — The hidden sizes to project to for the feature maps of the backbone.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTConfig.fusion_hidden_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTConfig.fusion_hidden_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>fusion_hidden_size</strong> (<code>int</code>, <em>optional</em>, defaults to 256) — The number of channels before fusion.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTConfig.head_in_index" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTConfig.head_in_index"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>head_in_index</strong> (<code>int</code>, <em>optional</em>, defaults to -1) — The 
index of the features to use in the heads.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTConfig.use_batch_norm_in_fusion_residual" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTConfig.use_batch_norm_in_fusion_residual"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>use_batch_norm_in_fusion_residual</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether to use batch normalization in the pre-activate residual units of the fusion blocks.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTConfig.use_auxiliary_head" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTConfig.use_auxiliary_head"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>use_auxiliary_head</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether to use an auxiliary head during training.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTConfig.auxiliary_loss_weight" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTConfig.auxiliary_loss_weight"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 
1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>auxiliary_loss_weight</strong> (<code>float</code>, <em>optional</em>, defaults to 0.4) — Weight of the cross-entropy loss of the auxiliary head.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTConfig.semantic_loss_ignore_index" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTConfig.semantic_loss_ignore_index"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>semantic_loss_ignore_index</strong> (<code>int</code>, <em>optional</em>, defaults to 255) — The index that is ignored by the loss function of the semantic segmentation model.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTConfig.semantic_classifier_dropout" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTConfig.semantic_classifier_dropout"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>semantic_classifier_dropout</strong> (<code>float</code>, <em>optional</em>, defaults to 0.1) — The dropout ratio for the semantic classification head.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTConfig.backbone_featmap_shape" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 
with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTConfig.backbone_featmap_shape"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>backbone_featmap_shape</strong> (<code>List[int]</code>, <em>optional</em>, defaults to <code>[1, 1024, 24, 24]</code>) — Used only for the <code>hybrid</code> embedding type. The shape of the feature maps of the backbone.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTConfig.neck_ignore_stages" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTConfig.neck_ignore_stages"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>neck_ignore_stages</strong> (<code>List[int]</code>, <em>optional</em>, defaults to <code>[0, 1]</code>) — Used only for the <code>hybrid</code> embedding type. 
- **backbone_config** (`Union[Dict[str, Any], PretrainedConfig]`, *optional*) — Used only for the `hybrid` embedding type. The configuration of the backbone in a dictionary.

This is the configuration class to store the configuration of a [DPTModel](/docs/transformers/v4.34.0/en/model_doc/dpt#transformers.DPTModel). It is used to instantiate a DPT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the DPT [Intel/dpt-large](https://huggingface.co/Intel/dpt-large) architecture.

Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information.

Example:

```python
>>> from transformers import DPTModel, DPTConfig

>>> # Initializing a DPT dpt-large style configuration
>>> configuration = DPTConfig()

>>> # Initializing a model from the dpt-large style configuration
>>> model = DPTModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
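The defaults can also be overridden at construction time. The following sketch uses illustrative (non-default) values, chosen purely as an example, to show how the parameters documented above map onto a custom configuration:

```python
>>> from transformers import DPTModel, DPTConfig

>>> # Hypothetical customization: "add" readout and a narrower fusion width
>>> custom_config = DPTConfig(
...     readout_type="add",
...     fusion_hidden_size=128,
...     backbone_out_indices=[2, 5, 8, 11],
... )

>>> # Randomly initialized model following the custom configuration
>>> model = DPTModel(custom_config)
>>> model.config.readout_type
'add'
```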
#### to_dict

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/dpt/configuration_dpt.py#L220)

`( )`

Serializes this instance to a Python dictionary. Overrides the default [to_dict()](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig.to_dict).

Returns: `Dict[str, any]`: Dictionary of all the attributes that make up this configuration instance.
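A quick illustration of the serialization described above (a minimal sketch; the keys shown are examples, not an exhaustive list):

```python
>>> from transformers import DPTConfig

>>> config = DPTConfig()
>>> config_dict = config.to_dict()  # plain Python dictionary of all attributes
>>> config_dict["hidden_size"], config_dict["readout_type"]
(768, 'project')
```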
## DPTFeatureExtractor

### class transformers.DPTFeatureExtractor

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/dpt/feature_extraction_dpt.py#L26)

`( *args, **kwargs )`

#### `__call__`

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/image_processing_utils.py#L544)

`( images, **kwargs )`

Preprocess an image or a batch of images.
fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/image_processing_utils.py#L544" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">images<span class="opacity-60"></span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> </div></div> <p data-svelte-h="svelte-khengj">Preprocess an image or a batch of images.</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.DPTFeatureExtractor.post_process_semantic_segmentation"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>post_process_semantic_segmentation</span></h4> <a id="transformers.DPTFeatureExtractor.post_process_semantic_segmentation" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.DPTFeatureExtractor.post_process_semantic_segmentation"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 
#### post_process_semantic_segmentation

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/dpt/image_processing_dpt.py#L346)

( outputs, target_sizes: typing.List[typing.Tuple] = None ) → semantic_segmentation

Parameters:

- **outputs** ([DPTForSemanticSegmentation](/docs/transformers/v4.34.0/en/model_doc/dpt#transformers.DPTForSemanticSegmentation)) — Raw outputs of the model.
- **target_sizes** (`List[Tuple]` of length `batch_size`, *optional*) — List of tuples corresponding to the requested final size (height, width) of each prediction. If unset, predictions will not be resized.

Returns: semantic_segmentation — `List[torch.Tensor]` of length `batch_size`, where each item is a semantic segmentation map of shape (height, width) corresponding to the target_sizes entry (if `target_sizes` is specified). Each entry of each `torch.Tensor` corresponds to a semantic class id.

Converts the output of [DPTForSemanticSegmentation](/docs/transformers/v4.34.0/en/model_doc/dpt#transformers.DPTForSemanticSegmentation) into semantic segmentation maps. Only supports PyTorch.
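A rough sketch of turning raw segmentation outputs into per-pixel class ids (the checkpoint `Intel/dpt-large-ade` and the image path are assumptions made for illustration):

```python
import torch
from PIL import Image
from transformers import DPTFeatureExtractor, DPTForSemanticSegmentation

# Assumed semantic-segmentation checkpoint, used here only for illustration.
feature_extractor = DPTFeatureExtractor.from_pretrained("Intel/dpt-large-ade")
model = DPTForSemanticSegmentation.from_pretrained("Intel/dpt-large-ade")

image = Image.open("scene.jpg")  # hypothetical input image
inputs = feature_extractor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Upsample the logits to the original image size and take the class id per pixel.
# PIL reports (width, height), so reverse it to get (height, width).
segmentation_map = feature_extractor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]
print(segmentation_map.shape)  # (height, width); each entry is a semantic class id
```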
href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/dpt/image_processing_dpt.py#L94" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">do_resize<span class="opacity-60">: bool = True</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">size<span class="opacity-60">: typing.Dict[str, int] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">resample<span class="opacity-60">: Resampling = &lt;Resampling.BILINEAR: 2&gt;</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">keep_aspect_ratio<span class="opacity-60">: bool = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">ensure_multiple_of<span class="opacity-60">: int = 1</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">do_rescale<span class="opacity-60">: bool = True</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">rescale_factor<span class="opacity-60">: typing.Union[int, float] = 0.00392156862745098</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">do_normalize<span class="opacity-60">: bool = True</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">image_mean<span class="opacity-60">: typing.Union[float, typing.List[float], NoneType] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">image_std<span class="opacity-60">: typing.Union[float, typing.List[float], NoneType] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 10 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span 
class="group flex space-x-1.5 items-start"><a id="transformers.DPTImageProcessor.do_resize" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTImageProcessor.do_resize"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>do_resize</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether to resize the image’s (height, width) dimensions. Can be overidden by <code>do_resize</code> in <code>preprocess</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTImageProcessor.size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTImageProcessor.size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>size</strong> (<code>Dict[str, int]</code> <em>optional</em>, defaults to <code>{"height" -- 384, "width": 384}</code>): Size of the image after resizing. 
Can be overidden by <code>size</code> in <code>preprocess</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTImageProcessor.keep_aspect_ratio" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTImageProcessor.keep_aspect_ratio"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>keep_aspect_ratio</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — If <code>True</code>, the image is resized to the largest possible size such that the aspect ratio is preserved. Can be overidden by <code>keep_aspect_ratio</code> in <code>preprocess</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTImageProcessor.ensure_multiple_of" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTImageProcessor.ensure_multiple_of"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>ensure_multiple_of</strong> (<code>int</code>, <em>optional</em>, defaults to 1) — If <code>do_resize</code> is <code>True</code>, the image is resized to a size that is a multiple of this value. 
Can be overidden by <code>ensure_multiple_of</code> in <code>preprocess</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTImageProcessor.resample" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTImageProcessor.resample"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>resample</strong> (<code>PILImageResampling</code>, <em>optional</em>, defaults to <code>PILImageResampling.BILINEAR</code>) — Defines the resampling filter to use if resizing the image. Can be overidden by <code>resample</code> in <code>preprocess</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTImageProcessor.do_rescale" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTImageProcessor.do_rescale"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>do_rescale</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether to rescale the image by the specified scale <code>rescale_factor</code>. 
Can be overidden by <code>do_rescale</code> in <code>preprocess</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTImageProcessor.rescale_factor" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTImageProcessor.rescale_factor"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>rescale_factor</strong> (<code>int</code> or <code>float</code>, <em>optional</em>, defaults to <code>1/255</code>) — Scale factor to use if rescaling the image. Can be overidden by <code>rescale_factor</code> in <code>preprocess</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTImageProcessor.do_normalize" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTImageProcessor.do_normalize"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>do_normalize</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether to normalize the image. 
Can be overridden by the <code>do_normalize</code> parameter in the <code>preprocess</code> method.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTImageProcessor.image_mean" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTImageProcessor.image_mean"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>image_mean</strong> (<code>float</code> or <code>List[float]</code>, <em>optional</em>, defaults to <code>IMAGENET_STANDARD_MEAN</code>) — Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the <code>image_mean</code> parameter in the <code>preprocess</code> method.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTImageProcessor.image_std" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTImageProcessor.image_std"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>image_std</strong> (<code>float</code> or <code>List[float]</code>, <em>optional</em>, defaults to <code>IMAGENET_STANDARD_STD</code>) — Standard deviation to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. 
Can be overridden by the <code>image_std</code> parameter in the <code>preprocess</code> method.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-oc8yhy">Constructs a DPT image processor.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.DPTImageProcessor.preprocess"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>preprocess</span></h4> <a id="transformers.DPTImageProcessor.preprocess" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.DPTImageProcessor.preprocess"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/dpt/image_processing_dpt.py#L211" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">images<span class="opacity-60">: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), 
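For instance, a configuration-only sketch of instantiating the processor directly with explicit settings (the chosen values are illustrative, not tuned defaults):

```python
from transformers import DPTImageProcessor

# Instantiate the image processor directly; every argument below mirrors a
# constructor parameter documented above (the values are illustrative).
image_processor = DPTImageProcessor(
    do_resize=True,
    size={"height": 384, "width": 384},
    keep_aspect_ratio=True,   # preserve the aspect ratio while resizing
    ensure_multiple_of=32,    # snap height/width to a multiple of 32
    do_rescale=True,
    rescale_factor=1 / 255,
    do_normalize=True,
    image_mean=[0.5, 0.5, 0.5],
    image_std=[0.5, 0.5, 0.5],
)
```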
#### preprocess

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/dpt/image_processing_dpt.py#L211)

( images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]], do_resize: bool = None, size: int = None, keep_aspect_ratio: bool = None, ensure_multiple_of: int = None, resample: Resampling = None, do_rescale: bool = None, rescale_factor: float = None, do_normalize: bool = None, image_mean: typing.Union[float, typing.List[float], NoneType] = None, image_std: typing.Union[float, typing.List[float], NoneType] = None, return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None, data_format: ChannelDimension = <ChannelDimension.FIRST: 'channels_first'>, input_data_format: typing.Union[transformers.image_utils.ChannelDimension, str, NoneType] = None, **kwargs )

Parameters:

- **images** (`ImageInput`) — Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, set `do_rescale=False`.
- **do_resize** (`bool`, *optional*, defaults to `self.do_resize`) — Whether to resize the image.
- **size** (`Dict[str, int]`, *optional*, defaults to `self.size`) — Size of the image after resizing. If `keep_aspect_ratio` is `True`, the image is resized to the largest possible size such that the aspect ratio is preserved. If `ensure_multiple_of` is set, the image is resized to a size that is a multiple of this value.
- **keep_aspect_ratio** (`bool`, *optional*, defaults to `self.keep_aspect_ratio`) — Whether to keep the aspect ratio of the image. If `False`, the image will be resized to (size, size). If `True`, the image will be resized to keep the aspect ratio and the size will be the maximum possible.
- **ensure_multiple_of** (`int`, *optional*, defaults to `self.ensure_multiple_of`) — Ensure that the image size is a multiple of this value.
- **resample** (`int`, *optional*, defaults to `self.resample`) — Resampling filter to use if resizing the image. This can be one of the enum `PILImageResampling`. Only has an effect if `do_resize` is set to `True`.
- **do_rescale** (`bool`, *optional*, defaults to `self.do_rescale`) — Whether to rescale the image values to the range [0, 1].
- **rescale_factor** (`float`, *optional*, defaults to `self.rescale_factor`) — Rescale factor to rescale the image by if `do_rescale` is set to `True`.
- **do_normalize** (`bool`, *optional*, defaults to `self.do_normalize`) — Whether to normalize the image.
- **image_mean** (`float` or `List[float]`, *optional*, defaults to `self.image_mean`) — Image mean.
- **image_std** (`float` or `List[float]`, *optional*, defaults to `self.image_std`) — Image standard deviation.
- **return_tensors** (`str` or `TensorType`, *optional*) — The type of tensors to return. Can be one of:
  - Unset: Return a list of `np.ndarray`.
  - `TensorType.TENSORFLOW` or `'tf'`: Return a batch of type `tf.Tensor`.
  - `TensorType.PYTORCH` or `'pt'`: Return a batch of type `torch.Tensor`.
  - `TensorType.NUMPY` or `'np'`: Return a batch of type `np.ndarray`.
  - `TensorType.JAX` or `'jax'`: Return a batch of type `jax.numpy.ndarray`.
- **data_format** (`ChannelDimension` or `str`, *optional*, defaults to `ChannelDimension.FIRST`) — The channel dimension format for the output image. Can be one of:
  - `ChannelDimension.FIRST`: image in (num_channels, height, width) format.
  - `ChannelDimension.LAST`: image in (height, width, num_channels) format.
- **input_data_format** (`ChannelDimension` or `str`, *optional*) — The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. Can be one of:
  - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format.
  - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format.
  - `"none"` or `ChannelDimension.NONE`: image in (height, width) format.

Preprocess an image or batch of images.
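A minimal sketch of calling `preprocess` on a small batch while overriding a couple of the stored defaults per call (the checkpoint name is an assumption; random arrays stand in for real images):

```python
import numpy as np
from transformers import DPTImageProcessor

# Assumed checkpoint name, used only to load sensible preprocessing defaults.
image_processor = DPTImageProcessor.from_pretrained("Intel/dpt-large")

# Two fake RGB images in (height, width, num_channels) format with values in [0, 255].
images = [np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8) for _ in range(2)]

# Per-call arguments override the values stored on the processor.
batch = image_processor.preprocess(
    images,
    do_resize=True,
    size={"height": 384, "width": 384},
    keep_aspect_ratio=False,
    return_tensors="pt",
)
print(batch["pixel_values"].shape)  # torch.Size([2, 3, 384, 384])
```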
### post_process_semantic_segmentation

`( outputs, target_sizes: typing.List[typing.Tuple] = None )` → `semantic_segmentation` ([source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/dpt/image_processing_dpt.py#L346))

**Parameters**

- **outputs** ([DPTForSemanticSegmentation](/docs/transformers/v4.34.0/en/model_doc/dpt#transformers.DPTForSemanticSegmentation)) — Raw outputs of the model.
- **target_sizes** (`List[Tuple]` of length `batch_size`, *optional*) — List of tuples corresponding to the requested final size (height, width) of each prediction. If unset, predictions will not be resized.

**Returns**

`semantic_segmentation` — `List[torch.Tensor]` of length `batch_size`, where each item is a semantic segmentation map of shape (height, width) corresponding to the `target_sizes` entry (if `target_sizes` is specified). Each entry of each `torch.Tensor` corresponds to a semantic class id.

Converts the output of [DPTForSemanticSegmentation](/docs/transformers/v4.34.0/en/model_doc/dpt#transformers.DPTForSemanticSegmentation) into semantic segmentation maps. Only supports PyTorch.
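As a brief usage sketch of this post-processing step (the `Intel/dpt-large-ade` checkpoint name and the COCO example image are assumptions for illustration; any DPT semantic segmentation checkpoint should work the same way):

```python
>>> import torch
>>> from PIL import Image
>>> import requests
>>> from transformers import AutoImageProcessor, DPTForSemanticSegmentation

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image_processor = AutoImageProcessor.from_pretrained("Intel/dpt-large-ade")
>>> model = DPTForSemanticSegmentation.from_pretrained("Intel/dpt-large-ade")

>>> inputs = image_processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # resize the logits to the original image size and take the class id of each pixel
>>> segmentation = image_processor.post_process_semantic_segmentation(
...     outputs, target_sizes=[image.size[::-1]]
... )[0]
>>> # `segmentation` has shape (height, width); each entry is a semantic class id
```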
## DPTModel

`class transformers.DPTModel( config, add_pooling_layer = True )` ([source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/dpt/modeling_dpt.py#L864))

**Parameters**

- **config** ([DPTConfig](/docs/transformers/v4.34.0/en/model_doc/dpt#transformers.DPTConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

The bare DPT Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

### forward

`( pixel_values: FloatTensor, head_mask: typing.Optional[torch.FloatTensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None )` → `transformers.models.dpt.modeling_dpt.BaseModelOutputWithPoolingAndIntermediateActivations` or `tuple(torch.FloatTensor)` ([source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/dpt/modeling_dpt.py#L896))

**Parameters**

- **pixel_values** (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`) — Pixel values. Pixel values can be obtained using [AutoImageProcessor](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoImageProcessor). See `DPTImageProcessor.__call__()` for details.
- **head_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`:
  - 1 indicates the head is **not masked**,
  - 0 indicates the head is **masked**.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.

**Returns**

A `transformers.models.dpt.modeling_dpt.BaseModelOutputWithPoolingAndIntermediateActivations` or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([DPTConfig](/docs/transformers/v4.34.0/en/model_doc/dpt#transformers.DPTConfig)) and inputs.

- **last_hidden_state** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`) — Sequence of hidden-states at the output of the last layer of the model.
- **pooler_output** (`torch.FloatTensor` of shape `(batch_size, hidden_size)`) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for the BERT family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
- **intermediate_activations** (`tuple(torch.FloatTensor)`, *optional*) — Intermediate activations that can be used to compute hidden states of the model at various layers.

The [DPTModel](/docs/transformers/v4.34.0/en/model_doc/dpt#transformers.DPTModel) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:
d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoImageProcessor, DPTModel <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> datasets <span class="hljs-keyword">import</span> load_dataset <span class="hljs-meta">&gt;&gt;&gt; </span>dataset = load_dataset(<span class="hljs-string">"huggingface/cats-image"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>image = dataset[<span class="hljs-string">"test"</span>][<span class="hljs-string">"image"</span>][<span class="hljs-number">0</span>] <span class="hljs-meta">&gt;&gt;&gt; </span>image_processor = AutoImageProcessor.from_pretrained(<span class="hljs-string">"Intel/dpt-large"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = DPTModel.from_pretrained(<span class="hljs-string">"Intel/dpt-large"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = image_processor(image, return_tensors=<span class="hljs-string">"pt"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">with</span> torch.no_grad(): <span class="hljs-meta">... 
</span> outputs = model(**inputs) <span class="hljs-meta">&gt;&gt;&gt; </span>last_hidden_states = outputs.last_hidden_state <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-built_in">list</span>(last_hidden_states.shape) [<span class="hljs-number">1</span>, <span class="hljs-number">577</span>, <span class="hljs-number">1024</span>]</pre></div></div></div></div> <h2 class="relative group"><a id="transformers.DPTForDepthEstimation" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTForDepthEstimation"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1v2wwhs">DPTForDepthEstimation</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.DPTForDepthEstimation"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">DPTForDepthEstimation</span></span></h3> <a id="transformers.DPTForDepthEstimation" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.DPTForDepthEstimation"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 
`class transformers.DPTForDepthEstimation( config )` ([source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/dpt/modeling_dpt.py#L1052))

**Parameters**

- **config** ([DPTConfig](/docs/transformers/v4.34.0/en/model_doc/dpt#transformers.DPTConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

DPT Model with a depth estimation head on top (consisting of 3 convolutional layers), e.g. for KITTI, NYUv2.

This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

### forward
class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">labels<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.DepthEstimatorOutput">transformers.modeling_outputs.DepthEstimatorOutput</a> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 6 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTForDepthEstimation.forward.pixel_values" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTForDepthEstimation.forward.pixel_values"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>pixel_values</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, num_channels, height, width)</code>) — Pixel values. Pixel values can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoImageProcessor">AutoImageProcessor</a>. 
See <a href="/docs/transformers/v4.34.0/en/model_doc/deit#transformers.DeiTFeatureExtractor.__call__">DPTImageProcessor.<strong>call</strong>()</a> for details.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTForDepthEstimation.forward.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTForDepthEstimation.forward.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>head_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(num_heads,)</code> or <code>(num_layers, num_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the self-attention modules. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTForDepthEstimation.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTForDepthEstimation.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. 
See <code>attentions</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTForDepthEstimation.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTForDepthEstimation.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. See <code>hidden_states</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTForDepthEstimation.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTForDepthEstimation.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTForDepthEstimation.forward.labels" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTForDepthEstimation.forward.labels"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 
8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>labels</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, height, width)</code>, <em>optional</em>) — Ground truth depth estimation maps for computing the loss.</span></span> </li></ul> <div id="transformers.DPTForDepthEstimation.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.DepthEstimatorOutput">transformers.modeling_outputs.DepthEstimatorOutput</a> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.DepthEstimatorOutput">transformers.modeling_outputs.DepthEstimatorOutput</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/dpt#transformers.DPTConfig">DPTConfig</a>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<code>torch.FloatTensor</code> of shape <code>(1,)</code>, <em>optional</em>, returned when <code>labels</code> is provided) — Classification (or regression if config.num_labels==1) loss.</p> </li> <li> <p><strong>predicted_depth</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, height, width)</code>) — Predicted depth for each pixel.</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape <code>(batch_size, num_channels, height, width)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, patch_size, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-g3fbjp">The <a href="/docs/transformers/v4.34.0/en/model_doc/dpt#transformers.DPTForDepthEstimation">DPTForDepthEstimation</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 
Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Examples:

```python
>>> from transformers import AutoImageProcessor, DPTForDepthEstimation
>>> import torch
>>> import numpy as np
>>> from PIL import Image
>>> import requests

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image_processor = AutoImageProcessor.from_pretrained("Intel/dpt-large")
>>> model = DPTForDepthEstimation.from_pretrained("Intel/dpt-large")

>>> # prepare image for the model
>>> inputs = image_processor(images=image, return_tensors="pt")

>>> with torch.no_grad():
...     outputs = model(**inputs)
...     predicted_depth = outputs.predicted_depth

>>> # interpolate to original size
>>> prediction = torch.nn.functional.interpolate(
...     predicted_depth.unsqueeze(1),
...     size=image.size[::-1],
...     mode="bicubic",
...     align_corners=False,
... )

>>> # visualize the prediction
>>> output = prediction.squeeze().cpu().numpy()
>>> formatted = (output * 255 / np.max(output)).astype("uint8")
>>> depth = Image.fromarray(formatted)
```

## DPTForSemanticSegmentation
dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">DPTForSemanticSegmentation</span></span></h3> <a id="transformers.DPTForSemanticSegmentation" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.DPTForSemanticSegmentation"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/dpt/modeling_dpt.py#L1220" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTForSemanticSegmentation.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTForSemanticSegmentation.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" 
viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/vit#transformers.ViTConfig">ViTConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-a7kv1k">DPT Model with a semantic segmentation head on top e.g. for ADE20k, CityScapes.</p> <p data-svelte-h="svelte-1gjh92c">This model is a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.DPTForSemanticSegmentation.forward"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>forward</span></h4> <a id="transformers.DPTForSemanticSegmentation.forward" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.DPTForSemanticSegmentation.forward"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 
1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/dpt/modeling_dpt.py#L1236" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">pixel_values<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_mask<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">labels<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.SemanticSegmenterOutput">transformers.modeling_outputs.SemanticSegmenterOutput</a> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 6 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a 
id="transformers.DPTForSemanticSegmentation.forward.pixel_values" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTForSemanticSegmentation.forward.pixel_values"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>pixel_values</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, num_channels, height, width)</code>) — Pixel values. Pixel values can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoImageProcessor">AutoImageProcessor</a>. See <a href="/docs/transformers/v4.34.0/en/model_doc/deit#transformers.DeiTFeatureExtractor.__call__">DPTImageProcessor.<strong>call</strong>()</a> for details.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTForSemanticSegmentation.forward.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTForSemanticSegmentation.forward.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>head_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(num_heads,)</code> or <code>(num_layers, num_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the self-attention modules. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTForSemanticSegmentation.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTForSemanticSegmentation.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. See <code>attentions</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTForSemanticSegmentation.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTForSemanticSegmentation.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. 
See <code>hidden_states</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTForSemanticSegmentation.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTForSemanticSegmentation.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.DPTForSemanticSegmentation.forward.labels" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTForSemanticSegmentation.forward.labels"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>labels</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, height, width)</code>, <em>optional</em>) — Ground truth semantic segmentation maps for computing the loss. Indices should be in <code>[0, ..., config.num_labels - 1]</code>. 
If <code>config.num_labels &gt; 1</code>, a classification loss is computed (Cross-Entropy).</span></span> </li></ul> <div id="transformers.DPTForSemanticSegmentation.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.SemanticSegmenterOutput">transformers.modeling_outputs.SemanticSegmenterOutput</a> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.SemanticSegmenterOutput">transformers.modeling_outputs.SemanticSegmenterOutput</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/dpt#transformers.DPTConfig">DPTConfig</a>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<code>torch.FloatTensor</code> of shape <code>(1,)</code>, <em>optional</em>, returned when <code>labels</code> is provided) — Classification (or regression if config.num_labels==1) loss.</p> </li> <li> <p><strong>logits</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, config.num_labels, logits_height, logits_width)</code>) — Classification scores for each pixel.</p> <tip warning="{true}"> <p>The logits returned do not necessarily have the same size as the <code>pixel_values</code> passed as inputs. This is to avoid doing two interpolations and lose some quality when a user needs to resize the logits to the original image size as post-processing. 
You should always check your logits shape and resize as needed.</p> </tip> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape <code>(batch_size, patch_size, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, patch_size, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-1um28nj">The <a href="/docs/transformers/v4.34.0/en/model_doc/dpt#transformers.DPTForSemanticSegmentation">DPTForSemanticSegmentation</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.DPTForSemanticSegmentation.forward.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.DPTForSemanticSegmentation.forward.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-kvfsh7">Examples:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 
32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoImageProcessor, DPTForSemanticSegmentation <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> PIL <span class="hljs-keyword">import</span> Image <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> requests <span class="hljs-meta">&gt;&gt;&gt; </span>url = <span class="hljs-string">"http://images.cocodataset.org/val2017/000000039769.jpg"</span> <span class="hljs-meta">&gt;&gt;&gt; </span>image = Image.<span class="hljs-built_in">open</span>(requests.get(url, stream=<span class="hljs-literal">True</span>).raw) <span class="hljs-meta">&gt;&gt;&gt; </span>image_processor = AutoImageProcessor.from_pretrained(<span class="hljs-string">"Intel/dpt-large-ade"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = DPTForSemanticSegmentation.from_pretrained(<span class="hljs-string">"Intel/dpt-large-ade"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = image_processor(images=image, return_tensors=<span class="hljs-string">"pt"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model(**inputs) <span class="hljs-meta">&gt;&gt;&gt; </span>logits = outputs.logits</pre></div></div></div></div> <p></p> <div id="svelte-announcer" aria-live="assertive" aria-atomic="true" style="position: absolute; left: 0px; top: 0px; clip: rect(0px, 0px, 0px, 0px); clip-path: inset(50%); overflow: hidden; white-space: nowrap; width: 1px; height: 1px;"></div></div> <div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/dit" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>DiT</a> <a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/efficientformer" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">EfficientFormer<span class="ml-2 translate-y-px">→</span></a></div></div></div> <div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" 
data-props="{&quot;chapter&quot;:{&quot;title&quot;:&quot;DPT&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;dpt&quot;,&quot;url&quot;:&quot;#dpt&quot;,&quot;sections&quot;:[{&quot;title&quot;:&quot;Overview&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;overview&quot;,&quot;url&quot;:&quot;#overview&quot;},{&quot;title&quot;:&quot;Resources&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;resources&quot;,&quot;url&quot;:&quot;#resources&quot;},{&quot;title&quot;:&quot;DPTConfig&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.DPTConfig&quot;,&quot;url&quot;:&quot;#transformers.DPTConfig&quot;},{&quot;title&quot;:&quot;DPTFeatureExtractor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.DPTFeatureExtractor&quot;,&quot;url&quot;:&quot;#transformers.DPTFeatureExtractor&quot;},{&quot;title&quot;:&quot;DPTImageProcessor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.DPTImageProcessor&quot;,&quot;url&quot;:&quot;#transformers.DPTImageProcessor&quot;},{&quot;title&quot;:&quot;DPTModel&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.DPTModel&quot;,&quot;url&quot;:&quot;#transformers.DPTModel&quot;},{&quot;title&quot;:&quot;DPTForDepthEstimation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.DPTForDepthEstimation&quot;,&quot;url&quot;:&quot;#transformers.DPTForDepthEstimation&quot;},{&quot;title&quot;:&quot;DPTForSemanticSegmentation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.DPTForSemanticSegmentation&quot;,&quot;url&quot;:&quot;#transformers.DPTForSemanticSegmentation&quot;}]}}" data-target="SubSideMenu"><nav class="hidden h-screen w-[270px] flex-none flex-col space-y-3 overflow-y-auto break-words border-l pt-24 pl-6 pr-10 pb-16 text-sm lg:flex 2xl:w-[305px]"><a href="#dpt" class=" text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-dpt">DPT</a> <a href="#overview" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-overview"><wbr>Overview</a> <a href="#resources" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-resources"><wbr>Resources</a> <a href="#transformers.DPTConfig" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.DPTConfig">DPT<wbr>Config</a> <a href="#transformers.DPTFeatureExtractor" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.DPTFeatureExtractor">DPT<wbr>Feature<wbr>Extractor</a> <a href="#transformers.DPTImageProcessor" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.DPTImageProcessor">DPT<wbr>Image<wbr>Processor</a> <a href="#transformers.DPTModel" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.DPTModel">DPT<wbr>Model</a> <a href="#transformers.DPTForDepthEstimation" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.DPTForDepthEstimation">DPT<wbr>For<wbr>Depth<wbr>Estimation</a> <a href="#transformers.DPTForSemanticSegmentation" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" 
id="nav-transformers.DPTForSemanticSegmentation">DPT<wbr>For<wbr>Semantic<wbr>Segmentation</a> </nav></div></div></div> <div id="doc-footer"></div></main> </div> <script> import("/front/build/kube-5e23f38/index.js"); window.moonSha = "kube-5e23f38/"; window.hubConfig = JSON.parse(`{"features":{"signupDisabled":false},"sshGitUrl":"git@hf.co","moonHttpUrl":"https://huggingface.co","captchaApiKey":"bd5f2066-93dc-4bdd-a64b-a24646ca3859","stripePublicKey":"pk_live_x2tdjFXBCvXo2FFmMybezpeM00J6gPCAAc","environment":"production","userAgent":"HuggingFace (production)"}`); </script> <!-- Stripe --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://js.stripe.com/v3/"; script.async = true; document.head.appendChild(script); } </script> <!-- Google analytics v4 --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL"; script.async = true; document.head.appendChild(script); window.dataLayer = window.dataLayer || []; function gtag() { if (window.dataLayer !== undefined) { window.dataLayer.push(arguments); } } gtag("js", new Date()); gtag("config", "G-8Q63TH4CSL", { page_path: "/docs/transformers/v4.34.0/en/model_doc/dpt" }); /// ^ See https://developers.google.com/analytics/devguides/collection/gtagjs/pages gtag("consent", "default", { ad_storage: "denied", analytics_storage: "denied" }); /// ^ See https://developers.google.com/tag-platform/gtagjs/reference#consent /// TODO: ask the user for their consent and update this with gtag('consent', 'update') } </script> <!-- Google Analytics v3 (deprecated) --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { (function (i, s, o, g, r, a, m) { i["GoogleAnalyticsObject"] = r; (i[r] = i[r] || function () { (i[r].q = i[r].q || []).push(arguments); }), (i[r].l = 1 * new Date()); (a = s.createElement(o)), (m = s.getElementsByTagName(o)[0]); a.async = 1; a.src = g; m.parentNode.insertBefore(a, m); })(window, document, "script", "https://www.google-analytics.com/analytics.js", "ganalytics"); ganalytics("create", "UA-83738774-2", "auto"); ganalytics("send", "pageview", "/docs/transformers/v4.34.0/en/model_doc/dpt"); } </script> <iframe name="__privateStripeMetricsController0480" frameborder="0" allowtransparency="true" scrolling="no" role="presentation" allow="payment *" src="https://js.stripe.com/v3/m-outer-27c67c0d52761104439bb051c7856ab1.html#url=https%3A%2F%2Fhuggingface.co%2Fdocs%2Ftransformers%2Fv4.34.0%2Fen%2Fmodel_doc%2Fdpt&amp;title=DPT&amp;referrer=&amp;muid=NA&amp;sid=NA&amp;version=6&amp;preview=false" aria-hidden="true" tabindex="-1" style="border: none !important; margin: 0px !important; padding: 0px !important; width: 1px !important; min-width: 100% !important; overflow: hidden !important; display: block !important; visibility: hidden !important; position: fixed !important; height: 1px !important; pointer-events: none !important; user-select: none !important;"></iframe></body></html>
2023-10-05T13:33:45.058Z
Automatic speech recognition
https://huggingface.co/docs/transformers/v4.34.0/en/tasks/asr
# Automatic speech recognition

Automatic speech recognition (ASR) converts a speech signal to text, mapping a sequence of audio inputs to text outputs. Virtual assistants like Siri and Alexa use ASR models to help users every day, and there are many other useful user-facing applications like live captioning and note-taking during meetings.

This guide will show you how to:

1. Finetune [Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base) on the [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) dataset to transcribe audio to text.
2. Use your finetuned model for inference.

The task illustrated in this tutorial is supported by the following model architectures:

[Data2VecAudio](../model_doc/data2vec-audio), [Hubert](../model_doc/hubert), [M-CTC-T](../model_doc/mctct), [SEW](../model_doc/sew), [SEW-D](../model_doc/sew-d), [UniSpeech](../model_doc/unispeech), [UniSpeechSat](../model_doc/unispeech-sat), [Wav2Vec2](../model_doc/wav2vec2), [Wav2Vec2-Conformer](../model_doc/wav2vec2-conformer), [WavLM](../model_doc/wavlm)

Before you begin, make sure you have all the necessary libraries installed:

```
pip install transformers datasets evaluate jiwer
```

We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:

```
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```

## Load MInDS-14 dataset

Start by loading a smaller subset of the [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) dataset from the 🤗 Datasets library. This’ll give you a chance to experiment and make sure everything works before spending more time training on the full dataset.

```
>>> from datasets import load_dataset, Audio

>>> minds = load_dataset("PolyAI/minds14", name="en-US", split="train[:100]")
```

Split the dataset’s `train` split into a train and test set with the `train_test_split` method:

```
>>> minds = minds.train_test_split(test_size=0.2)
```

Then take a look at the dataset:

```
>>> minds
DatasetDict({
    train: Dataset({
        features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'],
        num_rows: 16
    })
    test: Dataset({
        features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'],
        num_rows: 4
    })
})
```

While the dataset contains a lot of useful information, like `lang_id` and `english_transcription`, you’ll focus on the `audio` and `transcription` columns in this guide. Remove the other columns with the [remove_columns](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.remove_columns) method:

```
>>> minds = minds.remove_columns(["english_transcription", "intent_class", "lang_id"])
```

Take a look at the example again:

```
>>> minds["train"][0]
{'audio': {'array': array([-0.00024414,  0.        ,  0.        , ...,  0.00024414,
         0.00024414,  0.00024414], dtype=float32),
  'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav',
  'sampling_rate': 8000},
 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav',
 'transcription': "hi I'm trying to use the banking app on my phone and currently my checking and savings account balance is not refreshing"}
```

There are two fields:

- `audio`: a 1-dimensional `array` of the speech signal that must be called to load and resample the audio file.
- `transcription`: the target text.
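To get a feel for a single example, you can decode the `audio` column and compute the clip duration from the array length and the sampling rate. This is a minimal sanity-check sketch; the exact duration depends on the example you pick:

```
>>> sample = minds["train"][0]["audio"]
>>> # duration in seconds = number of samples in the decoded waveform / samples per second
>>> duration_in_seconds = len(sample["array"]) / sample["sampling_rate"]
```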
## Preprocess

The next step is to load a Wav2Vec2 processor to process the audio signal:

```
>>> from transformers import AutoProcessor

>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base")
```

The MInDS-14 dataset has a sampling rate of 8kHz (you can find this information in its [dataset card](https://huggingface.co/datasets/PolyAI/minds14)), which means you’ll need to resample the dataset to 16kHz to use the pretrained Wav2Vec2 model:

```
>>> minds = minds.cast_column("audio", Audio(sampling_rate=16_000))
>>> minds["train"][0]
{'audio': {'array': array([-2.38064706e-04, -1.58618059e-04, -5.43987835e-06, ...,
         2.78103951e-04,  2.38446111e-04,  1.18740834e-04], dtype=float32),
  'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav',
  'sampling_rate': 16000},
 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav',
 'transcription': "hi I'm trying to use the banking app on my phone and currently my checking and savings account balance is not refreshing"}
```

As you can see in the `transcription` above, the text contains a mix of upper and lowercase characters. The Wav2Vec2 tokenizer is only trained on uppercase characters, so you’ll need to make sure the text matches the tokenizer’s vocabulary:

```
>>> def uppercase(example):
...     return {"transcription": example["transcription"].upper()}


>>> minds = minds.map(uppercase)
```

Now create a preprocessing function that:

1. Calls the `audio` column to load and resample the audio file.
2. Extracts the `input_values` from the audio file and tokenizes the `transcription` column with the processor.

```
>>> def prepare_dataset(batch):
...     audio = batch["audio"]
...     batch = processor(audio["array"], sampling_rate=audio["sampling_rate"], text=batch["transcription"])
...     batch["input_length"] = len(batch["input_values"][0])
...     return batch
```

To apply the preprocessing function over the entire dataset, use the 🤗 Datasets [map](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.map) function. You can speed up `map` by increasing the number of processes with the `num_proc` parameter. Remove the columns you don’t need with the [remove_columns](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.remove_columns) method:

```
>>> encoded_minds = minds.map(prepare_dataset, remove_columns=minds.column_names["train"], num_proc=4)
```

🤗 Transformers doesn’t have a data collator for ASR, so you’ll need to adapt the [DataCollatorWithPadding](/docs/transformers/v4.34.0/en/main_classes/data_collator#transformers.DataCollatorWithPadding) to create a batch of examples. It’ll also dynamically pad your text and labels to the length of the longest element in its batch (instead of the entire dataset) so they are a uniform length. While it is possible to pad your text in the `tokenizer` function by setting `padding=True`, dynamic padding is more efficient.

Unlike other data collators, this specific data collator needs to apply a different padding method to `input_values` and `labels`:

```
>>> import torch

>>> from dataclasses import dataclass, field
>>> from typing import Any, Dict, List, Optional, Union


>>> @dataclass
... class DataCollatorCTCWithPadding:
...     processor: AutoProcessor
...     padding: Union[bool, str] = "longest"

...     def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
...         # split inputs and labels since they have to be of different lengths
...         # and need different padding methods
...         input_features = [{"input_values": feature["input_values"][0]} for feature in features]
...         label_features = [{"input_ids": feature["labels"]} for feature in features]

...         batch = self.processor.pad(input_features, padding=self.padding, return_tensors="pt")

...         labels_batch = self.processor.pad(labels=label_features, padding=self.padding, return_tensors="pt")

...         # replace padding with -100 so these positions are ignored by the loss
...         labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)

...         batch["labels"] = labels

...         return batch
```

Now instantiate your `DataCollatorCTCWithPadding`:

```
>>> data_collator = DataCollatorCTCWithPadding(processor=processor, padding="longest")
```
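As a quick sanity check, you can call the collator on a couple of processed examples and confirm that `input_values` and `labels` are padded to the longest element in the batch. This is a minimal sketch that assumes `encoded_minds` from the `map` step above:

```
>>> features = [encoded_minds["train"][i] for i in range(2)]
>>> batch = data_collator(features)
>>> # e.g. input_values and labels are each padded to the longest sequence in this batch of two
>>> {key: value.shape for key, value in batch.items()}
```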
## Evaluate

Including a metric during training is often helpful for evaluating your model’s performance. You can quickly load an evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load the [word error rate](https://huggingface.co/spaces/evaluate-metric/wer) (WER) metric (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric):

```
>>> import evaluate

>>> wer = evaluate.load("wer")
```
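To get an intuition for the metric before wiring it into training, you can call `compute` directly on a toy prediction/reference pair (the strings here are purely illustrative):

```
>>> # one deleted word out of three reference words gives a WER of 1/3
>>> wer.compute(predictions=["HELLO WORLD"], references=["HELLO THERE WORLD"])
```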
Then create a function that passes your predictions and labels to [compute](https://huggingface.co/docs/evaluate/v0.4.0/en/package_reference/main_classes#evaluate.EvaluationModule.compute) to calculate the WER:

```
>>> import numpy as np


>>> def compute_metrics(pred):
...     pred_logits = pred.predictions
...     pred_ids = np.argmax(pred_logits, axis=-1)

...     pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id

...     pred_str = processor.batch_decode(pred_ids)
...     label_str = processor.batch_decode(pred.label_ids, group_tokens=False)

...     wer_score = wer.compute(predictions=pred_str, references=label_str)

...     return {"wer": wer_score}
```

Your `compute_metrics` function is ready to go now, and you’ll return to it when you set up your training.

## Train

If you aren’t familiar with finetuning a model with the [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer), take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!

You’re ready to start training your model now! Load Wav2Vec2 with [AutoModelForCTC](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoModelForCTC). Specify the reduction to apply with the `ctc_loss_reduction` parameter. It is often better to use the average instead of the default summation:

```
>>> from transformers import AutoModelForCTC, TrainingArguments, Trainer

>>> model = AutoModelForCTC.from_pretrained(
...     "facebook/wav2vec2-base",
...     ctc_loss_reduction="mean",
...     pad_token_id=processor.tokenizer.pad_token_id,
... )
```

At this point, only three steps remain:

1. Define your training hyperparameters in [TrainingArguments](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.TrainingArguments). The only required parameter is `output_dir`, which specifies where to save your model. You’ll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). During training, the [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) will evaluate the WER and save the training checkpoint.
2. Pass the training arguments to [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) along with the model, dataset, tokenizer, data collator, and `compute_metrics` function.
3. Call [train()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.train) to finetune your model.

```
>>> training_args = TrainingArguments(
...     output_dir="my_awesome_asr_mind_model",
...     per_device_train_batch_size=8,
...     gradient_accumulation_steps=2,
...     learning_rate=1e-5,
...     warmup_steps=500,
...     max_steps=2000,
...     gradient_checkpointing=True,
...     fp16=True,
...     group_by_length=True,
...     evaluation_strategy="steps",
...     per_device_eval_batch_size=8,
...     save_steps=1000,
...     eval_steps=1000,
...     logging_steps=25,
...     load_best_model_at_end=True,
...     metric_for_best_model="wer",
...     greater_is_better=False,
...     push_to_hub=True,
... )

>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=encoded_minds["train"],
...     eval_dataset=encoded_minds["test"],
...     tokenizer=processor,
...     data_collator=data_collator,
...     compute_metrics=compute_metrics,
... )

>>> trainer.train()
```

Once training is completed, share your model on the Hub with the [push_to_hub()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.push_to_hub) method so everyone can use your model:

```
>>> trainer.push_to_hub()
```

For a more in-depth example of how to finetune a model for automatic speech recognition, take a look at this blog [post](https://huggingface.co/blog/fine-tune-wav2vec2-english) for English ASR and this [post](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for multilingual ASR.
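If you’d like a final score on the held-out split before moving on, you can also run a quick evaluation with the [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer). This is a minimal sketch; the `eval_wer` key comes from the `compute_metrics` function defined above combined with the Trainer’s default `eval_` metric prefix:

```
>>> metrics = trainer.evaluate(encoded_minds["test"])
>>> metrics["eval_wer"]  # word error rate on the test split
```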
## Inference

Great, now that you’ve finetuned a model, you can use it for inference!

Load an audio file you’d like to run inference on. Remember to resample the audio file’s sampling rate to match the model’s sampling rate if you need to!

```
>>> from datasets import load_dataset, Audio

>>> dataset = load_dataset("PolyAI/minds14", "en-US", split="train")
>>> dataset = dataset.cast_column("audio", Audio(sampling_rate=16000))
>>> sampling_rate = dataset.features["audio"].sampling_rate
>>> audio_file = dataset[0]["audio"]["path"]
```

The simplest way to try out your finetuned model for inference is to use it in a [pipeline()](/docs/transformers/v4.34.0/en/main_classes/pipelines#transformers.pipeline). Instantiate a `pipeline` for automatic speech recognition with your model, and pass your audio file to it:

```
>>> from transformers import pipeline

>>> transcriber = pipeline("automatic-speech-recognition", model="stevhliu/my_awesome_asr_minds_model")
>>> transcriber(audio_file)
{'text': 'I WOUD LIKE O SET UP JOINT ACOUNT WTH Y PARTNER'}
```

The transcription is decent, but it could be better! Try finetuning your model on more examples to get even better results!

You can also manually replicate the results of the `pipeline` if you’d like:

Load a processor to preprocess the audio file and transcription and return the inputs as PyTorch tensors:

```
>>> from transformers import AutoProcessor

>>> processor = AutoProcessor.from_pretrained("stevhliu/my_awesome_asr_mind_model")
>>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
```

Pass your inputs to the model and return the logits:

```
>>> import torch
>>> from transformers import AutoModelForCTC

>>> model = AutoModelForCTC.from_pretrained("stevhliu/my_awesome_asr_mind_model")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
```

Get the predicted `input_ids` with the highest probability, and use the processor to decode the predicted `input_ids` back into text:

```
>>> predicted_ids = torch.argmax(logits, dim=-1)
>>> transcription = processor.batch_decode(predicted_ids)
>>> transcription
['I WOUL LIKE O SET UP JOINT ACOUNT WTH Y PARTNER']
```
tour&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;quicktour&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/quicktour&quot;},{&quot;title&quot;:&quot;Installation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;installation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/installation&quot;}]},{&quot;title&quot;:&quot;Tutorials&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Run inference with pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pipeline_tutorial&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pipeline_tutorial&quot;},{&quot;title&quot;:&quot;Write portable code with AutoClass&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;autoclass_tutorial&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/autoclass_tutorial&quot;},{&quot;title&quot;:&quot;Preprocess data&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;preprocessing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/preprocessing&quot;},{&quot;title&quot;:&quot;Fine-tune a pretrained model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;training&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/training&quot;},{&quot;title&quot;:&quot;Train with a script&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;run_scripts&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/run_scripts&quot;},{&quot;title&quot;:&quot;Set up distributed training with 🤗 Accelerate&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;accelerate&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/accelerate&quot;},{&quot;title&quot;:&quot;Load and train adapters with 🤗 PEFT&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;peft&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/peft&quot;},{&quot;title&quot;:&quot;Share your model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_sharing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_sharing&quot;},{&quot;title&quot;:&quot;Agents&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers_agents&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/transformers_agents&quot;},{&quot;title&quot;:&quot;Generation with LLMs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;llm_tutorial&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/llm_tutorial&quot;}]},{&quot;title&quot;:&quot;Task Guides&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Natural Language Processing&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Text classification&quot;,&quot;id&quot;:&quot;tasks/sequence_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/sequence_classification&quot;},{&quot;title&quot;:&quot;Token classification&quot;,&quot;id&quot;:&quot;tasks/token_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/token_classification&quot;},{&quot;title&quot;:&quot;Question answering&quot;,&quot;id&quot;:&quot;tasks/question_answering&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/question_answering&quot;},{&quot;title&quot;:&quot;Causal language modeling&quot;,&quot;id&quot;:&quot;tasks/language_modeling&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/language_modeling&quot;},{&quot;title&quot;:&quot;Masked language 
modeling&quot;,&quot;id&quot;:&quot;tasks/masked_language_modeling&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/masked_language_modeling&quot;},{&quot;title&quot;:&quot;Translation&quot;,&quot;id&quot;:&quot;tasks/translation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/translation&quot;},{&quot;title&quot;:&quot;Summarization&quot;,&quot;id&quot;:&quot;tasks/summarization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/summarization&quot;},{&quot;title&quot;:&quot;Multiple choice&quot;,&quot;id&quot;:&quot;tasks/multiple_choice&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/multiple_choice&quot;}]},{&quot;title&quot;:&quot;Audio&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio classification&quot;,&quot;id&quot;:&quot;tasks/audio_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/audio_classification&quot;},{&quot;title&quot;:&quot;Automatic speech recognition&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tasks/asr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/asr&quot;}]},{&quot;title&quot;:&quot;Computer Vision&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Image classification&quot;,&quot;id&quot;:&quot;tasks/image_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/image_classification&quot;},{&quot;title&quot;:&quot;Semantic segmentation&quot;,&quot;id&quot;:&quot;tasks/semantic_segmentation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/semantic_segmentation&quot;},{&quot;title&quot;:&quot;Video classification&quot;,&quot;id&quot;:&quot;tasks/video_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/video_classification&quot;},{&quot;title&quot;:&quot;Object detection&quot;,&quot;id&quot;:&quot;tasks/object_detection&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/object_detection&quot;},{&quot;title&quot;:&quot;Zero-shot object detection&quot;,&quot;id&quot;:&quot;tasks/zero_shot_object_detection&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/zero_shot_object_detection&quot;},{&quot;title&quot;:&quot;Zero-shot image classification&quot;,&quot;id&quot;:&quot;tasks/zero_shot_image_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/zero_shot_image_classification&quot;},{&quot;title&quot;:&quot;Depth estimation&quot;,&quot;id&quot;:&quot;tasks/monocular_depth_estimation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/monocular_depth_estimation&quot;}]},{&quot;title&quot;:&quot;Multimodal&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Image captioning&quot;,&quot;id&quot;:&quot;tasks/image_captioning&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/image_captioning&quot;},{&quot;title&quot;:&quot;Document Question Answering&quot;,&quot;id&quot;:&quot;tasks/document_question_answering&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/document_question_answering&quot;},{&quot;title&quot;:&quot;Visual Question Answering&quot;,&quot;id&quot;:&quot;tasks/visual_question_answering&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/visual_question_answering&quot;},{&quot;title&quot;:&quot;Text to 
speech&quot;,&quot;id&quot;:&quot;tasks/text-to-speech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/text-to-speech&quot;}]},{&quot;title&quot;:&quot;Generation&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Customize the generation strategy&quot;,&quot;id&quot;:&quot;generation_strategies&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/generation_strategies&quot;}]},{&quot;title&quot;:&quot;Prompting&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Image tasks with IDEFICS&quot;,&quot;id&quot;:&quot;tasks/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/idefics&quot;}]}]},{&quot;title&quot;:&quot;Developer guides&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Use fast tokenizers from 🤗 Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;fast_tokenizers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/fast_tokenizers&quot;},{&quot;title&quot;:&quot;Run inference with multilingual models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;multilingual&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/multilingual&quot;},{&quot;title&quot;:&quot;Use model-specific APIs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;create_a_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/create_a_model&quot;},{&quot;title&quot;:&quot;Share a custom model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;custom_models&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/custom_models&quot;},{&quot;title&quot;:&quot;Templates for chat models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;chat_templating&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/chat_templating&quot;},{&quot;title&quot;:&quot;Run training on Amazon SageMaker&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;sagemaker&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/sagemaker&quot;},{&quot;title&quot;:&quot;Export to ONNX&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;serialization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/serialization&quot;},{&quot;title&quot;:&quot;Export to TFLite&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tflite&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tflite&quot;},{&quot;title&quot;:&quot;Export to TorchScript&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;torchscript&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/torchscript&quot;},{&quot;title&quot;:&quot;Benchmarks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;benchmarks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/benchmarks&quot;},{&quot;title&quot;:&quot;Notebooks with examples&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;notebooks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/notebooks&quot;},{&quot;title&quot;:&quot;Community resources&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;community&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/community&quot;},{&quot;title&quot;:&quot;Custom Tools and Prompts&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;custom_tools&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/custom_tools&quot;},{&quot;title&quot;:&quot;Troubleshoot&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;troubleshooting&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/troubleshooting&quot;}]},{&quot;title&quot;:&quot;Performance and 
scalability&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Overview&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;performance&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/performance&quot;},{&quot;title&quot;:&quot;Efficient training techniques&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Methods and tools for efficient training on a single GPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_gpu_one&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_gpu_one&quot;},{&quot;title&quot;:&quot;Multiple GPUs and parallelism&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_gpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_gpu_many&quot;},{&quot;title&quot;:&quot;Efficient training on CPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_cpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_cpu&quot;},{&quot;title&quot;:&quot;Distributed CPU training&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_cpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_cpu_many&quot;},{&quot;title&quot;:&quot;Training on TPUs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_tpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_tpu&quot;},{&quot;title&quot;:&quot;Training on TPU with TensorFlow&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_tpu_tf&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_tpu_tf&quot;},{&quot;title&quot;:&quot;Training on Specialized Hardware&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_special&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_special&quot;},{&quot;title&quot;:&quot;Custom hardware for training&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_hardware&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_hardware&quot;},{&quot;title&quot;:&quot;Hyperparameter Search using Trainer API&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;hpo_train&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/hpo_train&quot;}]},{&quot;title&quot;:&quot;Optimizing inference&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Inference on CPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_cpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_cpu&quot;},{&quot;title&quot;:&quot;Inference on one GPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_gpu_one&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_gpu_one&quot;},{&quot;title&quot;:&quot;Inference on many GPUs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_gpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_gpu_many&quot;},{&quot;title&quot;:&quot;Inference on Specialized Hardware&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_special&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_special&quot;}]},{&quot;title&quot;:&quot;Instantiating a big 
model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;big_models&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/big_models&quot;},{&quot;title&quot;:&quot;Troubleshooting&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;debugging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/debugging&quot;},{&quot;title&quot;:&quot;XLA Integration for TensorFlow Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tf_xla&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tf_xla&quot;},{&quot;title&quot;:&quot;Optimize inference using `torch.compile()`&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_torch_compile&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_torch_compile&quot;}]},{&quot;title&quot;:&quot;Contribute&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;How to contribute to transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;contributing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/contributing&quot;},{&quot;title&quot;:&quot;How to add a model to 🤗 Transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_new_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_new_model&quot;},{&quot;title&quot;:&quot;How to convert a 🤗 Transformers model to TensorFlow?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_tensorflow_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_tensorflow_model&quot;},{&quot;title&quot;:&quot;How to add a pipeline to 🤗 Transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_new_pipeline&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_new_pipeline&quot;},{&quot;title&quot;:&quot;Testing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;testing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/testing&quot;},{&quot;title&quot;:&quot;Checks on a Pull Request&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pr_checks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pr_checks&quot;}]},{&quot;title&quot;:&quot;Conceptual guides&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Philosophy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;philosophy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/philosophy&quot;},{&quot;title&quot;:&quot;Glossary&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;glossary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/glossary&quot;},{&quot;title&quot;:&quot;What 🤗 Transformers can do&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;task_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/task_summary&quot;},{&quot;title&quot;:&quot;How 🤗 Transformers solve tasks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tasks_explained&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks_explained&quot;},{&quot;title&quot;:&quot;The Transformer model family&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_summary&quot;},{&quot;title&quot;:&quot;Summary of the tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tokenizer_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tokenizer_summary&quot;},{&quot;title&quot;:&quot;Attention 
mechanisms&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;attention&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/attention&quot;},{&quot;title&quot;:&quot;Padding and truncation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pad_truncation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pad_truncation&quot;},{&quot;title&quot;:&quot;BERTology&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;bertology&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/bertology&quot;},{&quot;title&quot;:&quot;Perplexity of fixed-length models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perplexity&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perplexity&quot;},{&quot;title&quot;:&quot;Pipelines for webserver inference&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pipeline_webserver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pipeline_webserver&quot;},{&quot;title&quot;:&quot;Model training anatomy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_memory_anatomy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_memory_anatomy&quot;}]},{&quot;title&quot;:&quot;API&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Main Classes&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Agents and Tools&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/agent&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/agent&quot;},{&quot;title&quot;:&quot;Auto Classes&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/auto&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/auto&quot;},{&quot;title&quot;:&quot;Callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/callback&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/callback&quot;},{&quot;title&quot;:&quot;Configuration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/configuration&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/configuration&quot;},{&quot;title&quot;:&quot;Data Collator&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/data_collator&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/data_collator&quot;},{&quot;title&quot;:&quot;Keras callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/keras_callbacks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/keras_callbacks&quot;},{&quot;title&quot;:&quot;Logging&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/logging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/logging&quot;},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/model&quot;},{&quot;title&quot;:&quot;Text 
Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/text_generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/text_generation&quot;},{&quot;title&quot;:&quot;ONNX&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/onnx&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/onnx&quot;},{&quot;title&quot;:&quot;Optimization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/optimizer_schedules&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/optimizer_schedules&quot;},{&quot;title&quot;:&quot;Model outputs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/output&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/output&quot;},{&quot;title&quot;:&quot;Pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/pipelines&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/pipelines&quot;},{&quot;title&quot;:&quot;Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/processors&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/processors&quot;},{&quot;title&quot;:&quot;Quantization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/quantization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/quantization&quot;},{&quot;title&quot;:&quot;Tokenizer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/tokenizer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/tokenizer&quot;},{&quot;title&quot;:&quot;Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/trainer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/trainer&quot;},{&quot;title&quot;:&quot;DeepSpeed Integration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/deepspeed&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/deepspeed&quot;},{&quot;title&quot;:&quot;Feature Extractor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/feature_extractor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/feature_extractor&quot;},{&quot;title&quot;:&quot;Image Processor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/image_processor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/image_processor&quot;}]},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Text 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALBERT&quot;,&quot;id&quot;:&quot;model_doc/albert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/albert&quot;},{&quot;title&quot;:&quot;BART&quot;,&quot;id&quot;:&quot;model_doc/bart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bart&quot;},{&quot;title&quot;:&quot;BARThez&quot;,&quot;id&quot;:&quot;model_doc/barthez&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/barthez&quot;},{&quot;title&quot;:&quot;BARTpho&quot;,&quot;id&quot;:&quot;model_doc/bartpho&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bartpho&quot;},{&quot;title&quot;:&quot;BERT&quot;,&quot;id&quot;:&quot;model_doc/bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert&quot;},{&quot;title&quot;:&quot;BertGeneration&quot;,&quot;id&quot;:&quot;model_doc/bert-generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-generation&quot;},{&quot;title&quot;:&quot;BertJapanese&quot;,&quot;id&quot;:&quot;model_doc/bert-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-japanese&quot;},{&quot;title&quot;:&quot;Bertweet&quot;,&quot;id&quot;:&quot;model_doc/bertweet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bertweet&quot;},{&quot;title&quot;:&quot;BigBird&quot;,&quot;id&quot;:&quot;model_doc/big_bird&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/big_bird&quot;},{&quot;title&quot;:&quot;BigBirdPegasus&quot;,&quot;id&quot;:&quot;model_doc/bigbird_pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bigbird_pegasus&quot;},{&quot;title&quot;:&quot;BioGpt&quot;,&quot;id&quot;:&quot;model_doc/biogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/biogpt&quot;},{&quot;title&quot;:&quot;Blenderbot&quot;,&quot;id&quot;:&quot;model_doc/blenderbot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot&quot;},{&quot;title&quot;:&quot;Blenderbot 
Small&quot;,&quot;id&quot;:&quot;model_doc/blenderbot-small&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot-small&quot;},{&quot;title&quot;:&quot;BLOOM&quot;,&quot;id&quot;:&quot;model_doc/bloom&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bloom&quot;},{&quot;title&quot;:&quot;BORT&quot;,&quot;id&quot;:&quot;model_doc/bort&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bort&quot;},{&quot;title&quot;:&quot;ByT5&quot;,&quot;id&quot;:&quot;model_doc/byt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/byt5&quot;},{&quot;title&quot;:&quot;CamemBERT&quot;,&quot;id&quot;:&quot;model_doc/camembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/camembert&quot;},{&quot;title&quot;:&quot;CANINE&quot;,&quot;id&quot;:&quot;model_doc/canine&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/canine&quot;},{&quot;title&quot;:&quot;CodeGen&quot;,&quot;id&quot;:&quot;model_doc/codegen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/codegen&quot;},{&quot;title&quot;:&quot;CodeLlama&quot;,&quot;id&quot;:&quot;model_doc/code_llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/code_llama&quot;},{&quot;title&quot;:&quot;ConvBERT&quot;,&quot;id&quot;:&quot;model_doc/convbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convbert&quot;},{&quot;title&quot;:&quot;CPM&quot;,&quot;id&quot;:&quot;model_doc/cpm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpm&quot;},{&quot;title&quot;:&quot;CPMANT&quot;,&quot;id&quot;:&quot;model_doc/cpmant&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpmant&quot;},{&quot;title&quot;:&quot;CTRL&quot;,&quot;id&quot;:&quot;model_doc/ctrl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ctrl&quot;},{&quot;title&quot;:&quot;DeBERTa&quot;,&quot;id&quot;:&quot;model_doc/deberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta&quot;},{&quot;title&quot;:&quot;DeBERTa-v2&quot;,&quot;id&quot;:&quot;model_doc/deberta-v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta-v2&quot;},{&quot;title&quot;:&quot;DialoGPT&quot;,&quot;id&quot;:&quot;model_doc/dialogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dialogpt&quot;},{&quot;title&quot;:&quot;DistilBERT&quot;,&quot;id&quot;:&quot;model_doc/distilbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/distilbert&quot;},{&quot;title&quot;:&quot;DPR&quot;,&quot;id&quot;:&quot;model_doc/dpr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpr&quot;},{&quot;title&quot;:&quot;ELECTRA&quot;,&quot;id&quot;:&quot;model_doc/electra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/electra&quot;},{&quot;title&quot;:&quot;Encoder Decoder 
Models&quot;,&quot;id&quot;:&quot;model_doc/encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encoder-decoder&quot;},{&quot;title&quot;:&quot;ERNIE&quot;,&quot;id&quot;:&quot;model_doc/ernie&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie&quot;},{&quot;title&quot;:&quot;ErnieM&quot;,&quot;id&quot;:&quot;model_doc/ernie_m&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie_m&quot;},{&quot;title&quot;:&quot;ESM&quot;,&quot;id&quot;:&quot;model_doc/esm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/esm&quot;},{&quot;title&quot;:&quot;Falcon&quot;,&quot;id&quot;:&quot;model_doc/falcon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/falcon&quot;},{&quot;title&quot;:&quot;FLAN-T5&quot;,&quot;id&quot;:&quot;model_doc/flan-t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-t5&quot;},{&quot;title&quot;:&quot;FLAN-UL2&quot;,&quot;id&quot;:&quot;model_doc/flan-ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-ul2&quot;},{&quot;title&quot;:&quot;FlauBERT&quot;,&quot;id&quot;:&quot;model_doc/flaubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flaubert&quot;},{&quot;title&quot;:&quot;FNet&quot;,&quot;id&quot;:&quot;model_doc/fnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fnet&quot;},{&quot;title&quot;:&quot;FSMT&quot;,&quot;id&quot;:&quot;model_doc/fsmt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fsmt&quot;},{&quot;title&quot;:&quot;Funnel Transformer&quot;,&quot;id&quot;:&quot;model_doc/funnel&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/funnel&quot;},{&quot;title&quot;:&quot;GPT&quot;,&quot;id&quot;:&quot;model_doc/openai-gpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/openai-gpt&quot;},{&quot;title&quot;:&quot;GPT Neo&quot;,&quot;id&quot;:&quot;model_doc/gpt_neo&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neo&quot;},{&quot;title&quot;:&quot;GPT NeoX&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox&quot;},{&quot;title&quot;:&quot;GPT NeoX Japanese&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox_japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese&quot;},{&quot;title&quot;:&quot;GPT-J&quot;,&quot;id&quot;:&quot;model_doc/gptj&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptj&quot;},{&quot;title&quot;:&quot;GPT2&quot;,&quot;id&quot;:&quot;model_doc/gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt2&quot;},{&quot;title&quot;:&quot;GPTBigCode&quot;,&quot;id&quot;:&quot;model_doc/gpt_bigcode&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode&quot;},{&quot;title&quot;:&quot;GPTSAN 
Japanese&quot;,&quot;id&quot;:&quot;model_doc/gptsan-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese&quot;},{&quot;title&quot;:&quot;GPTSw3&quot;,&quot;id&quot;:&quot;model_doc/gpt-sw3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt-sw3&quot;},{&quot;title&quot;:&quot;HerBERT&quot;,&quot;id&quot;:&quot;model_doc/herbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/herbert&quot;},{&quot;title&quot;:&quot;I-BERT&quot;,&quot;id&quot;:&quot;model_doc/ibert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ibert&quot;},{&quot;title&quot;:&quot;Jukebox&quot;,&quot;id&quot;:&quot;model_doc/jukebox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/jukebox&quot;},{&quot;title&quot;:&quot;LED&quot;,&quot;id&quot;:&quot;model_doc/led&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/led&quot;},{&quot;title&quot;:&quot;LLaMA&quot;,&quot;id&quot;:&quot;model_doc/llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama&quot;},{&quot;title&quot;:&quot;Llama2&quot;,&quot;id&quot;:&quot;model_doc/llama2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama2&quot;},{&quot;title&quot;:&quot;Longformer&quot;,&quot;id&quot;:&quot;model_doc/longformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longformer&quot;},{&quot;title&quot;:&quot;LongT5&quot;,&quot;id&quot;:&quot;model_doc/longt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longt5&quot;},{&quot;title&quot;:&quot;LUKE&quot;,&quot;id&quot;:&quot;model_doc/luke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/luke&quot;},{&quot;title&quot;:&quot;M2M100&quot;,&quot;id&quot;:&quot;model_doc/m2m_100&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/m2m_100&quot;},{&quot;title&quot;:&quot;MarianMT&quot;,&quot;id&quot;:&quot;model_doc/marian&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/marian&quot;},{&quot;title&quot;:&quot;MarkupLM&quot;,&quot;id&quot;:&quot;model_doc/markuplm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/markuplm&quot;},{&quot;title&quot;:&quot;MBart and 
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot;PLBart&quot;,&quot;id&quot;
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
<div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-gray-500/10 to-gray-500/5"><svg class="text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M14.9804 3C14.9217 3.0002 14.8631 3.00555 14.8054 3.016C11.622 3.58252 8.76073 5.30669 6.77248 7.85653C4.78422 10.4064 3.80955 13.6016 4.03612 16.8271C4.26268 20.0525 5.67447 23.0801 7.99967 25.327C10.3249 27.5738 13.3991 28.8811 16.6304 28.997C16.7944 29.003 16.9584 28.997 17.1204 28.997C19.2193 28.9984 21.2877 28.4943 23.1507 27.5274C25.0137 26.5605 26.6164 25.1592 27.8234 23.442C27.9212 23.294 27.9783 23.1229 27.9889 22.9458C27.9995 22.7687 27.9633 22.592 27.884 22.4333C27.8046 22.2747 27.6848 22.1397 27.5367 22.0421C27.3887 21.9444 27.2175 21.8875 27.0404 21.877C25.0426 21.7017 23.112 21.0693 21.3976 20.0288C19.6832 18.9884 18.231 17.5676 17.1533 15.8764C16.0756 14.1852 15.4011 12.2688 15.1822 10.2754C14.9632 8.28193 15.2055 6.26484 15.8904 4.38C15.9486 4.22913 15.97 4.06652 15.9527 3.90572C15.9354 3.74492 15.8799 3.59059 15.7909 3.45557C15.7019 3.32055 15.5819 3.20877 15.4409 3.12952C15.2999 3.05028 15.142 3.00587 14.9804 3Z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Switch between documentation themes </div></div></div> <div class="flex items-center space-x-2.5"><a href="/join"><button class="rounded-lg bg-white bg-gradient-to-br from-gray-100/20 to-gray-200/60 py-1.5 px-5 font-semibold text-gray-700 shadow-sm ring-1 ring-gray-300/60 hover:to-gray-100/70 hover:ring-gray-300/30 active:shadow-inner">Sign Up</button></a> <p class="text-gray-500 dark:text-gray-300">to get started</p></div></div></div> <div class="prose-doc prose relative mx-auto max-w-4xl break-words"> <p></p> <h1 class="relative group"><a id="automatic-speech-recognition" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#automatic-speech-recognition"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-4lpwt">Automatic speech recognition</span></h1> <div class="flex space-x-1 absolute z-10 right-0 top-0"> <div class="relative colab-dropdown "><button class=" " type="button"><img alt="Open In Colab" class="!m-0" src="https://colab.research.google.com/assets/colab-badge.svg"></button> </div> <div class="relative colab-dropdown "><button class=" " type="button"><img alt="Open In Studio Lab" class="!m-0" src="https://studiolab.sagemaker.aws/studiolab.svg"></button> </div></div> 
<iframe class="w-full xl:w-4/6 h-80" src="https://www.youtube-nocookie.com/embed/TksaY_FDgnk" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe> <p data-svelte-h="svelte-jaigob">Automatic speech recognition (ASR) converts a speech signal to text, mapping a sequence of audio inputs to text outputs. Virtual assistants like Siri and Alexa use ASR models to help users everyday, and there are many other useful user-facing applications like live captioning and note-taking during meetings.</p> <p data-svelte-h="svelte-1aff4p7">This guide will show you how to:</p> <ol data-svelte-h="svelte-swvwb8"><li>Finetune <a href="https://huggingface.co/facebook/wav2vec2-base" rel="nofollow">Wav2Vec2</a> on the <a href="https://huggingface.co/datasets/PolyAI/minds14" rel="nofollow">MInDS-14</a> dataset to transcribe audio to text.</li> <li>Use your finetuned model for inference.</li></ol> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400">The task illustrated in this tutorial is supported by the following model architectures: <p data-svelte-h="svelte-1xfq5e"><a href="../model_doc/data2vec-audio">Data2VecAudio</a>, <a href="../model_doc/hubert">Hubert</a>, <a href="../model_doc/mctct">M-CTC-T</a>, <a href="../model_doc/sew">SEW</a>, <a href="../model_doc/sew-d">SEW-D</a>, <a href="../model_doc/unispeech">UniSpeech</a>, <a href="../model_doc/unispeech-sat">UniSpeechSat</a>, <a href="../model_doc/wav2vec2">Wav2Vec2</a>, <a href="../model_doc/wav2vec2-conformer">Wav2Vec2-Conformer</a>, <a href="../model_doc/wavlm">WavLM</a></p></div> <p data-svelte-h="svelte-1c9nexd">Before you begin, make sure you have all the necessary libraries installed:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class="">pip install transformers datasets evaluate jiwer</pre></div> <p data-svelte-h="svelte-k76o1m">We encourage you to login to your Hugging Face account so you can upload and share your model with the community. 

```py
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```
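
If you are working in a plain Python script rather than a notebook, `huggingface_hub.login` offers the same token prompt. This is an equivalent alternative, not part of the original guide:

```py
>>> from huggingface_hub import login

>>> login()  # prompts for your Hugging Face access token
```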

## Load MInDS-14 dataset

Start by loading a smaller subset of the [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) dataset from the 🤗 Datasets library. This'll give you a chance to experiment and make sure everything works before spending more time training on the full dataset.

```py
>>> from datasets import load_dataset, Audio

>>> minds = load_dataset("PolyAI/minds14", name="en-US", split="train[:100]")
```

Split the dataset's `train` split into a train and test set with the `train_test_split` method:

```py
>>> minds = minds.train_test_split(test_size=0.2)
```

Then take a look at the dataset:

```py
>>> minds
DatasetDict({
    train: Dataset({
        features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'],
        num_rows: 16
    })
    test: Dataset({
        features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'],
        num_rows: 4
    })
})
```

While the dataset contains a lot of useful information, like `lang_id` and `english_transcription`, you'll focus on the `audio` and `transcription` in this guide.
Remove the other columns with the [remove_columns](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.remove_columns) method:

```py
>>> minds = minds.remove_columns(["english_transcription", "intent_class", "lang_id"])
```

Take a look at the example again:

```py
>>> minds["train"][0]
{'audio': {'array': array([-0.00024414,  0.        ,  0.        , ...,  0.00024414,
         0.00024414,  0.00024414], dtype=float32),
  'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav',
  'sampling_rate': 8000},
 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav',
 'transcription': "hi I'm trying to use the banking app on my phone and currently my checking and savings account balance is not refreshing"}
```

There are two fields:

- `audio`: a 1-dimensional `array` of the speech signal that must be called to load and resample the audio file.
- `transcription`: the target text.
class="hljs-meta">&gt;&gt;&gt; </span>processor = AutoProcessor.from_pretrained(<span class="hljs-string">"facebook/wav2vec2-base"</span>)</pre></div> <p data-svelte-h="svelte-yrxfgz">The MInDS-14 dataset has a sampling rate of 8000kHz (you can find this information in its <a href="https://huggingface.co/datasets/PolyAI/minds14" rel="nofollow">dataset card</a>), which means you’ll need to resample the dataset to 16000kHz to use the pretrained Wav2Vec2 model:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>minds = minds.cast_column(<span class="hljs-string">"audio"</span>, Audio(sampling_rate=<span class="hljs-number">16_000</span>)) <span class="hljs-meta">&gt;&gt;&gt; </span>minds[<span class="hljs-string">"train"</span>][<span class="hljs-number">0</span>] {<span class="hljs-string">'audio'</span>: {<span class="hljs-string">'array'</span>: array([-<span class="hljs-number">2.38064706e-04</span>, -<span class="hljs-number">1.58618059e-04</span>, -<span class="hljs-number">5.43987835e-06</span>, ..., <span class="hljs-number">2.78103951e-04</span>, <span class="hljs-number">2.38446111e-04</span>, <span class="hljs-number">1.18740834e-04</span>], dtype=float32), <span class="hljs-string">'path'</span>: <span class="hljs-string">'/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav'</span>, <span class="hljs-string">'sampling_rate'</span>: <span class="hljs-number">16000</span>}, <span class="hljs-string">'path'</span>: <span class="hljs-string">'/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav'</span>, <span class="hljs-string">'transcription'</span>: <span class="hljs-string">"hi I'm trying to use the banking app on my phone and currently my checking and savings account balance is not refreshing"</span>}</pre></div> <p data-svelte-h="svelte-w66yza">As you can see in the <code>transcription</code> above, the text contains a mix of upper and lowercase characters. 
The Wav2Vec2 tokenizer is only trained on uppercase characters, so you'll need to make sure the text matches the tokenizer's vocabulary:

```py
>>> def uppercase(example):
...     return {"transcription": example["transcription"].upper()}


>>> minds = minds.map(uppercase)
```

Now create a preprocessing function that:

1. Calls the `audio` column to load and resample the audio file.
2. Extracts the `input_values` from the audio file and tokenizes the `transcription` column with the processor.

```py
>>> def prepare_dataset(batch):
...     audio = batch["audio"]
...     batch = processor(audio["array"], sampling_rate=audio["sampling_rate"], text=batch["transcription"])
...     batch["input_length"] = len(batch["input_values"][0])
...     return batch
```

To apply the preprocessing function over the entire dataset, use 🤗 Datasets [map](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.map) function. You can speed up `map` by increasing the number of processes with the `num_proc` parameter. Remove the columns you don't need with the [remove_columns](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.remove_columns) method:

```py
>>> encoded_minds = minds.map(prepare_dataset, remove_columns=minds.column_names["train"], num_proc=4)
```

🤗 Transformers doesn't have a data collator for ASR, so you'll need to adapt the [DataCollatorWithPadding](/docs/transformers/v4.34.0/en/main_classes/data_collator#transformers.DataCollatorWithPadding) to create a batch of examples. It'll also dynamically pad your text and labels to the length of the longest element in its batch (instead of the entire dataset) so they are a uniform length. While it is possible to pad your text in the `tokenizer` function by setting `padding=True`, dynamic padding is more efficient.
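
To see what padding "to the length of the longest element in its batch" means in isolation, here is a minimal, self-contained sketch with plain PyTorch; it only illustrates the idea and is not part of the collator used in this guide:

```py
>>> import torch
>>> from torch.nn.utils.rnn import pad_sequence

>>> # two inputs of different lengths are padded only up to the longest element in this batch (5),
>>> # not up to the longest element in the whole dataset
>>> batch = [torch.ones(3), torch.ones(5)]
>>> pad_sequence(batch, batch_first=True, padding_value=0.0).shape
torch.Size([2, 5])
```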

Unlike other data collators, this specific data collator needs to apply a different padding method to `input_values` and `labels`:

```py
>>> import torch

>>> from dataclasses import dataclass, field
>>> from typing import Any, Dict, List, Optional, Union


>>> @dataclass
... class DataCollatorCTCWithPadding:
...     processor: AutoProcessor
...     padding: Union[bool, str] = "longest"

...     def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
...         # split inputs and labels since they have to be of different lengths and need
...         # different padding methods
...         input_features = [{"input_values": feature["input_values"][0]} for feature in features]
...         label_features = [{"input_ids": feature["labels"]} for feature in features]

...         batch = self.processor.pad(input_features, padding=self.padding, return_tensors="pt")

...         labels_batch = self.processor.pad(labels=label_features, padding=self.padding, return_tensors="pt")

...         # replace padding with -100 to ignore loss correctly
...         labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)

...         batch["labels"] = labels

...         return batch
```

Now instantiate your `DataCollatorCTCWithPadding`:

```py
>>> data_collator = DataCollatorCTCWithPadding(processor=processor, padding="longest")
```

## Evaluate

Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load the [word error rate](https://huggingface.co/spaces/evaluate-metric/wer) (WER) metric (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric):

```py
>>> import evaluate

>>> wer = evaluate.load("wer")
```

Then create a function that passes your predictions and labels to [compute](https://huggingface.co/docs/evaluate/v0.4.0/en/package_reference/main_classes#evaluate.EvaluationModule.compute) to calculate the WER:

```py
>>> import numpy as np


>>> def compute_metrics(pred):
...     pred_logits = pred.predictions
...     pred_ids = np.argmax(pred_logits, axis=-1)

...     pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id

...     pred_str = processor.batch_decode(pred_ids)
...     label_str = processor.batch_decode(pred.label_ids, group_tokens=False)

...     # use a new name for the score so the `wer` metric object isn't shadowed by a local variable
...     wer_score = wer.compute(predictions=pred_str, references=label_str)

...     return {"wer": wer_score}
```

Your `compute_metrics` function is ready to go now, and you'll return to it when you set up your training.
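
As a quick sanity check of the metric itself (an aside, not part of the original guide), you can call the loaded `wer` metric directly on a toy pair of strings. One substituted word over a four-word reference gives a WER of 0.25:

```py
>>> # one substitution ("SO" vs. "VERY") over a 4-word reference -> WER = 1/4
>>> wer.compute(predictions=["I AM SO HAPPY"], references=["I AM VERY HAPPY"])
0.25
```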
fill="currentColor" xmlns="http://www.w3.org/2000/svg"><path d="M1.39125 1.9725L0.0883333 0.669997L0.677917 0.0804138L8.9275 8.33041L8.33792 8.91958L6.95875 7.54041C6.22592 8.00523 5.37572 8.25138 4.50792 8.25C2.26125 8.25 0.392083 6.63333 0 4.5C0.179179 3.52946 0.667345 2.64287 1.39167 1.9725H1.39125ZM5.65667 6.23833L5.04667 5.62833C4.81335 5.73996 4.55116 5.77647 4.29622 5.73282C4.04129 5.68918 3.80617 5.56752 3.62328 5.38463C3.44039 5.20175 3.31874 4.96663 3.27509 4.71169C3.23144 4.45676 3.26795 4.19456 3.37958 3.96125L2.76958 3.35125C2.50447 3.75187 2.38595 4.2318 2.4341 4.70978C2.48225 5.18777 2.6941 5.63442 3.0338 5.97411C3.37349 6.31381 3.82015 6.52567 4.29813 6.57382C4.77611 6.62197 5.25605 6.50345 5.65667 6.23833ZM2.83042 1.06666C3.35 0.862497 3.91625 0.749997 4.50792 0.749997C6.75458 0.749997 8.62375 2.36666 9.01583 4.5C8.88816 5.19404 8.60119 5.84899 8.1775 6.41333L6.56917 4.805C6.61694 4.48317 6.58868 4.15463 6.48664 3.84569C6.3846 3.53675 6.21162 3.256 5.98156 3.02594C5.7515 2.79588 5.47075 2.6229 5.16181 2.52086C4.85287 2.41882 4.52433 2.39056 4.2025 2.43833L2.83042 1.06708V1.06666Z" fill="currentColor"></path></svg> <span>Hide Pytorch content</span></div></div> <div class="framework-content"><div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-1ufp0ay">If you aren’t familiar with finetuning a model with the <a href="/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer">Trainer</a>, take a look at the basic tutorial <a href="../training#train-with-pytorch-trainer">here</a>!</p></div> <p data-svelte-h="svelte-17ujk4q">You’re ready to start training your model now! Load Wav2Vec2 with <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoModelForCTC">AutoModelForCTC</a>. Specify the reduction to apply with the <code>ctc_loss_reduction</code> parameter. 
It is often better to use the average instead of the default summation, so that the magnitude of the loss does not scale with the length of the target transcriptions in a batch:

```
>>> from transformers import AutoModelForCTC, TrainingArguments, Trainer

>>> model = AutoModelForCTC.from_pretrained(
...     "facebook/wav2vec2-base",
...     ctc_loss_reduction="mean",
...     pad_token_id=processor.tokenizer.pad_token_id,
... )
```

At this point, only three steps remain:

1. Define your training hyperparameters in [TrainingArguments](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.TrainingArguments). The only required parameter is `output_dir` which specifies where to save your model. You’ll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). The [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) will evaluate the WER and save a training checkpoint at the step intervals set by `eval_steps` and `save_steps`.
2. Pass the training arguments to [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) along with the model, dataset, tokenizer, data collator, and `compute_metrics` function.
3. Call [train()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.train) to finetune your model.

```
>>> training_args = TrainingArguments(
...     output_dir="my_awesome_asr_mind_model",
...     per_device_train_batch_size=8,
...     gradient_accumulation_steps=2,
...     learning_rate=1e-5,
...     warmup_steps=500,
...     max_steps=2000,
...     gradient_checkpointing=True,
...     fp16=True,
...     group_by_length=True,
...     evaluation_strategy="steps",
...     per_device_eval_batch_size=8,
...     save_steps=1000,
...     eval_steps=1000,
...     logging_steps=25,
...     load_best_model_at_end=True,
...     metric_for_best_model="wer",
...     greater_is_better=False,
...     push_to_hub=True,
... )

>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=encoded_minds["train"],
...     eval_dataset=encoded_minds["test"],
...     tokenizer=processor,
...     data_collator=data_collator,
...     compute_metrics=compute_metrics,
... )

>>> trainer.train()
```
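Note that a small per-device batch size is combined with gradient accumulation here. As a quick back-of-the-envelope sketch of what these hyperparameters imply (the single-device assumption is ours, not part of the guide):

```
>>> # each optimizer step accumulates gradients over 2 forward passes of 8 examples
>>> effective_batch_size = 8 * 2  # per_device_train_batch_size * gradient_accumulation_steps, one device assumed
>>> effective_batch_size
16
>>> # with max_steps=2000, training processes roughly this many examples in total
>>> effective_batch_size * 2000
32000
```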
Once training is completed, share your model to the Hub with the [push\_to\_hub()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.push_to_hub) method so everyone can use your model:

```
>>> trainer.push_to_hub()
```

For a more in-depth example of how to finetune a model for automatic speech recognition, take a look at this blog [post](https://huggingface.co/blog/fine-tune-wav2vec2-english) for English ASR and this [post](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for multilingual ASR.

## Inference

Great, now that you’ve finetuned a model, you can use it for inference!

Load an audio file you’d like to run inference on. Remember to resample the sampling rate of the audio file to match the sampling rate of the model if you need to!

```
>>> from datasets import load_dataset, Audio

>>> dataset = load_dataset("PolyAI/minds14", "en-US", split="train")
>>> dataset = dataset.cast_column("audio", Audio(sampling_rate=16000))
>>> sampling_rate = dataset.features["audio"].sampling_rate
>>> audio_file = dataset[0]["audio"]["path"]
```

The simplest way to try out your finetuned model for inference is to use it in a [pipeline()](/docs/transformers/v4.34.0/en/main_classes/pipelines#transformers.pipeline). Instantiate a `pipeline` for automatic speech recognition with your model, and pass your audio file to it:

```
>>> from transformers import pipeline

>>> transcriber = pipeline("automatic-speech-recognition", model="stevhliu/my_awesome_asr_minds_model")
>>> transcriber(audio_file)
{'text': 'I WOUD LIKE O SET UP JOINT ACOUNT WTH Y PARTNER'}
```

The transcription is decent, but it could be better! Try finetuning your model on more examples to get even better results!

You can also manually replicate the results of the `pipeline` if you’d like:

Load a processor to preprocess the audio file and transcription and return the `input` as PyTorch tensors:

```
>>> from transformers import AutoProcessor

>>> processor = AutoProcessor.from_pretrained("stevhliu/my_awesome_asr_mind_model")
>>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
```

Pass your inputs to the model and return the logits:

```
>>> import torch
>>> from transformers import AutoModelForCTC

>>> model = AutoModelForCTC.from_pretrained("stevhliu/my_awesome_asr_mind_model")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
```

Get the predicted `input_ids` with the highest probability, and use the processor to decode the predicted `input_ids` back into text:

```
>>> predicted_ids = torch.argmax(logits, dim=-1)
>>> transcription = processor.batch_decode(predicted_ids)
>>> transcription
['I WOUL LIKE O SET UP JOINT ACOUNT WTH Y PARTNER']
```
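To put a rough number on how far this transcription is from the reference, you can score the single prediction with the same WER metric used during training. This is a minimal sketch, assuming the 🤗 Evaluate library is installed and that the freshly loaded MInDS-14 example still carries its reference text in the `transcription` column (the column name and the uppercasing are our assumptions, not part of the guide):

```
>>> import evaluate

>>> wer = evaluate.load("wer")
>>> reference = dataset[0]["transcription"].upper()  # assumed column; uppercased to match the model's vocabulary
>>> wer.compute(predictions=transcription, references=[reference])
```

A lower score is better; 0.0 would mean a perfect transcription.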
2023-10-05T13:33:45.207Z
Image classification
https://huggingface.co/docs/transformers/v4.34.0/en/tasks/image_classification
# Image classification

Image classification assigns a label or class to an image. Unlike text or audio classification, the inputs are the pixel values that comprise an image. There are many applications for image classification, such as detecting damage after a natural disaster, monitoring crop health, or helping screen medical images for signs of disease.

This guide illustrates how to:

1. Fine-tune [ViT](model_doc/vit) on the [Food-101](https://huggingface.co/datasets/food101) dataset to classify a food item in an image.
2. Use your fine-tuned model for inference.

The task illustrated in this tutorial is supported by the following model architectures:

[BEiT](../model_doc/beit), [BiT](../model_doc/bit), [ConvNeXT](../model_doc/convnext), [ConvNeXTV2](../model_doc/convnextv2), [CvT](../model_doc/cvt), [Data2VecVision](../model_doc/data2vec-vision), [DeiT](../model_doc/deit), [DiNAT](../model_doc/dinat), [DINOv2](../model_doc/dinov2), [EfficientFormer](../model_doc/efficientformer), [EfficientNet](../model_doc/efficientnet), [FocalNet](../model_doc/focalnet), [ImageGPT](../model_doc/imagegpt), [LeViT](../model_doc/levit), [MobileNetV1](../model_doc/mobilenet_v1), [MobileNetV2](../model_doc/mobilenet_v2), [MobileViT](../model_doc/mobilevit), [MobileViTV2](../model_doc/mobilevitv2), [NAT](../model_doc/nat), [Perceiver](../model_doc/perceiver), [PoolFormer](../model_doc/poolformer), [PVT](../model_doc/pvt), [RegNet](../model_doc/regnet), [ResNet](../model_doc/resnet), [SegFormer](../model_doc/segformer), [SwiftFormer](../model_doc/swiftformer), [Swin Transformer](../model_doc/swin), [Swin Transformer V2](../model_doc/swinv2), [VAN](../model_doc/van), [ViT](../model_doc/vit), [ViT Hybrid](../model_doc/vit_hybrid), [ViTMSN](../model_doc/vit_msn)

Before you begin, make sure you have all the necessary libraries installed:

```
pip install transformers datasets evaluate
```

We encourage you to log in to your Hugging Face account to upload and share your model with the community. When prompted, enter your token to log in:

```
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```

## Load Food-101 dataset

Start by loading a smaller subset of the Food-101 dataset from the 🤗 Datasets library. This will give you a chance to experiment and make sure everything works before spending more time training on the full dataset.

```
>>> from datasets import load_dataset

>>> food = load_dataset("food101", split="train[:5000]")
```

Split the dataset’s `train` split into a train and test set with the [train\_test\_split](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.train_test_split) method:

```
>>> food = food.train_test_split(test_size=0.2)
```

Then take a look at an example:

```
>>> food["train"][0]
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=512x512 at 0x7F52AFC8AC50>,
 'label': 79}
```

Each example in the dataset has two fields:

- `image`: a PIL image of the food item
- `label`: the label class of the food item

To make it easier for the model to get the label name from the label id, create a dictionary that maps the label name to an integer and vice versa:

```
>>> labels = food["train"].features["label"].names
>>> label2id, id2label = dict(), dict()
>>> for i, label in enumerate(labels):
...     label2id[label] = str(i)
...     id2label[str(i)] = label
```

Now you can convert the label id to a label name:

```
>>> id2label[str(79)]
'prime_rib'
```

## Preprocess

The next step is to load a ViT image processor to process the image into a tensor:

```
>>> from transformers import AutoImageProcessor

>>> checkpoint = "google/vit-base-patch16-224-in21k"
>>> image_processor = AutoImageProcessor.from_pretrained(checkpoint)
```

Apply some image transformations to the images to make the model more robust against overfitting. Here you’ll use torchvision’s [`transforms`](https://pytorch.org/vision/stable/transforms.html) module, but you can also use any image library you like.

Crop a random part of the image, resize it, and normalize it with the image mean and standard deviation:

```
>>> from torchvision.transforms import RandomResizedCrop, Compose, Normalize, ToTensor

>>> normalize = Normalize(mean=image_processor.image_mean, std=image_processor.image_std)
>>> size = (
...     image_processor.size["shortest_edge"]
...     if "shortest_edge" in image_processor.size
...     else (image_processor.size["height"], image_processor.size["width"])
... )
>>> _transforms = Compose([RandomResizedCrop(size), ToTensor(), normalize])
```

Then create a preprocessing function to apply the transforms and return the `pixel_values` - the inputs to the model - of the image:

```
>>> def transforms(examples):
...     examples["pixel_values"] = [_transforms(img.convert("RGB")) for img in examples["image"]]
...     del examples["image"]
...     return examples
```

To apply the preprocessing function over the entire dataset, use the 🤗 Datasets [with\_transform](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.with_transform) method. The transforms are applied on the fly when you load an element of the dataset:

```
>>> food = food.with_transform(transforms)
```

Now create a batch of examples using [DefaultDataCollator](/docs/transformers/v4.34.0/en/main_classes/data_collator#transformers.DefaultDataCollator). Unlike other data collators in 🤗 Transformers, the `DefaultDataCollator` does not apply additional preprocessing such as padding.

```
>>> from transformers import DefaultDataCollator

>>> data_collator = DefaultDataCollator()
```

To avoid overfitting and to make the model more robust, add some data augmentation to the training part of the dataset. Here we use Keras preprocessing layers to define the transformations for the training data (includes data augmentation), and transformations for the validation data (only center cropping, resizing and normalizing). You can use `tf.image` or any other library you prefer.

```
>>> from tensorflow import keras
>>> from tensorflow.keras import layers

>>> size = (image_processor.size["height"], image_processor.size["width"])

>>> train_data_augmentation = keras.Sequential(
...     [
...         layers.RandomCrop(size[0], size[1]),
...         layers.Rescaling(scale=1.0 / 127.5, offset=-1),
...         layers.RandomFlip("horizontal"),
...         layers.RandomRotation(factor=0.02),
...         layers.RandomZoom(height_factor=0.2, width_factor=0.2),
...     ],
...     name="train_data_augmentation",
... )

>>> val_data_augmentation = keras.Sequential(
...     [
...         layers.CenterCrop(size[0], size[1]),
...         layers.Rescaling(scale=1.0 / 127.5, offset=-1),
...     ],
...     name="val_data_augmentation",
... )
```

Next, create functions to apply appropriate transformations to a batch of images, instead of one image at a time.

```
>>> import numpy as np
>>> import tensorflow as tf
>>> from PIL import Image


>>> def convert_to_tf_tensor(image: Image):
...     np_image = np.array(image)
...     tf_image = tf.convert_to_tensor(np_image)
...     # add a batch dimension, since the Keras augmentation layers operate on batched input
...     return tf.expand_dims(tf_image, 0)


>>> def preprocess_train(example_batch):
...     """Apply train_transforms across a batch."""
...     images = [
...         train_data_augmentation(convert_to_tf_tensor(image.convert("RGB"))) for image in example_batch["image"]
...     ]
...     example_batch["pixel_values"] = [tf.transpose(tf.squeeze(image)) for image in images]
...     return example_batch


>>> def preprocess_val(example_batch):
...     """Apply val_transforms across a batch."""
...     images = [
...         val_data_augmentation(convert_to_tf_tensor(image.convert("RGB"))) for image in example_batch["image"]
...     ]
...     example_batch["pixel_values"] = [tf.transpose(tf.squeeze(image)) for image in images]
...     return example_batch
```

Use 🤗 Datasets [set\_transform](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.set_transform) to apply the transformations on the fly:

```
food["train"].set_transform(preprocess_train)
food["test"].set_transform(preprocess_val)
```

As a final preprocessing step, create a batch of examples using `DefaultDataCollator`. Unlike other data collators in 🤗 Transformers, the `DefaultDataCollator` does not apply additional preprocessing, such as padding.

```
>>> from transformers import DefaultDataCollator

>>> data_collator = DefaultDataCollator(return_tensors="tf")
```

## Evaluate

Including a metric during training is often helpful for evaluating your model’s performance. You can quickly load an evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load the [accuracy](https://huggingface.co/spaces/evaluate-metric/accuracy) metric (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric):

```
>>> import evaluate

>>> accuracy = evaluate.load("accuracy")
```

Then create a function that passes your predictions and labels to [compute](https://huggingface.co/docs/evaluate/v0.4.0/en/package_reference/main_classes#evaluate.EvaluationModule.compute) to calculate the accuracy:

```
>>> import numpy as np


>>> def compute_metrics(eval_pred):
...     predictions, labels = eval_pred
...     predictions = np.argmax(predictions, axis=1)
...     return accuracy.compute(predictions=predictions, references=labels)
```

Your `compute_metrics` function is ready to go now, and you’ll return to it when you set up your training.

## Train

If you aren’t familiar with finetuning a model with the [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer), take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!

You’re ready to start training your model now! Load ViT with [AutoModelForImageClassification](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoModelForImageClassification). Specify the number of expected labels along with the label mappings:

```
>>> from transformers import AutoModelForImageClassification, TrainingArguments, Trainer

>>> model = AutoModelForImageClassification.from_pretrained(
...     checkpoint,
...     num_labels=len(labels),
...     id2label=id2label,
...     label2id=label2id,
... )
```

At this point, only three steps remain:

1. Define your training hyperparameters in [TrainingArguments](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.TrainingArguments). It is important you don’t remove unused columns because that’ll drop the `image` column.
Without the `image` column, you can’t create `pixel_values`. Set `remove_unused_columns=False` to prevent this behavior! The only other required parameter is `output_dir` which specifies where to save your model. You’ll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) will evaluate the accuracy and save the training checkpoint.
2. Pass the training arguments to [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) along with the model, dataset, tokenizer, data collator, and `compute_metrics` function.
3. Call [train()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.train) to finetune your model.

```
>>> training_args = TrainingArguments(
...     output_dir="my_awesome_food_model",
...     remove_unused_columns=False,
...     evaluation_strategy="epoch",
...     save_strategy="epoch",
...     learning_rate=5e-5,
...     per_device_train_batch_size=16,
...     gradient_accumulation_steps=4,
...     per_device_eval_batch_size=16,
...     num_train_epochs=3,
...     warmup_ratio=0.1,
...     logging_steps=10,
...     load_best_model_at_end=True,
...     metric_for_best_model="accuracy",
...     push_to_hub=True,
... )

>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     data_collator=data_collator,
...     train_dataset=food["train"],
...     eval_dataset=food["test"],
...     tokenizer=image_processor,
...     compute_metrics=compute_metrics,
... )

>>> trainer.train()
```

Once training is completed, share your model to the Hub with the [push\_to\_hub()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.push_to_hub) method so everyone can use your model:

```
>>> trainer.push_to_hub()
```

If you are unfamiliar with fine-tuning a model with Keras, check out the [basic tutorial](./training#train-a-tensorflow-model-with-keras) first!

To fine-tune a model in TensorFlow, follow these steps:

1. Define the training hyperparameters, and set up an optimizer and a learning rate schedule.
2. Instantiate a pre-trained model.
3. Convert a 🤗 Dataset to a `tf.data.Dataset`.
4. Compile your model.
5. Add callbacks and use the `fit()` method to run the training.
6. Upload your model to 🤗 Hub to share with the community.

Start by defining the hyperparameters, optimizer and learning rate schedule:

```
>>> from transformers import create_optimizer

>>> batch_size = 16
>>> num_epochs = 5
>>> num_train_steps = len(food["train"]) * num_epochs
>>> learning_rate = 3e-5
>>> weight_decay_rate = 0.01

>>> optimizer, lr_schedule = create_optimizer(
...     init_lr=learning_rate,
...     num_train_steps=num_train_steps,
...     weight_decay_rate=weight_decay_rate,
...     num_warmup_steps=0,
... )
```

Then, load ViT with [TFAutoModelForImageClassification](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.TFAutoModelForImageClassification) along with the label mappings:

```
>>> from transformers import TFAutoModelForImageClassification

>>> model = TFAutoModelForImageClassification.from_pretrained(
...     checkpoint,
...     id2label=id2label,
...     label2id=label2id,
... )
```

Convert your datasets to the `tf.data.Dataset` format using the [to\_tf\_dataset](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.to_tf_dataset) and your `data_collator`:

```
>>> tf_train_dataset = food["train"].to_tf_dataset(
...     columns="pixel_values", label_cols="label", shuffle=True, batch_size=batch_size, collate_fn=data_collator
... )

>>> tf_eval_dataset = food["test"].to_tf_dataset(
...     columns="pixel_values", label_cols="label", shuffle=True, batch_size=batch_size, collate_fn=data_collator
... )
```

Configure the model for training with `compile()`:

```
>>> from tensorflow.keras.losses import SparseCategoricalCrossentropy

>>> loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
>>> model.compile(optimizer=optimizer, loss=loss)
```

To compute the accuracy from the predictions and push your model to the 🤗 Hub, use [Keras callbacks](../main_classes/keras_callbacks). Pass your `compute_metrics` function to [KerasMetricCallback](../main_classes/keras_callbacks#transformers.KerasMetricCallback), and use the [PushToHubCallback](../main_classes/keras_callbacks#transformers.PushToHubCallback) to upload the model:

```
>>> from transformers.keras_callbacks import KerasMetricCallback, PushToHubCallback

>>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_eval_dataset)
>>> push_to_hub_callback = PushToHubCallback(
...     output_dir="food_classifier",
...     tokenizer=image_processor,
...     save_strategy="no",
... )
>>> callbacks = [metric_callback, push_to_hub_callback]
```

Finally, you are ready to train your model! Call `fit()` with your training and validation datasets, the number of epochs, and your callbacks to fine-tune the model:

```
>>> model.fit(tf_train_dataset, validation_data=tf_eval_dataset, epochs=num_epochs, callbacks=callbacks)
Epoch 1/5
250/250 [==============================] - 313s 1s/step - loss: 2.5623 - val_loss: 1.4161 - accuracy: 0.9290
Epoch 2/5
250/250 [==============================] - 265s 1s/step - loss: 0.9181 - val_loss: 0.6808 - accuracy: 0.9690
Epoch 3/5
250/250 [==============================] - 252s 1s/step - loss: 0.3910 - val_loss: 0.4303 - accuracy: 0.9820
Epoch 4/5
250/250 [==============================] - 251s 1s/step - loss: 0.2028 - val_loss: 0.3191 - accuracy: 0.9900
Epoch 5/5
250/250 [==============================] - 238s 949ms/step - loss: 0.1232 - val_loss: 0.3259 - accuracy: 0.9890
```

Congratulations! You have fine-tuned your model and shared it on the 🤗 Hub. You can now use it for inference!

For a more in-depth example of how to finetune a model for image classification, take a look at the corresponding [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).

## Inference

Great, now that you’ve fine-tuned a model, you can use it for inference!

Load an image you’d like to run inference on:

```
>>> ds = load_dataset("food101", split="validation[:10]")
>>> image = ds["image"][0]
```

![image of beignets](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png)

The simplest way to try out your finetuned model for inference is to use it in a [pipeline()](/docs/transformers/v4.34.0/en/main_classes/pipelines#transformers.pipeline). Instantiate a `pipeline` for image classification with your model, and pass your image to it:

```
>>> from transformers import pipeline

>>> classifier = pipeline("image-classification", model="my_awesome_food_model")
>>> classifier(image)
[{'score': 0.31856709718704224, 'label': 'beignets'},
 {'score': 0.015232225880026817, 'label': 'bruschetta'},
 {'score': 0.01519392803311348, 'label': 'chicken_wings'},
 {'score': 0.013022331520915031, 'label': 'pork_chop'},
 {'score': 0.012728818692266941, 'label': 'prime_rib'}]
```

You can also manually replicate the results of the `pipeline` if you’d like:

Load an image processor to preprocess the image and return the `input` as PyTorch tensors:

```
>>> from transformers import AutoImageProcessor
>>> import torch

>>> image_processor = AutoImageProcessor.from_pretrained("my_awesome_food_model")
>>> inputs = image_processor(image, return_tensors="pt")
```

Pass your inputs to the model and return the logits:

```
>>> from transformers import AutoModelForImageClassification

>>> model = AutoModelForImageClassification.from_pretrained("my_awesome_food_model")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
```

Get the predicted label with the highest probability, and use the model’s `id2label` mapping to convert it to a label:

```
>>> predicted_label = logits.argmax(-1).item()
>>> model.config.id2label[predicted_label]
'beignets'
```

Load an image processor to preprocess the image and return the `input` as TensorFlow tensors:

```
>>> from transformers import AutoImageProcessor

>>> image_processor = AutoImageProcessor.from_pretrained("MariaK/food_classifier")
>>> inputs = image_processor(image, return_tensors="tf")
```

Pass your inputs to the model and return the logits:

```
>>> from transformers import TFAutoModelForImageClassification

>>> model = TFAutoModelForImageClassification.from_pretrained("MariaK/food_classifier")
>>> logits = model(**inputs).logits
```

Get the predicted label with the highest probability, and use the model’s `id2label` mapping to convert it to a label:

```
>>> predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
>>> model.config.id2label[predicted_class_id]
'beignets'
```
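The `pipeline` above returned several candidate labels with scores. You can recover a similar ranking from the raw logits by applying a softmax and taking the top predictions. This is a minimal sketch (not part of the original guide), reusing the `logits` and `model` from the manual PyTorch example above:

```
>>> import torch

>>> # convert logits to probabilities and keep the 5 highest-scoring classes
>>> probs = torch.nn.functional.softmax(logits, dim=-1)[0]
>>> top5 = torch.topk(probs, k=5)
>>> [(model.config.id2label[idx.item()], round(p.item(), 3)) for p, idx in zip(top5.values, top5.indices)]
```

The scores printed by the `pipeline` are computed the same way, so the top entry should match the `'beignets'` prediction from the argmax step.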
scalability&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Overview&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;performance&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/performance&quot;},{&quot;title&quot;:&quot;Efficient training techniques&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Methods and tools for efficient training on a single GPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_gpu_one&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_gpu_one&quot;},{&quot;title&quot;:&quot;Multiple GPUs and parallelism&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_gpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_gpu_many&quot;},{&quot;title&quot;:&quot;Efficient training on CPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_cpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_cpu&quot;},{&quot;title&quot;:&quot;Distributed CPU training&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_cpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_cpu_many&quot;},{&quot;title&quot;:&quot;Training on TPUs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_tpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_tpu&quot;},{&quot;title&quot;:&quot;Training on TPU with TensorFlow&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_tpu_tf&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_tpu_tf&quot;},{&quot;title&quot;:&quot;Training on Specialized Hardware&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_special&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_special&quot;},{&quot;title&quot;:&quot;Custom hardware for training&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_hardware&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_hardware&quot;},{&quot;title&quot;:&quot;Hyperparameter Search using Trainer API&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;hpo_train&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/hpo_train&quot;}]},{&quot;title&quot;:&quot;Optimizing inference&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Inference on CPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_cpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_cpu&quot;},{&quot;title&quot;:&quot;Inference on one GPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_gpu_one&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_gpu_one&quot;},{&quot;title&quot;:&quot;Inference on many GPUs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_gpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_gpu_many&quot;},{&quot;title&quot;:&quot;Inference on Specialized Hardware&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_special&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_special&quot;}]},{&quot;title&quot;:&quot;Instantiating a big 
model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;big_models&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/big_models&quot;},{&quot;title&quot;:&quot;Troubleshooting&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;debugging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/debugging&quot;},{&quot;title&quot;:&quot;XLA Integration for TensorFlow Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tf_xla&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tf_xla&quot;},{&quot;title&quot;:&quot;Optimize inference using `torch.compile()`&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_torch_compile&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_torch_compile&quot;}]},{&quot;title&quot;:&quot;Contribute&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;How to contribute to transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;contributing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/contributing&quot;},{&quot;title&quot;:&quot;How to add a model to 🤗 Transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_new_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_new_model&quot;},{&quot;title&quot;:&quot;How to convert a 🤗 Transformers model to TensorFlow?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_tensorflow_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_tensorflow_model&quot;},{&quot;title&quot;:&quot;How to add a pipeline to 🤗 Transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_new_pipeline&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_new_pipeline&quot;},{&quot;title&quot;:&quot;Testing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;testing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/testing&quot;},{&quot;title&quot;:&quot;Checks on a Pull Request&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pr_checks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pr_checks&quot;}]},{&quot;title&quot;:&quot;Conceptual guides&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Philosophy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;philosophy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/philosophy&quot;},{&quot;title&quot;:&quot;Glossary&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;glossary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/glossary&quot;},{&quot;title&quot;:&quot;What 🤗 Transformers can do&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;task_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/task_summary&quot;},{&quot;title&quot;:&quot;How 🤗 Transformers solve tasks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tasks_explained&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks_explained&quot;},{&quot;title&quot;:&quot;The Transformer model family&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_summary&quot;},{&quot;title&quot;:&quot;Summary of the tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tokenizer_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tokenizer_summary&quot;},{&quot;title&quot;:&quot;Attention 
mechanisms&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;attention&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/attention&quot;},{&quot;title&quot;:&quot;Padding and truncation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pad_truncation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pad_truncation&quot;},{&quot;title&quot;:&quot;BERTology&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;bertology&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/bertology&quot;},{&quot;title&quot;:&quot;Perplexity of fixed-length models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perplexity&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perplexity&quot;},{&quot;title&quot;:&quot;Pipelines for webserver inference&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pipeline_webserver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pipeline_webserver&quot;},{&quot;title&quot;:&quot;Model training anatomy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_memory_anatomy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_memory_anatomy&quot;}]},{&quot;title&quot;:&quot;API&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Main Classes&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Agents and Tools&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/agent&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/agent&quot;},{&quot;title&quot;:&quot;Auto Classes&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/auto&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/auto&quot;},{&quot;title&quot;:&quot;Callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/callback&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/callback&quot;},{&quot;title&quot;:&quot;Configuration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/configuration&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/configuration&quot;},{&quot;title&quot;:&quot;Data Collator&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/data_collator&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/data_collator&quot;},{&quot;title&quot;:&quot;Keras callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/keras_callbacks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/keras_callbacks&quot;},{&quot;title&quot;:&quot;Logging&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/logging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/logging&quot;},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/model&quot;},{&quot;title&quot;:&quot;Text 
Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/text_generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/text_generation&quot;},{&quot;title&quot;:&quot;ONNX&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/onnx&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/onnx&quot;},{&quot;title&quot;:&quot;Optimization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/optimizer_schedules&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/optimizer_schedules&quot;},{&quot;title&quot;:&quot;Model outputs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/output&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/output&quot;},{&quot;title&quot;:&quot;Pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/pipelines&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/pipelines&quot;},{&quot;title&quot;:&quot;Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/processors&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/processors&quot;},{&quot;title&quot;:&quot;Quantization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/quantization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/quantization&quot;},{&quot;title&quot;:&quot;Tokenizer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/tokenizer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/tokenizer&quot;},{&quot;title&quot;:&quot;Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/trainer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/trainer&quot;},{&quot;title&quot;:&quot;DeepSpeed Integration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/deepspeed&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/deepspeed&quot;},{&quot;title&quot;:&quot;Feature Extractor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/feature_extractor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/feature_extractor&quot;},{&quot;title&quot;:&quot;Image Processor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/image_processor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/image_processor&quot;}]},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Text 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALBERT&quot;,&quot;id&quot;:&quot;model_doc/albert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/albert&quot;},{&quot;title&quot;:&quot;BART&quot;,&quot;id&quot;:&quot;model_doc/bart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bart&quot;},{&quot;title&quot;:&quot;BARThez&quot;,&quot;id&quot;:&quot;model_doc/barthez&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/barthez&quot;},{&quot;title&quot;:&quot;BARTpho&quot;,&quot;id&quot;:&quot;model_doc/bartpho&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bartpho&quot;},{&quot;title&quot;:&quot;BERT&quot;,&quot;id&quot;:&quot;model_doc/bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert&quot;},{&quot;title&quot;:&quot;BertGeneration&quot;,&quot;id&quot;:&quot;model_doc/bert-generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-generation&quot;},{&quot;title&quot;:&quot;BertJapanese&quot;,&quot;id&quot;:&quot;model_doc/bert-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-japanese&quot;},{&quot;title&quot;:&quot;Bertweet&quot;,&quot;id&quot;:&quot;model_doc/bertweet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bertweet&quot;},{&quot;title&quot;:&quot;BigBird&quot;,&quot;id&quot;:&quot;model_doc/big_bird&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/big_bird&quot;},{&quot;title&quot;:&quot;BigBirdPegasus&quot;,&quot;id&quot;:&quot;model_doc/bigbird_pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bigbird_pegasus&quot;},{&quot;title&quot;:&quot;BioGpt&quot;,&quot;id&quot;:&quot;model_doc/biogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/biogpt&quot;},{&quot;title&quot;:&quot;Blenderbot&quot;,&quot;id&quot;:&quot;model_doc/blenderbot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot&quot;},{&quot;title&quot;:&quot;Blenderbot 
Small&quot;,&quot;id&quot;:&quot;model_doc/blenderbot-small&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot-small&quot;},{&quot;title&quot;:&quot;BLOOM&quot;,&quot;id&quot;:&quot;model_doc/bloom&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bloom&quot;},{&quot;title&quot;:&quot;BORT&quot;,&quot;id&quot;:&quot;model_doc/bort&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bort&quot;},{&quot;title&quot;:&quot;ByT5&quot;,&quot;id&quot;:&quot;model_doc/byt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/byt5&quot;},{&quot;title&quot;:&quot;CamemBERT&quot;,&quot;id&quot;:&quot;model_doc/camembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/camembert&quot;},{&quot;title&quot;:&quot;CANINE&quot;,&quot;id&quot;:&quot;model_doc/canine&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/canine&quot;},{&quot;title&quot;:&quot;CodeGen&quot;,&quot;id&quot;:&quot;model_doc/codegen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/codegen&quot;},{&quot;title&quot;:&quot;CodeLlama&quot;,&quot;id&quot;:&quot;model_doc/code_llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/code_llama&quot;},{&quot;title&quot;:&quot;ConvBERT&quot;,&quot;id&quot;:&quot;model_doc/convbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convbert&quot;},{&quot;title&quot;:&quot;CPM&quot;,&quot;id&quot;:&quot;model_doc/cpm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpm&quot;},{&quot;title&quot;:&quot;CPMANT&quot;,&quot;id&quot;:&quot;model_doc/cpmant&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpmant&quot;},{&quot;title&quot;:&quot;CTRL&quot;,&quot;id&quot;:&quot;model_doc/ctrl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ctrl&quot;},{&quot;title&quot;:&quot;DeBERTa&quot;,&quot;id&quot;:&quot;model_doc/deberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta&quot;},{&quot;title&quot;:&quot;DeBERTa-v2&quot;,&quot;id&quot;:&quot;model_doc/deberta-v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta-v2&quot;},{&quot;title&quot;:&quot;DialoGPT&quot;,&quot;id&quot;:&quot;model_doc/dialogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dialogpt&quot;},{&quot;title&quot;:&quot;DistilBERT&quot;,&quot;id&quot;:&quot;model_doc/distilbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/distilbert&quot;},{&quot;title&quot;:&quot;DPR&quot;,&quot;id&quot;:&quot;model_doc/dpr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpr&quot;},{&quot;title&quot;:&quot;ELECTRA&quot;,&quot;id&quot;:&quot;model_doc/electra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/electra&quot;},{&quot;title&quot;:&quot;Encoder Decoder 
Models&quot;,&quot;id&quot;:&quot;model_doc/encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encoder-decoder&quot;},{&quot;title&quot;:&quot;ERNIE&quot;,&quot;id&quot;:&quot;model_doc/ernie&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie&quot;},{&quot;title&quot;:&quot;ErnieM&quot;,&quot;id&quot;:&quot;model_doc/ernie_m&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie_m&quot;},{&quot;title&quot;:&quot;ESM&quot;,&quot;id&quot;:&quot;model_doc/esm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/esm&quot;},{&quot;title&quot;:&quot;Falcon&quot;,&quot;id&quot;:&quot;model_doc/falcon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/falcon&quot;},{&quot;title&quot;:&quot;FLAN-T5&quot;,&quot;id&quot;:&quot;model_doc/flan-t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-t5&quot;},{&quot;title&quot;:&quot;FLAN-UL2&quot;,&quot;id&quot;:&quot;model_doc/flan-ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-ul2&quot;},{&quot;title&quot;:&quot;FlauBERT&quot;,&quot;id&quot;:&quot;model_doc/flaubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flaubert&quot;},{&quot;title&quot;:&quot;FNet&quot;,&quot;id&quot;:&quot;model_doc/fnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fnet&quot;},{&quot;title&quot;:&quot;FSMT&quot;,&quot;id&quot;:&quot;model_doc/fsmt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fsmt&quot;},{&quot;title&quot;:&quot;Funnel Transformer&quot;,&quot;id&quot;:&quot;model_doc/funnel&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/funnel&quot;},{&quot;title&quot;:&quot;GPT&quot;,&quot;id&quot;:&quot;model_doc/openai-gpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/openai-gpt&quot;},{&quot;title&quot;:&quot;GPT Neo&quot;,&quot;id&quot;:&quot;model_doc/gpt_neo&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neo&quot;},{&quot;title&quot;:&quot;GPT NeoX&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox&quot;},{&quot;title&quot;:&quot;GPT NeoX Japanese&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox_japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese&quot;},{&quot;title&quot;:&quot;GPT-J&quot;,&quot;id&quot;:&quot;model_doc/gptj&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptj&quot;},{&quot;title&quot;:&quot;GPT2&quot;,&quot;id&quot;:&quot;model_doc/gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt2&quot;},{&quot;title&quot;:&quot;GPTBigCode&quot;,&quot;id&quot;:&quot;model_doc/gpt_bigcode&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode&quot;},{&quot;title&quot;:&quot;GPTSAN 
Japanese&quot;,&quot;id&quot;:&quot;model_doc/gptsan-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese&quot;},{&quot;title&quot;:&quot;GPTSw3&quot;,&quot;id&quot;:&quot;model_doc/gpt-sw3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt-sw3&quot;},{&quot;title&quot;:&quot;HerBERT&quot;,&quot;id&quot;:&quot;model_doc/herbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/herbert&quot;},{&quot;title&quot;:&quot;I-BERT&quot;,&quot;id&quot;:&quot;model_doc/ibert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ibert&quot;},{&quot;title&quot;:&quot;Jukebox&quot;,&quot;id&quot;:&quot;model_doc/jukebox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/jukebox&quot;},{&quot;title&quot;:&quot;LED&quot;,&quot;id&quot;:&quot;model_doc/led&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/led&quot;},{&quot;title&quot;:&quot;LLaMA&quot;,&quot;id&quot;:&quot;model_doc/llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama&quot;},{&quot;title&quot;:&quot;Llama2&quot;,&quot;id&quot;:&quot;model_doc/llama2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama2&quot;},{&quot;title&quot;:&quot;Longformer&quot;,&quot;id&quot;:&quot;model_doc/longformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longformer&quot;},{&quot;title&quot;:&quot;LongT5&quot;,&quot;id&quot;:&quot;model_doc/longt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longt5&quot;},{&quot;title&quot;:&quot;LUKE&quot;,&quot;id&quot;:&quot;model_doc/luke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/luke&quot;},{&quot;title&quot;:&quot;M2M100&quot;,&quot;id&quot;:&quot;model_doc/m2m_100&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/m2m_100&quot;},{&quot;title&quot;:&quot;MarianMT&quot;,&quot;id&quot;:&quot;model_doc/marian&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/marian&quot;},{&quot;title&quot;:&quot;MarkupLM&quot;,&quot;id&quot;:&quot;model_doc/markuplm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/markuplm&quot;},{&quot;title&quot;:&quot;MBart and 
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot;PLBart&quot;,&quot;id&quot;
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/
model_doc/wavlm&quot;},{&quot;title&quot;:&quot;Whisper&quot;,&quot;id&quot;:&quot;model_doc/whisper&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/whisper&quot;},{&quot;title&quot;:&quot;XLS-R&quot;,&quot;id&quot;:&quot;model_doc/xls_r&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xls_r&quot;},{&quot;title&quot;:&quot;XLSR-Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/xlsr_wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2&quot;}]},{&quot;title&quot;:&quot;Multimodal models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALIGN&quot;,&quot;id&quot;:&quot;model_doc/align&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/align&quot;},{&quot;title&quot;:&quot;AltCLIP&quot;,&quot;id&quot;:&quot;model_doc/altclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/altclip&quot;},{&quot;title&quot;:&quot;BLIP&quot;,&quot;id&quot;:&quot;model_doc/blip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip&quot;},{&quot;title&quot;:&quot;BLIP-2&quot;,&quot;id&quot;:&quot;model_doc/blip-2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip-2&quot;},{&quot;title&quot;:&quot;BridgeTower&quot;,&quot;id&quot;:&quot;model_doc/bridgetower&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bridgetower&quot;},{&quot;title&quot;:&quot;BROS&quot;,&quot;id&quot;:&quot;model_doc/bros&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bros&quot;},{&quot;title&quot;:&quot;Chinese-CLIP&quot;,&quot;id&quot;:&quot;model_doc/chinese_clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/chinese_clip&quot;},{&quot;title&quot;:&quot;CLIP&quot;,&quot;id&quot;:&quot;model_doc/clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clip&quot;},{&quot;title&quot;:&quot;CLIPSeg&quot;,&quot;id&quot;:&quot;model_doc/clipseg&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clipseg&quot;},{&quot;title&quot;:&quot;Data2Vec&quot;,&quot;id&quot;:&quot;model_doc/data2vec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/data2vec&quot;},{&quot;title&quot;:&quot;DePlot&quot;,&quot;id&quot;:&quot;model_doc/deplot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deplot&quot;},{&quot;title&quot;:&quot;Donut&quot;,&quot;id&quot;:&quot;model_doc/donut&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/donut&quot;},{&quot;title&quot;:&quot;FLAVA&quot;,&quot;id&quot;:&quot;model_doc/flava&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flava&quot;},{&quot;title&quot;:&quot;GIT&quot;,&quot;id&quot;:&quot;model_doc/git&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/git&quot;},{&quot;title&quot;:&quot;GroupViT&quot;,&quot;id&quot;:&quot;model_doc/groupvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/groupvit&quot;},{&quot;title&quot;:&quot;IDEFICS&quot;,&quot;id&quot;:&quot;model_doc/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/idefics&quot;},{&quot;title&quot;:&quot;InstructBLIP&quot;,&quot;id&quot;:&quot;model_doc/instructblip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/instructblip&quot;},{&quot;title&quot;:&quot;LayoutLM&quot;,&quot;id&quot;:&quot;model_doc/layoutlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlm&quot;},{&quot;title&quot;:&quot;LayoutLMV2&quot;,&quot;i
# Image classification

[YouTube video player](https://www.youtube-nocookie.com/embed/tjAIM7BOYhw)

Image classification assigns a label or class to an image. Unlike text or audio classification, the inputs are the pixel values that comprise an image. There are many applications for image classification, such as detecting damage after a natural disaster, monitoring crop health, or helping screen medical images for signs of disease.

This guide illustrates how to:

1. Fine-tune [ViT](model_doc/vit) on the [Food-101](https://huggingface.co/datasets/food101) dataset to classify a food item in an image.
2. Use your fine-tuned model for inference.

The task illustrated in this tutorial is supported by the following model architectures:

[BEiT](../model_doc/beit), [BiT](../model_doc/bit), [ConvNeXT](../model_doc/convnext), [ConvNeXTV2](../model_doc/convnextv2), [CvT](../model_doc/cvt), [Data2VecVision](../model_doc/data2vec-vision), [DeiT](../model_doc/deit), [DiNAT](../model_doc/dinat), [DINOv2](../model_doc/dinov2), [EfficientFormer](../model_doc/efficientformer), [EfficientNet](../model_doc/efficientnet), [FocalNet](../model_doc/focalnet), [ImageGPT](../model_doc/imagegpt), [LeViT](../model_doc/levit), [MobileNetV1](../model_doc/mobilenet_v1), [MobileNetV2](../model_doc/mobilenet_v2), [MobileViT](../model_doc/mobilevit), [MobileViTV2](../model_doc/mobilevitv2), [NAT](../model_doc/nat), [Perceiver](../model_doc/perceiver), [PoolFormer](../model_doc/poolformer), [PVT](../model_doc/pvt), [RegNet](../model_doc/regnet), [ResNet](../model_doc/resnet), [SegFormer](../model_doc/segformer), [SwiftFormer](../model_doc/swiftformer), [Swin Transformer](../model_doc/swin), [Swin Transformer V2](../model_doc/swinv2),
href="../model_doc/van">VAN</a>, <a href="../model_doc/vit">ViT</a>, <a href="../model_doc/vit_hybrid">ViT Hybrid</a>, <a href="../model_doc/vit_msn">ViTMSN</a></p></div> <p data-svelte-h="svelte-1c9nexd">Before you begin, make sure you have all the necessary libraries installed:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class="">pip install transformers datasets evaluate</pre></div> <p data-svelte-h="svelte-yib87s">We encourage you to log in to your Hugging Face account to upload and share your model with the community. When prompted, enter your token to log in:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> huggingface_hub <span class="hljs-keyword">import</span> notebook_login <span class="hljs-meta">&gt;&gt;&gt; </span>notebook_login()</pre></div> <h2 class="relative group"><a id="load-food101-dataset" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#load-food101-dataset"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" 
preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-tl19vq">Load Food-101 dataset</span></h2> <p data-svelte-h="svelte-1cr1dw1">Start by loading a smaller subset of the Food-101 dataset from the 🤗 Datasets library. This will give you a chance to experiment and make sure everything works before spending more time training on the full dataset.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> datasets <span class="hljs-keyword">import</span> load_dataset <span class="hljs-meta">&gt;&gt;&gt; </span>food = load_dataset(<span class="hljs-string">"food101"</span>, split=<span class="hljs-string">"train[:5000]"</span>)</pre></div> <p data-svelte-h="svelte-1izknij">Split the dataset’s <code>train</code> split into a train and test set with the <a href="https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.train_test_split" rel="nofollow">train_test_split</a> method:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black 
```py
>>> food = food.train_test_split(test_size=0.2)
```

Then take a look at an example:

```py
>>> food["train"][0]
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=512x512 at 0x7F52AFC8AC50>,
 'label': 79}
```

Each example in the dataset has two fields:

- `image`: a PIL image of the food item
- `label`: the label class of the food item

To make it easier for the model to get the label name from the label id, create a dictionary that maps the label name to an integer and vice versa:
```py
>>> labels = food["train"].features["label"].names
>>> label2id, id2label = dict(), dict()
>>> for i, label in enumerate(labels):
...     label2id[label] = str(i)
...     id2label[str(i)] = label
```

Now you can convert the label id to a label name:

```py
>>> id2label[str(79)]
'prime_rib'
```

## Preprocess

The next step is to load a ViT image processor to process the image into a tensor:
relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoImageProcessor <span class="hljs-meta">&gt;&gt;&gt; </span>checkpoint = <span class="hljs-string">"google/vit-base-patch16-224-in21k"</span> <span class="hljs-meta">&gt;&gt;&gt; </span>image_processor = AutoImageProcessor.from_pretrained(checkpoint)</pre></div> <div class="space-y-10 py-6 2xl:py-8 2xl:-mx-4"><div class="border border-gray-200 rounded-xl px-4 relative"><div class="flex h-[22px] mt-[-12.5px] justify-between leading-none"><div class="flex px-1 items-center space-x-1 bg-white dark:bg-gray-950"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><defs><clipPath id="a"><rect x="3.05" y="0.5" width="25.73" height="31" fill="none"></rect></clipPath></defs><g clip-path="url(#a)"><path d="M24.94,9.51a12.81,12.81,0,0,1,0,18.16,12.68,12.68,0,0,1-18,0,12.81,12.81,0,0,1,0-18.16l9-9V5l-.84.83-6,6a9.58,9.58,0,1,0,13.55,0ZM20.44,9a1.68,1.68,0,1,1,1.67-1.67A1.68,1.68,0,0,1,20.44,9Z" fill="#ee4c2c"></path></g></svg> <span>Pytorch</span></div> <div class="cursor-pointer flex items-center justify-center space-x-1 text-sm px-2 bg-white dark:bg-gray-950 hover:underline leading-none"><svg class="" width="0.9em" height="0.9em" viewBox="0 0 10 9" fill="currentColor" xmlns="http://www.w3.org/2000/svg"><path d="M1.39125 1.9725L0.0883333 0.669997L0.677917 0.0804138L8.9275 8.33041L8.33792 8.91958L6.95875 7.54041C6.22592 8.00523 5.37572 8.25138 4.50792 8.25C2.26125 8.25 0.392083 6.63333 0 4.5C0.179179 3.52946 0.667345 2.64287 1.39167 1.9725H1.39125ZM5.65667 6.23833L5.04667 5.62833C4.81335 5.73996 4.55116 5.77647 4.29622 5.73282C4.04129 5.68918 3.80617 5.56752 3.62328 5.38463C3.44039 5.20175 3.31874 4.96663 3.27509 4.71169C3.23144 4.45676 3.26795 4.19456 3.37958 3.96125L2.76958 3.35125C2.50447 3.75187 2.38595 4.2318 2.4341 4.70978C2.48225 5.18777 2.6941 5.63442 3.0338 5.97411C3.37349 6.31381 3.82015 6.52567 4.29813 6.57382C4.77611 6.62197 5.25605 6.50345 5.65667 6.23833ZM2.83042 1.06666C3.35 0.862497 3.91625 0.749997 4.50792 0.749997C6.75458 0.749997 8.62375 2.36666 9.01583 4.5C8.88816 5.19404 8.60119 5.84899 8.1775 6.41333L6.56917 4.805C6.61694 4.48317 6.58868 
**PyTorch**

Apply some image transformations to the images to make the model more robust against overfitting. Here you'll use torchvision's [`transforms`](https://pytorch.org/vision/stable/transforms.html) module, but you can also use any image library you like.

Crop a random part of the image, resize it, and normalize it with the image mean and standard deviation:

```py
>>> from torchvision.transforms import RandomResizedCrop, Compose, Normalize, ToTensor

>>> normalize = Normalize(mean=image_processor.image_mean, std=image_processor.image_std)
>>> size = (
...     image_processor.size["shortest_edge"]
...     if "shortest_edge" in image_processor.size
...     else (image_processor.size["height"], image_processor.size["width"])
... )
>>> _transforms = Compose([RandomResizedCrop(size), ToTensor(), normalize])
```
</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>_transforms = Compose([RandomResizedCrop(size), ToTensor(), normalize])</pre></div> <p data-svelte-h="svelte-q25jfj">Then create a preprocessing function to apply the transforms and return the <code>pixel_values</code> - the inputs to the model - of the image:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">def</span> <span class="hljs-title function_">transforms</span>(<span class="hljs-params">examples</span>): <span class="hljs-meta">... </span> examples[<span class="hljs-string">"pixel_values"</span>] = [_transforms(img.convert(<span class="hljs-string">"RGB"</span>)) <span class="hljs-keyword">for</span> img <span class="hljs-keyword">in</span> examples[<span class="hljs-string">"image"</span>]] <span class="hljs-meta">... </span> <span class="hljs-keyword">del</span> examples[<span class="hljs-string">"image"</span>] <span class="hljs-meta">... </span> <span class="hljs-keyword">return</span> examples</pre></div> <p data-svelte-h="svelte-ascyzf">To apply the preprocessing function over the entire dataset, use 🤗 Datasets <a href="https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.with_transform" rel="nofollow">with_transform</a> method. 
To apply the preprocessing function over the entire dataset, use 🤗 Datasets [with_transform](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.with_transform) method. The transforms are applied on the fly when you load an element of the dataset:

```py
>>> food = food.with_transform(transforms)
```

Now create a batch of examples using [DefaultDataCollator](/docs/transformers/v4.34.0/en/main_classes/data_collator#transformers.DefaultDataCollator). Unlike other data collators in 🤗 Transformers, the `DefaultDataCollator` does not apply additional preprocessing such as padding.

```py
>>> from transformers import DefaultDataCollator

>>> data_collator = DefaultDataCollator()
```
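As an optional sanity check (an illustrative sketch, not part of the original recipe), you can collate a couple of transformed examples and inspect the batch the model will receive:

```py
>>> # hedged sketch: the collator stacks pixel_values and renames "label" to "labels"
>>> batch = data_collator([food["train"][i] for i in range(2)])
>>> {k: v.shape for k, v in batch.items()}  # expected: labels -> (2,), pixel_values -> (2, 3, 224, 224)
```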
width="0.94em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 274"><path d="M145.726 42.065v42.07l72.861 42.07v-42.07l-72.86-42.07zM0 84.135v42.07l36.43 21.03V105.17L0 84.135zm109.291 21.035l-36.43 21.034v126.2l36.43 21.035v-84.135l36.435 21.035v-42.07l-36.435-21.034V105.17z" fill="#E55B2D"></path><path d="M145.726 42.065L36.43 105.17v42.065l72.861-42.065v42.065l36.435-21.03v-84.14zM255.022 63.1l-36.435 21.035v42.07l36.435-21.035V63.1zm-72.865 84.135l-36.43 21.035v42.07l36.43-21.036v-42.07zm-36.43 63.104l-36.436-21.035v84.135l36.435-21.035V210.34z" fill="#ED8E24"></path><path d="M145.726 0L0 84.135l36.43 21.035l109.296-63.105l72.861 42.07L255.022 63.1L145.726 0zm0 126.204l-36.435 21.03l36.435 21.036l36.43-21.035l-36.43-21.03z" fill="#F8BF3C"></path></svg> <span>TensorFlow</span></div> <div class="cursor-pointer flex items-center justify-center space-x-1 text-sm px-2 bg-white dark:bg-gray-950 hover:underline leading-none"><svg class="" width="0.9em" height="0.9em" viewBox="0 0 10 9" fill="currentColor" xmlns="http://www.w3.org/2000/svg"><path d="M1.39125 1.9725L0.0883333 0.669997L0.677917 0.0804138L8.9275 8.33041L8.33792 8.91958L6.95875 7.54041C6.22592 8.00523 5.37572 8.25138 4.50792 8.25C2.26125 8.25 0.392083 6.63333 0 4.5C0.179179 3.52946 0.667345 2.64287 1.39167 1.9725H1.39125ZM5.65667 6.23833L5.04667 5.62833C4.81335 5.73996 4.55116 5.77647 4.29622 5.73282C4.04129 5.68918 3.80617 5.56752 3.62328 5.38463C3.44039 5.20175 3.31874 4.96663 3.27509 4.71169C3.23144 4.45676 3.26795 4.19456 3.37958 3.96125L2.76958 3.35125C2.50447 3.75187 2.38595 4.2318 2.4341 4.70978C2.48225 5.18777 2.6941 5.63442 3.0338 5.97411C3.37349 6.31381 3.82015 6.52567 4.29813 6.57382C4.77611 6.62197 5.25605 6.50345 5.65667 6.23833ZM2.83042 1.06666C3.35 0.862497 3.91625 0.749997 4.50792 0.749997C6.75458 0.749997 8.62375 2.36666 9.01583 4.5C8.88816 5.19404 8.60119 5.84899 8.1775 6.41333L6.56917 4.805C6.61694 4.48317 6.58868 4.15463 6.48664 3.84569C6.3846 3.53675 6.21162 3.256 5.98156 3.02594C5.7515 2.79588 5.47075 2.6229 5.16181 2.52086C4.85287 2.41882 4.52433 2.39056 4.2025 2.43833L2.83042 1.06708V1.06666Z" fill="currentColor"></path></svg> <span>Hide TensorFlow content</span></div></div> <div class="framework-content"><p data-svelte-h="svelte-134vg0e">To avoid overfitting and to make the model more robust, add some data augmentation to the training part of the dataset. Here we use Keras preprocessing layers to define the transformations for the training data (includes data augmentation), and transformations for the validation data (only center cropping, resizing and normalizing). 
```py
>>> from tensorflow import keras
>>> from tensorflow.keras import layers

>>> size = (image_processor.size["height"], image_processor.size["width"])

>>> train_data_augmentation = keras.Sequential(
...     [
...         layers.RandomCrop(size[0], size[1]),
...         layers.Rescaling(scale=1.0 / 127.5, offset=-1),
...         layers.RandomFlip("horizontal"),
...         layers.RandomRotation(factor=0.02),
...         layers.RandomZoom(height_factor=0.2, width_factor=0.2),
...     ],
...     name="train_data_augmentation",
... )

>>> val_data_augmentation = keras.Sequential(
...     [
...         layers.CenterCrop(size[0], size[1]),
...         layers.Rescaling(scale=1.0 / 127.5, offset=-1),
...     ],
...     name="val_data_augmentation",
... )
```
Next, create functions to apply appropriate transformations to a batch of images, instead of one image at a time.

```py
>>> import numpy as np
>>> import tensorflow as tf
>>> from PIL import Image

>>> def convert_to_tf_tensor(image: Image):
...     np_image = np.array(image)
...     tf_image = tf.convert_to_tensor(np_image)
...     # `expand_dims()` is used to add a batch dimension since
...     # the TF augmentation layers operate on batched inputs.
...     return tf.expand_dims(tf_image, 0)

>>> def preprocess_train(example_batch):
...     """Apply train_transforms across a batch."""
...     images = [
...         train_data_augmentation(convert_to_tf_tensor(image.convert("RGB"))) for image in example_batch["image"]
...     ]
...     example_batch["pixel_values"] = [tf.transpose(tf.squeeze(image)) for image in images]
...     return example_batch

>>> def preprocess_val(example_batch):
...     """Apply val_transforms across a batch."""
...     images = [
...         val_data_augmentation(convert_to_tf_tensor(image.convert("RGB"))) for image in example_batch["image"]
...     ]
...     example_batch["pixel_values"] = [tf.transpose(tf.squeeze(image)) for image in images]
...     return example_batch
```
</span><span class="hljs-keyword">def</span> <span class="hljs-title function_">preprocess_val</span>(<span class="hljs-params">example_batch</span>): <span class="hljs-meta">... </span> <span class="hljs-string">"""Apply val_transforms across a batch."""</span> <span class="hljs-meta">... </span> images = [ <span class="hljs-meta">... </span> val_data_augmentation(convert_to_tf_tensor(image.convert(<span class="hljs-string">"RGB"</span>))) <span class="hljs-keyword">for</span> image <span class="hljs-keyword">in</span> example_batch[<span class="hljs-string">"image"</span>] <span class="hljs-meta">... </span> ] <span class="hljs-meta">... </span> example_batch[<span class="hljs-string">"pixel_values"</span>] = [tf.transpose(tf.squeeze(image)) <span class="hljs-keyword">for</span> image <span class="hljs-keyword">in</span> images] <span class="hljs-meta">... </span> <span class="hljs-keyword">return</span> example_batch</pre></div> <p data-svelte-h="svelte-nydvlb">Use 🤗 Datasets <a href="https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.set_transform" rel="nofollow">set_transform</a> to apply the transformations on the fly:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class="">food[<span class="hljs-string">"train"</span>].set_transform(preprocess_train) food[<span class="hljs-string">"test"</span>].set_transform(preprocess_val)</pre></div> <p data-svelte-h="svelte-j9ih75">As a final preprocessing step, create a batch of examples using <code>DefaultDataCollator</code>. 
As a final preprocessing step, create a batch of examples using `DefaultDataCollator`. Unlike other data collators in 🤗 Transformers, the `DefaultDataCollator` does not apply additional preprocessing, such as padding.

```py
>>> from transformers import DefaultDataCollator

>>> data_collator = DefaultDataCollator(return_tensors="tf")
```

## Evaluate

Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library.
For this task, load the [accuracy](https://huggingface.co/spaces/evaluate-metric/accuracy) metric (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric):

```py
>>> import evaluate

>>> accuracy = evaluate.load("accuracy")
```

Then create a function that passes your predictions and labels to [compute](https://huggingface.co/docs/evaluate/v0.4.0/en/package_reference/main_classes#evaluate.EvaluationModule.compute) to calculate the accuracy:

```py
>>> import numpy as np

>>> def compute_metrics(eval_pred):
...     predictions, labels = eval_pred
...     predictions = np.argmax(predictions, axis=1)
...     return accuracy.compute(predictions=predictions, references=labels)
```
Your `compute_metrics` function is ready to go now, and you'll return to it when you set up your training.
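If you want to check the function in isolation (an illustrative sketch with made-up values, not part of the original guide), you can call it on a dummy prediction tuple:

```py
>>> # hedged sketch: two fake "logit" rows and their true labels
>>> dummy_logits = np.array([[0.1, 0.9], [0.8, 0.2]])
>>> dummy_labels = np.array([1, 0])
>>> compute_metrics((dummy_logits, dummy_labels))
{'accuracy': 1.0}
```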
## Train

**PyTorch**

If you aren't familiar with finetuning a model with the [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer), take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!

You're ready to start training your model now! Load ViT with [AutoModelForImageClassification](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoModelForImageClassification). Specify the number of expected labels along with the label mappings:

```py
>>> from transformers import AutoModelForImageClassification, TrainingArguments, Trainer

>>> model = AutoModelForImageClassification.from_pretrained(
...     checkpoint,
...     num_labels=len(labels),
...     id2label=id2label,
...     label2id=label2id,
... )
```

At this point, only three steps remain:

1. Define your training hyperparameters in [TrainingArguments](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.TrainingArguments). It is important you don't remove unused columns because that'll drop the `image` column. Without the `image` column, you can't create `pixel_values`. Set `remove_unused_columns=False` to prevent this behavior! The only other required parameter is `output_dir`, which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) will evaluate the accuracy and save the training checkpoint.
2. Pass the training arguments to [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) along with the model, dataset, tokenizer, data collator, and `compute_metrics` function.
3. Call [train()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.train) to finetune your model.
```py
>>> training_args = TrainingArguments(
...     output_dir="my_awesome_food_model",
...     remove_unused_columns=False,
...     evaluation_strategy="epoch",
...     save_strategy="epoch",
...     learning_rate=5e-5,
...     per_device_train_batch_size=16,
...     gradient_accumulation_steps=4,
...     per_device_eval_batch_size=16,
...     num_train_epochs=3,
...     warmup_ratio=0.1,
...     logging_steps=10,
...     load_best_model_at_end=True,
...     metric_for_best_model="accuracy",
...     push_to_hub=True,
... )

>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     data_collator=data_collator,
...     train_dataset=food["train"],
...     eval_dataset=food["test"],
...     tokenizer=image_processor,
...     compute_metrics=compute_metrics,
... )

>>> trainer.train()
```
Once training is completed, share your model to the Hub with the [push_to_hub()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.push_to_hub) method so everyone can use your model:

```py
>>> trainer.push_to_hub()
```
If you are unfamiliar with fine-tuning a model with Keras, check out the [basic tutorial](./training#train-a-tensorflow-model-with-keras) first!

To fine-tune a model in TensorFlow, follow these steps:

1. Define the training hyperparameters, and set up an optimizer and a learning rate schedule.
2. Instantiate a pre-trained model.
3. Convert a 🤗 Dataset to a `tf.data.Dataset`.
4. Compile your model.
5. Add callbacks and use the `fit()` method to run the training.
6. Upload your model to 🤗 Hub to share with the community.

Start by defining the hyperparameters, optimizer and learning rate schedule:

```
>>> from transformers import create_optimizer

>>> batch_size = 16
>>> num_epochs = 5
>>> num_train_steps = len(food["train"]) * num_epochs
>>> learning_rate = 3e-5
>>> weight_decay_rate = 0.01

>>> optimizer, lr_schedule = create_optimizer(
...     init_lr=learning_rate,
...     num_train_steps=num_train_steps,
...     weight_decay_rate=weight_decay_rate,
...     num_warmup_steps=0,
... )
```

Then, load ViT with [TFAutoModelForImageClassification](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.TFAutoModelForImageClassification) along with the label mappings:

```
>>> from transformers import TFAutoModelForImageClassification

>>> model = TFAutoModelForImageClassification.from_pretrained(
...     checkpoint,
...     id2label=id2label,
...     label2id=label2id,
... )
```

Convert your datasets to the `tf.data.Dataset` format using the [to_tf_dataset](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.to_tf_dataset) and your `data_collator`:

```
>>> # converting our train dataset to tf.data.Dataset
>>> tf_train_dataset = food["train"].to_tf_dataset(
...     columns="pixel_values", label_cols="label", shuffle=True, batch_size=batch_size, collate_fn=data_collator
... )

>>> # converting our test dataset to tf.data.Dataset
>>> tf_eval_dataset = food["test"].to_tf_dataset(
...     columns="pixel_values", label_cols="label", shuffle=True, batch_size=batch_size, collate_fn=data_collator
... )
```

Configure the model for training with `compile()`:

```
>>> from tensorflow.keras.losses import SparseCategoricalCrossentropy

>>> loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
>>> model.compile(optimizer=optimizer, loss=loss)
```

To compute the accuracy from the predictions and push your model to the 🤗 Hub, use [Keras callbacks](../main_classes/keras_callbacks). Pass your `compute_metrics` function to [KerasMetricCallback](../main_classes/keras_callbacks#transformers.KerasMetricCallback), and use the [PushToHubCallback](../main_classes/keras_callbacks#transformers.PushToHubCallback) to upload the model:

```
>>> from transformers.keras_callbacks import KerasMetricCallback, PushToHubCallback

>>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_eval_dataset)

>>> push_to_hub_callback = PushToHubCallback(
...     output_dir="food_classifier",
...     tokenizer=image_processor,
...     save_strategy="no",
... )

>>> callbacks = [metric_callback, push_to_hub_callback]
```

Finally, you are ready to train your model! Call `fit()` with your training and validation datasets, the number of epochs, and your callbacks to fine-tune the model:

```
>>> model.fit(tf_train_dataset, validation_data=tf_eval_dataset, epochs=num_epochs, callbacks=callbacks)
Epoch 1/5
250/250 [==============================] - 313s 1s/step - loss: 2.5623 - val_loss: 1.4161 - accuracy: 0.9290
Epoch 2/5
250/250 [==============================] - 265s 1s/step - loss: 0.9181 - val_loss: 0.6808 - accuracy: 0.9690
Epoch 3/5
250/250 [==============================] - 252s 1s/step - loss: 0.3910 - val_loss: 0.4303 - accuracy: 0.9820
Epoch 4/5
250/250 [==============================] - 251s 1s/step - loss: 0.2028 - val_loss: 0.3191 - accuracy: 0.9900
Epoch 5/5
250/250 [==============================] - 238s 949ms/step - loss: 0.1232 - val_loss: 0.3259 - accuracy: 0.9890
```

Congratulations! You have fine-tuned your model and shared it on the 🤗 Hub.
You can now use it for inference!

For a more in-depth example of how to finetune a model for image classification, take a look at the corresponding [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).

## Inference

Great, now that you’ve fine-tuned a model, you can use it for inference!

Load an image you’d like to run inference on:

```
>>> ds = load_dataset("food101", split="validation[:10]")
>>> image = ds["image"][0]
```

![image of beignets](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png)

The simplest way to try out your finetuned model for inference is to use it in a [pipeline()](/docs/transformers/v4.34.0/en/main_classes/pipelines#transformers.pipeline). Instantiate a `pipeline` for image classification with your model, and pass your image to it:

```
>>> from transformers import pipeline

>>> classifier = pipeline("image-classification", model="my_awesome_food_model")
>>> classifier(image)
[{'score': 0.31856709718704224, 'label': 'beignets'},
 {'score': 0.015232225880026817, 'label': 'bruschetta'},
 {'score': 0.01519392803311348, 'label': 'chicken_wings'},
 {'score': 0.013022331520915031, 'label': 'pork_chop'},
 {'score': 0.012728818692266941, 'label': 'prime_rib'}]
```

You can also manually replicate the results of the `pipeline` if you’d like:
Load an image processor to preprocess the image and return the `input` as PyTorch tensors:

```
>>> from transformers import AutoImageProcessor
>>> import torch

>>> image_processor = AutoImageProcessor.from_pretrained("my_awesome_food_model")
>>> inputs = image_processor(image, return_tensors="pt")
```

Pass your inputs to the model and return the logits:

```
>>> from transformers import AutoModelForImageClassification

>>> model = AutoModelForImageClassification.from_pretrained("my_awesome_food_model")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
```

Get the predicted label with the highest probability, and use the model’s `id2label` mapping to convert it to a label:

```
>>> predicted_label = logits.argmax(-1).item()
>>> model.config.id2label[predicted_label]
'beignets'
```
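The `pipeline` above also reported a score for each class. If you want to reproduce that from the raw logits, here is a minimal sketch, assuming the `model`, `logits`, and `id2label` mapping from the steps above:

```
>>> # softmax over the logits gives per-class probabilities; keep the 5 highest
>>> probabilities = torch.nn.functional.softmax(logits, dim=-1)[0]
>>> top5 = torch.topk(probabilities, k=5)
>>> [{"score": score.item(), "label": model.config.id2label[idx.item()]} for score, idx in zip(top5.values, top5.indices)]
```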
Load an image processor to preprocess the image and return the `input` as TensorFlow tensors:

```
>>> from transformers import AutoImageProcessor

>>> image_processor = AutoImageProcessor.from_pretrained("MariaK/food_classifier")
>>> inputs = image_processor(image, return_tensors="tf")
```

Pass your inputs to the model and return the logits:

```
>>> from transformers import TFAutoModelForImageClassification

>>> model = TFAutoModelForImageClassification.from_pretrained("MariaK/food_classifier")
>>> logits = model(**inputs).logits
```

Get the predicted label with the highest probability, and use the model’s `id2label` mapping to convert it to a label:

```
>>> predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
>>> model.config.id2label[predicted_class_id]
'beignets'
```
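The same per-class scores can be recovered in TensorFlow. A sketch mirroring the PyTorch snippet above, assuming the `logits` just computed:

```
>>> # softmax over the logits, then keep the 5 highest-scoring classes
>>> probabilities = tf.nn.softmax(logits, axis=-1)[0]
>>> top5 = tf.math.top_k(probabilities, k=5)
>>> [{"score": float(score), "label": model.config.id2label[int(idx)]} for score, idx in zip(top5.values, top5.indices)]
```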
2023-10-05T13:33:45.607Z
Object detection
https://huggingface.co/docs/transformers/v4.34.0/en/tasks/object_detection
# Object detection

Object detection is the computer vision task of detecting instances (such as humans, buildings, or cars) in an image. Object detection models receive an image as input and output coordinates of the bounding boxes and associated labels of the detected objects. An image can contain multiple objects, each with its own bounding box and a label (e.g. it can have a car and a building), and each object can be present in different parts of an image (e.g. the image can have several cars). This task is commonly used in autonomous driving for detecting things like pedestrians, road signs, and traffic lights. Other applications include counting objects in images, image search, and more.

In this guide, you will learn how to:

1. Finetune [DETR](https://huggingface.co/docs/transformers/model_doc/detr), a model that combines a convolutional backbone with an encoder-decoder Transformer, on the [CPPE-5](https://huggingface.co/datasets/cppe-5) dataset.
2. Use your finetuned model for inference.

The task illustrated in this tutorial is supported by the following model architectures:

[Conditional DETR](../model_doc/conditional_detr), [Deformable DETR](../model_doc/deformable_detr), [DETA](../model_doc/deta), [DETR](../model_doc/detr), [Table Transformer](../model_doc/table-transformer), [YOLOS](../model_doc/yolos)

Before you begin, make sure you have all the necessary libraries installed:

```
pip install -q datasets transformers evaluate timm albumentations
```

You’ll use 🤗 Datasets to load a dataset from the Hugging Face Hub, 🤗 Transformers to train your model, and `albumentations` to augment the data. `timm` is currently required to load a convolutional backbone for the DETR model.

We encourage you to share your model with the community. Log in to your Hugging Face account to upload it to the Hub. When prompted, enter your token to log in:

```
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```

## Load the CPPE-5 dataset

The [CPPE-5 dataset](https://huggingface.co/datasets/cppe-5) contains images with annotations identifying medical personal protective equipment (PPE) in the context of the COVID-19 pandemic.

Start by loading the dataset:

```
>>> from datasets import load_dataset

>>> cppe5 = load_dataset("cppe-5")
>>> cppe5
DatasetDict({
    train: Dataset({
        features: ['image_id', 'image', 'width', 'height', 'objects'],
        num_rows: 1000
    })
    test: Dataset({
        features: ['image_id', 'image', 'width', 'height', 'objects'],
        num_rows: 29
    })
})
```

You’ll see that this dataset already comes with a training set containing 1000 images and a test set with 29 images. To get familiar with the data, explore what the examples look like.
```
>>> cppe5["train"][0]
{'image_id': 15,
 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=943x663 at 0x7F9EC9E77C10>,
 'width': 943,
 'height': 663,
 'objects': {'id': [114, 115, 116, 117],
  'area': [3796, 1596, 152768, 81002],
  'bbox': [[302.0, 109.0, 73.0, 52.0],
   [810.0, 100.0, 57.0, 28.0],
   [160.0, 31.0, 248.0, 616.0],
   [741.0, 68.0, 202.0, 401.0]],
  'category': [4, 4, 0, 0]}}
```

The examples in the dataset have the following fields:

- `image_id`: the example image id
- `image`: a `PIL.Image.Image` object containing the image
- `width`: width of the image
- `height`: height of the image
- `objects`: a dictionary containing bounding box metadata for the objects in the image:
  - `id`: the annotation id
  - `area`: the area of the bounding box
  - `bbox`: the object’s bounding box (in the [COCO format](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco))
  - `category`: the object’s category, with possible values including `Coverall (0)`, `Face_Shield (1)`, `Gloves (2)`, `Goggles (3)` and `Mask (4)`

You may notice that the `bbox` field follows the COCO format, which is the format that the DETR model expects. However, the grouping of the fields inside `objects` differs from the annotation format DETR requires. You will need to apply some preprocessing transformations before using this data for training.

To get an even better understanding of the data, visualize an example in the dataset.

```
>>> import numpy as np
>>> import os
>>> from PIL import Image, ImageDraw

>>> image = cppe5["train"][0]["image"]
>>> annotations = cppe5["train"][0]["objects"]
>>> draw = ImageDraw.Draw(image)

>>> categories = cppe5["train"].features["objects"].feature["category"].names

>>> id2label = {index: x for index, x in enumerate(categories, start=0)}
>>> label2id = {v: k for k, v in id2label.items()}

>>> for i in range(len(annotations["id"])):
...     box = annotations["bbox"][i]
...     class_idx = annotations["category"][i]
...     x, y, w, h = tuple(box)
...     draw.rectangle((x, y, x + w, y + h), outline="red", width=1)
...     draw.text((x, y), id2label[class_idx], fill="white")

>>> image
```

![CPPE-5 Image Example](https://i.imgur.com/TdaqPJO.png)

To visualize the bounding boxes with associated labels, you can get the labels from the dataset’s metadata, specifically the `category` field. You’ll also want to create dictionaries that map a label id to a label class (`id2label`) and the other way around (`label2id`). You can use them later when setting up the model. Including these maps will make your model reusable by others if you share it on the Hugging Face Hub.

As a final step of getting familiar with the data, explore it for potential issues. One common problem with datasets for object detection is bounding boxes that “stretch” beyond the edge of the image. Such “runaway” bounding boxes can raise errors during training and should be addressed at this stage. There are a few examples with this issue in this dataset. To keep things simple in this guide, we remove these images from the data.

```
>>> remove_idx = [590, 821, 822, 875, 876, 878, 879]
>>> keep = [i for i in range(len(cppe5["train"])) if i not in remove_idx]
>>> cppe5["train"] = cppe5["train"].select(keep)
```
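If you would rather locate such boxes programmatically than rely on hard-coded indices, the check boils down to comparing each box against the image dimensions. A rough sketch is shown below; the `runaway_indices` helper name is purely illustrative:

```
>>> # a box "runs away" if it extends past the image borders
>>> def runaway_indices(dataset):
...     indices = []
...     for i, (width, height, objects) in enumerate(zip(dataset["width"], dataset["height"], dataset["objects"])):
...         if any(x + w > width or y + h > height for x, y, w, h in objects["bbox"]):
...             indices.append(i)
...     return indices

>>> runaway_indices(cppe5["train"])  # after the filtering above this should come back empty
[]
```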
## Preprocess the data

To finetune a model, you must preprocess the data you plan to use to match precisely the approach used for the pre-trained model. [AutoImageProcessor](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoImageProcessor) takes care of processing image data to create `pixel_values`, `pixel_mask`, and `labels` that a DETR model can train with. The image processor has some attributes that you won’t have to worry about:

- `image_mean = [0.485, 0.456, 0.406]`
- `image_std = [0.229, 0.224, 0.225]`

These are the mean and standard deviation used to normalize images during the model pre-training. These values are crucial to replicate when doing inference or finetuning a pre-trained image model.

Instantiate the image processor from the same checkpoint as the model you want to finetune.

```
>>> from transformers import AutoImageProcessor

>>> checkpoint = "facebook/detr-resnet-50"
>>> image_processor = AutoImageProcessor.from_pretrained(checkpoint)
```

Before passing the images to the `image_processor`, apply two preprocessing transformations to the dataset:

- Augmenting images
- Reformatting annotations to meet DETR expectations

First, to make sure the model does not overfit on the training data, you can apply image augmentation with any data augmentation library. Here we use [Albumentations](https://albumentations.ai/docs/), a library that makes sure transformations affect the image and update the bounding boxes accordingly. The 🤗 Datasets library documentation has a detailed [guide on how to augment images for object detection](https://huggingface.co/docs/datasets/object_detection), and it uses the exact same dataset as an example. Apply the same approach here, resize each image to (480, 480), flip it horizontally, and brighten it:

```
>>> import albumentations
>>> import numpy as np
>>> import torch

>>> transform = albumentations.Compose(
...     [
...         albumentations.Resize(480, 480),
...         albumentations.HorizontalFlip(p=1.0),
...         albumentations.RandomBrightnessContrast(p=1.0),
...     ],
...     bbox_params=albumentations.BboxParams(format="coco", label_fields=["category"]),
... )
```

The `image_processor` expects the annotations to be in the following format: `{'image_id': int, 'annotations': List[Dict]}`, where each dictionary is a COCO object annotation. Let’s add a function to reformat annotations for a single example:

```
>>> def formatted_anns(image_id, category, area, bbox):
...     annotations = []
...     for i in range(0, len(category)):
...         new_ann = {
...             "image_id": image_id,
...             "category_id": category[i],
...             "isCrowd": 0,
...             "area": area[i],
...             "bbox": list(bbox[i]),
...         }
...         annotations.append(new_ann)
...     return annotations
```

Now you can combine the image and annotation transformations to use on a batch of examples:

```
>>> # apply the augmentations and reformat the annotations for a batch of examples
>>> def transform_aug_ann(examples):
...     image_ids = examples["image_id"]
...     images, bboxes, area, categories = [], [], [], []
...     for image, objects in zip(examples["image"], examples["objects"]):
...         image = np.array(image.convert("RGB"))[:, :, ::-1]
...         out = transform(image=image, bboxes=objects["bbox"], category=objects["category"])
...         area.append(objects["area"])
...         images.append(out["image"])
...         bboxes.append(out["bboxes"])
...         categories.append(out["category"])
...     targets = [
...         {"image_id": id_, "annotations": formatted_anns(id_, cat_, ar_, box_)}
...         for id_, cat_, ar_, box_ in zip(image_ids, categories, area, bboxes)
...     ]
...     return image_processor(images=images, annotations=targets, return_tensors="pt")
```
Apply this preprocessing function to the entire dataset using 🤗 Datasets [with_transform](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.with_transform) method. This method applies transformations on the fly when you load an element of the dataset.

At this point, you can check what an example from the dataset looks like after the transformations. You should see a tensor with `pixel_values`, a tensor with `pixel_mask`, and `labels`.

```
>>> cppe5["train"] = cppe5["train"].with_transform(transform_aug_ann)
>>> cppe5["train"][15]
{'pixel_values': tensor([[[ 0.9132,  0.9132,  0.9132,  ..., -1.9809, -1.9809, -1.9809],
          [ 0.9132,  0.9132,  0.9132,  ..., -1.9809, -1.9809, -1.9809],
          [ 0.9132,  0.9132,  0.9132,  ..., -1.9638, -1.9638, -1.9638],
          ...,
          [-1.5699, -1.5699, -1.5699,  ..., -1.9980, -1.9980, -1.9980],
          [-1.5528, -1.5528, -1.5528,  ..., -1.9980, -1.9809, -1.9809],
          [-1.5528, -1.5528, -1.5528,  ..., -1.9980, -1.9809, -1.9809]],

         [[ 1.3081,  1.3081,  1.3081,  ..., -1.8431, -1.8431, -1.8431],
          [ 1.3081,  1.3081,  1.3081,  ..., -1.8431, -1.8431, -1.8431],
          [ 1.3081,  1.3081,  1.3081,  ..., -1.8256, -1.8256, -1.8256],
          ...,
          [-1.3179, -1.3179, -1.3179,  ..., -1.8606, -1.8606, -1.8606],
          [-1.3004, -1.3004, -1.3004,  ..., -1.8606, -1.8431, -1.8431],
          [-1.3004, -1.3004, -1.3004,  ..., -1.8606, -1.8431, -1.8431]],

         [[ 1.4200,  1.4200,  1.4200,  ..., -1.6476, -1.6476, -1.6476],
          [ 1.4200,  1.4200,  1.4200,  ..., -1.6476, -1.6476, -1.6476],
          [ 1.4200,  1.4200,  1.4200,  ..., -1.6302, -1.6302, -1.6302],
          ...,
          [-1.0201, -1.0201, -1.0201,  ..., -1.5604, -1.5604, -1.5604],
          [-1.0027, -1.0027, -1.0027,  ..., -1.5604, -1.5430, -1.5430],
          [-1.0027, -1.0027, -1.0027,  ..., -1.5604, -1.5430, -1.5430]]]),
 'pixel_mask': tensor([[1, 1, 1,  ..., 1, 1, 1],
         [1, 1, 1,  ..., 1, 1, 1],
         [1, 1, 1,  ..., 1, 1, 1],
         ...,
         [1, 1, 1,  ..., 1, 1, 1],
         [1, 1, 1,  ..., 1, 1, 1],
         [1, 1, 1,  ..., 1, 1, 1]]),
 'labels': {'size': tensor([800, 800]), 'image_id': tensor([756]), 'class_labels': tensor([4]), 'boxes': tensor([[0.7340, 0.6986, 0.3414, 0.5944]]), 'area': tensor([519544.4375]), 'iscrowd': tensor([0]), 'orig_size': tensor([480, 480])}}
```

You have successfully augmented the individual images and prepared their annotations. However, preprocessing isn’t complete yet. In the final step, create a custom `collate_fn` to batch images together. Pad images (which are now `pixel_values`) to the largest image in a batch, and create a corresponding `pixel_mask` to indicate which pixels are real (1) and which are padding (0).

```
>>> def collate_fn(batch):
...     pixel_values = [item["pixel_values"] for item in batch]
...     encoding = image_processor.pad(pixel_values, return_tensors="pt")
...     labels = [item["labels"] for item in batch]
...     batch = {}
...     batch["pixel_values"] = encoding["pixel_values"]
...     batch["pixel_mask"] = encoding["pixel_mask"]
...     batch["labels"] = labels
...     return batch
```
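Before moving on to training, it can be worth confirming that the transform and `collate_fn` work together. The snippet below is an optional smoke test under the setup above; the `DataLoader` here is only for inspection (the `Trainer` builds its own):

```
>>> from torch.utils.data import DataLoader

>>> loader = DataLoader(cppe5["train"], batch_size=2, collate_fn=collate_fn)
>>> batch = next(iter(loader))
>>> batch["pixel_values"].shape  # two padded images, e.g. torch.Size([2, 3, 800, 800])
>>> len(batch["labels"])  # one annotation dict per image
```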
## Training the DETR model

You have done most of the heavy lifting in the previous sections, so now you are ready to train your model! The images in this dataset are still quite large, even after resizing. This means that finetuning this model will require at least one GPU.

Training involves the following steps:

1. Load the model with [AutoModelForObjectDetection](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoModelForObjectDetection) using the same checkpoint as in the preprocessing.
2. Define your training hyperparameters in [TrainingArguments](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.TrainingArguments).
3. Pass the training arguments to [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) along with the model, dataset, image processor, and data collator.
4. Call [train()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.train) to finetune your model.

When loading the model from the same checkpoint that you used for the preprocessing, remember to pass the `label2id` and `id2label` maps that you created earlier from the dataset's metadata. Additionally, we specify `ignore_mismatched_sizes=True` to replace the existing classification head with a new one.

```
>>> from transformers import AutoModelForObjectDetection

>>> model = AutoModelForObjectDetection.from_pretrained(
...     checkpoint,
...     id2label=id2label,
...     label2id=label2id,
...     ignore_mismatched_sizes=True,
... )
```

In the [TrainingArguments](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.TrainingArguments) use `output_dir` to specify where to save your model, then configure hyperparameters as you see fit. It is important you do not remove unused columns because this would drop the image column. Without the image column, you can't create `pixel_values`. For this reason, set `remove_unused_columns` to `False`. If you wish to share your model by pushing to the Hub, set `push_to_hub` to `True` (you must be signed in to Hugging Face to upload your model).

```
>>> from transformers import TrainingArguments

>>> training_args = TrainingArguments(
...     output_dir="detr-resnet-50_finetuned_cppe5",
...     per_device_train_batch_size=8,
...     num_train_epochs=10,
...     fp16=True,
...     save_steps=200,
...     logging_steps=50,
...     learning_rate=1e-5,
...     weight_decay=1e-4,
...     save_total_limit=2,
...     remove_unused_columns=False,
...     push_to_hub=True,
... )
```

Finally, bring everything together, and call [train()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.train):

```
>>> from transformers import Trainer

>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     data_collator=collate_fn,
...     train_dataset=cppe5["train"],
...     tokenizer=image_processor,
... )

>>> trainer.train()
```

If you have set `push_to_hub` to `True` in the `training_args`, the training checkpoints are pushed to the Hugging Face Hub. Upon training completion, push the final model to the Hub as well by calling the [push_to_hub()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.push_to_hub) method.

```
>>> trainer.push_to_hub()
```

## Evaluate

Object detection models are commonly evaluated with a set of [COCO-style metrics](https://cocodataset.org/#detection-eval). You can use one of the existing metrics implementations, but here you'll use the one from `torchvision` to evaluate the final model that you pushed to the Hub.

To use the `torchvision` evaluator, you'll need to prepare a ground truth COCO dataset. The API to build a COCO dataset requires the data to be stored in a certain format, so you'll need to save images and annotations to disk first. Just like when you prepared your data for training, the annotations from `cppe5["test"]` need to be formatted. However, images should stay as they are.

The evaluation step requires a bit of work, but it can be split into three major steps. First, prepare the `cppe5["test"]` set: format the annotations and save the data to disk.
```
>>> import json
>>> import os

>>> # format annotations the same way as for training (no augmentation needed here)
>>> def val_formatted_anns(image_id, objects):
...     annotations = []
...     for i in range(0, len(objects["id"])):
...         new_ann = {
...             "id": objects["id"][i],
...             "category_id": objects["category"][i],
...             "iscrowd": 0,
...             "image_id": image_id,
...             "area": objects["area"][i],
...             "bbox": objects["bbox"][i],
...         }
...         annotations.append(new_ann)
...     return annotations

>>> # save images and annotations to disk in the layout torchvision.datasets.CocoDetection expects
>>> def save_cppe5_annotation_file_images(cppe5):
...     output_json = {}
...     path_output_cppe5 = f"{os.getcwd()}/cppe5/"
...     if not os.path.exists(path_output_cppe5):
...         os.makedirs(path_output_cppe5)
...     path_anno = os.path.join(path_output_cppe5, "cppe5_ann.json")
...     categories_json = [{"supercategory": "none", "id": id, "name": id2label[id]} for id in id2label]
...     output_json["images"] = []
...     output_json["annotations"] = []
...     for example in cppe5:
...         ann = val_formatted_anns(example["image_id"], example["objects"])
...         output_json["images"].append(
...             {
...                 "id": example["image_id"],
...                 "width": example["image"].width,
...                 "height": example["image"].height,
...                 "file_name": f"{example['image_id']}.png",
...             }
...         )
...         output_json["annotations"].extend(ann)
...     output_json["categories"] = categories_json
...     with open(path_anno, "w") as file:
...         json.dump(output_json, file, ensure_ascii=False, indent=4)
...     for im, img_id in zip(cppe5["image"], cppe5["image_id"]):
...         path_img = os.path.join(path_output_cppe5, f"{img_id}.png")
...         im.save(path_img)
...     return path_output_cppe5, path_anno
```

Next, prepare an instance of a `CocoDetection` class that can be used with `cocoevaluator`.

```
>>> import torchvision

>>> class CocoDetection(torchvision.datasets.CocoDetection):
...     def __init__(self, img_folder, image_processor, ann_file):
...         super().__init__(img_folder, ann_file)
...         self.image_processor = image_processor
...     def __getitem__(self, idx):
...         # read in the PIL image and the target in COCO format
...         img, target = super(CocoDetection, self).__getitem__(idx)
...         # preprocess the image and convert the target to DETR format
...         image_id = self.ids[idx]
...         target = {"image_id": image_id, "annotations": target}
...         encoding = self.image_processor(images=img, annotations=target, return_tensors="pt")
...         pixel_values = encoding["pixel_values"].squeeze()
...         target = encoding["labels"][0]
...         return {"pixel_values": pixel_values, "labels": target}

>>> im_processor = AutoImageProcessor.from_pretrained("devonho/detr-resnet-50_finetuned_cppe5")

>>> path_output_cppe5, path_anno = save_cppe5_annotation_file_images(cppe5["test"])
>>> test_ds_coco_format = CocoDetection(path_output_cppe5, im_processor, path_anno)
```
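Before running the full evaluation loop, it can help to confirm that the COCO-formatted test set yields the same kind of examples the model was trained on. This optional check is a sketch; the index 0 is arbitrary and no particular output values are assumed:

```
>>> sample = test_ds_coco_format[0]
>>> sample["pixel_values"].shape  # a (3, height, width) tensor produced by the image processor
>>> sample["labels"].keys()       # the same keys as the training labels: 'boxes', 'class_labels', 'area', ...
```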
Finally, load the metrics and run the evaluation.

```
>>> import evaluate
>>> from tqdm import tqdm

>>> model = AutoModelForObjectDetection.from_pretrained("devonho/detr-resnet-50_finetuned_cppe5")
>>> module = evaluate.load("ybelkada/cocoevaluate", coco=test_ds_coco_format.coco)
>>> val_dataloader = torch.utils.data.DataLoader(
...     test_ds_coco_format, batch_size=8, shuffle=False, num_workers=4, collate_fn=collate_fn
... )

>>> with torch.no_grad():
...     for idx, batch in enumerate(tqdm(val_dataloader)):
...         pixel_values = batch["pixel_values"]
...         pixel_mask = batch["pixel_mask"]
...         labels = [
...             {k: v for k, v in t.items()} for t in batch["labels"]
...         ]  # these are in DETR format, resized and normalized
...         # forward pass
...         outputs = model(pixel_values=pixel_values, pixel_mask=pixel_mask)
...         orig_target_sizes = torch.stack([target["orig_size"] for target in labels], dim=0)
...         results = im_processor.post_process(outputs, orig_target_sizes)  # convert model outputs to the COCO API format
...         module.add(prediction=results, reference=labels)
...         del batch

>>> results = module.compute()
>>> print(results)
Accumulating evaluation results...
DONE (t=0.08s).
IoU metric: bbox
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.352
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.681
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.292
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.168
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.208
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.429
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.274
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.484
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.501
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.191
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.323
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.590
```

These results can be further improved by adjusting the hyperparameters in [TrainingArguments](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.TrainingArguments). Give it a go!

## Inference

Now that you have finetuned a DETR model, evaluated it, and uploaded it to the Hugging Face Hub, you can use it for inference. The simplest way to try out your finetuned model for inference is to use it in a [Pipeline](/docs/transformers/v4.34.0/en/main_classes/pipelines#transformers.Pipeline). Instantiate a pipeline for object detection with your model, and pass an image to it:

```
>>> from transformers import pipeline
>>> import requests
>>> from PIL import Image

>>> url = "https://i.imgur.com/2lnWoly.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> obj_detector = pipeline("object-detection", model="devonho/detr-resnet-50_finetuned_cppe5")
>>> obj_detector(image)
```

You can also manually replicate the results of the pipeline if you'd like:

```
>>> image_processor = AutoImageProcessor.from_pretrained("devonho/detr-resnet-50_finetuned_cppe5")
>>> model = AutoModelForObjectDetection.from_pretrained("devonho/detr-resnet-50_finetuned_cppe5")

>>> with torch.no_grad():
...     inputs = image_processor(images=image, return_tensors="pt")
...     outputs = model(**inputs)
...     target_sizes = torch.tensor([image.size[::-1]])
...     results = image_processor.post_process_object_detection(outputs, threshold=0.5, target_sizes=target_sizes)[0]

>>> for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
...     box = [round(i, 2) for i in box.tolist()]
...     print(
...         f"Detected {model.config.id2label[label.item()]} with confidence "
...         f"{round(score.item(), 3)} at location {box}"
...     )
Detected Coverall with confidence 0.566 at location [1215.32, 147.38, 4401.81, 3227.08]
Detected Mask with confidence 0.584 at location [2449.06, 823.19, 3256.43, 1413.9]
```

Let's plot the result:

```
>>> from PIL import ImageDraw

>>> draw = ImageDraw.Draw(image)

>>> for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
...     box = [round(i, 2) for i in box.tolist()]
...     x, y, x2, y2 = tuple(box)
...     draw.rectangle((x, y, x2, y2), outline="red", width=1)
...     draw.text((x, y), model.config.id2label[label.item()], fill="white")

>>> image
```

![Object detection result on a new image](https://i.imgur.com/4QZnf9A.png)
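If you plan to run detection on many images, it may be convenient to wrap the manual steps above in a small helper. The sketch below is not part of the original guide; the function name `detect_objects` and the default `threshold` of 0.5 are arbitrary choices, and it reuses the `model` and `image_processor` loaded above:

```
>>> def detect_objects(image, threshold=0.5):
...     """Run the finetuned DETR model on a PIL image and return readable detections."""
...     with torch.no_grad():
...         inputs = image_processor(images=image, return_tensors="pt")
...         outputs = model(**inputs)
...     # rescale the predicted boxes back to the original image size
...     target_sizes = torch.tensor([image.size[::-1]])
...     results = image_processor.post_process_object_detection(
...         outputs, threshold=threshold, target_sizes=target_sizes
...     )[0]
...     return [
...         {
...             "label": model.config.id2label[label.item()],
...             "score": round(score.item(), 3),
...             "box": [round(i, 2) for i in box.tolist()],
...         }
...         for score, label, box in zip(results["scores"], results["labels"], results["boxes"])
...     ]

>>> detect_objects(image)  # the same detections as printed above, now as a list of dicts
```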
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/
model_doc/wavlm&quot;},{&quot;title&quot;:&quot;Whisper&quot;,&quot;id&quot;:&quot;model_doc/whisper&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/whisper&quot;},{&quot;title&quot;:&quot;XLS-R&quot;,&quot;id&quot;:&quot;model_doc/xls_r&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xls_r&quot;},{&quot;title&quot;:&quot;XLSR-Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/xlsr_wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2&quot;}]},{&quot;title&quot;:&quot;Multimodal models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALIGN&quot;,&quot;id&quot;:&quot;model_doc/align&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/align&quot;},{&quot;title&quot;:&quot;AltCLIP&quot;,&quot;id&quot;:&quot;model_doc/altclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/altclip&quot;},{&quot;title&quot;:&quot;BLIP&quot;,&quot;id&quot;:&quot;model_doc/blip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip&quot;},{&quot;title&quot;:&quot;BLIP-2&quot;,&quot;id&quot;:&quot;model_doc/blip-2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip-2&quot;},{&quot;title&quot;:&quot;BridgeTower&quot;,&quot;id&quot;:&quot;model_doc/bridgetower&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bridgetower&quot;},{&quot;title&quot;:&quot;BROS&quot;,&quot;id&quot;:&quot;model_doc/bros&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bros&quot;},{&quot;title&quot;:&quot;Chinese-CLIP&quot;,&quot;id&quot;:&quot;model_doc/chinese_clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/chinese_clip&quot;},{&quot;title&quot;:&quot;CLIP&quot;,&quot;id&quot;:&quot;model_doc/clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clip&quot;},{&quot;title&quot;:&quot;CLIPSeg&quot;,&quot;id&quot;:&quot;model_doc/clipseg&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clipseg&quot;},{&quot;title&quot;:&quot;Data2Vec&quot;,&quot;id&quot;:&quot;model_doc/data2vec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/data2vec&quot;},{&quot;title&quot;:&quot;DePlot&quot;,&quot;id&quot;:&quot;model_doc/deplot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deplot&quot;},{&quot;title&quot;:&quot;Donut&quot;,&quot;id&quot;:&quot;model_doc/donut&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/donut&quot;},{&quot;title&quot;:&quot;FLAVA&quot;,&quot;id&quot;:&quot;model_doc/flava&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flava&quot;},{&quot;title&quot;:&quot;GIT&quot;,&quot;id&quot;:&quot;model_doc/git&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/git&quot;},{&quot;title&quot;:&quot;GroupViT&quot;,&quot;id&quot;:&quot;model_doc/groupvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/groupvit&quot;},{&quot;title&quot;:&quot;IDEFICS&quot;,&quot;id&quot;:&quot;model_doc/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/idefics&quot;},{&quot;title&quot;:&quot;InstructBLIP&quot;,&quot;id&quot;:&quot;model_doc/instructblip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/instructblip&quot;},{&quot;title&quot;:&quot;LayoutLM&quot;,&quot;id&quot;:&quot;model_doc/layoutlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlm&quot;},{&quot;title&quot;:&quot;LayoutLMV2&quot;,&quot;i
d&quot;:&quot;model_doc/layoutlmv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv2&quot;},{&quot;title&quot;:&quot;LayoutLMV3&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv3&quot;},{&quot;title&quot;:&quot;LayoutXLM&quot;,&quot;id&quot;:&quot;model_doc/layoutxlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutxlm&quot;},{&quot;title&quot;:&quot;LiLT&quot;,&quot;id&quot;:&quot;model_doc/lilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lilt&quot;},{&quot;title&quot;:&quot;LXMERT&quot;,&quot;id&quot;:&quot;model_doc/lxmert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lxmert&quot;},{&quot;title&quot;:&quot;MatCha&quot;,&quot;id&quot;:&quot;model_doc/matcha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/matcha&quot;},{&quot;title&quot;:&quot;MGP-STR&quot;,&quot;id&quot;:&quot;model_doc/mgp-str&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mgp-str&quot;},{&quot;title&quot;:&quot;Nougat&quot;,&quot;id&quot;:&quot;model_doc/nougat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nougat&quot;},{&quot;title&quot;:&quot;OneFormer&quot;,&quot;id&quot;:&quot;model_doc/oneformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/oneformer&quot;},{&quot;title&quot;:&quot;OWL-ViT&quot;,&quot;id&quot;:&quot;model_doc/owlvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/owlvit&quot;},{&quot;title&quot;:&quot;Perceiver&quot;,&quot;id&quot;:&quot;model_doc/perceiver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/perceiver&quot;},{&quot;title&quot;:&quot;Pix2Struct&quot;,&quot;id&quot;:&quot;model_doc/pix2struct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pix2struct&quot;},{&quot;title&quot;:&quot;Segment Anything&quot;,&quot;id&quot;:&quot;model_doc/sam&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sam&quot;},{&quot;title&quot;:&quot;Speech Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/speech-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder&quot;},{&quot;title&quot;:&quot;TAPAS&quot;,&quot;id&quot;:&quot;model_doc/tapas&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapas&quot;},{&quot;title&quot;:&quot;TrOCR&quot;,&quot;id&quot;:&quot;model_doc/trocr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trocr&quot;},{&quot;title&quot;:&quot;TVLT&quot;,&quot;id&quot;:&quot;model_doc/tvlt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tvlt&quot;},{&quot;title&quot;:&quot;ViLT&quot;,&quot;id&quot;:&quot;model_doc/vilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vilt&quot;},{&quot;title&quot;:&quot;Vision Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/vision-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder&quot;},{&quot;title&quot;:&quot;Vision Text Dual 
Encoder&quot;,&quot;id&quot;:&quot;model_doc/vision-text-dual-encoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder&quot;},{&quot;title&quot;:&quot;VisualBERT&quot;,&quot;id&quot;:&quot;model_doc/visual_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/visual_bert&quot;},{&quot;title&quot;:&quot;X-CLIP&quot;,&quot;id&quot;:&quot;model_doc/xclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xclip&quot;}]},{&quot;title&quot;:&quot;Reinforcement learning models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Decision Transformer&quot;,&quot;id&quot;:&quot;model_doc/decision_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/decision_transformer&quot;},{&quot;title&quot;:&quot;Trajectory Transformer&quot;,&quot;id&quot;:&quot;model_doc/trajectory_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer&quot;}]},{&quot;title&quot;:&quot;Time series models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Autoformer&quot;,&quot;id&quot;:&quot;model_doc/autoformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/autoformer&quot;},{&quot;title&quot;:&quot;Informer&quot;,&quot;id&quot;:&quot;model_doc/informer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/informer&quot;},{&quot;title&quot;:&quot;Time Series Transformer&quot;,&quot;id&quot;:&quot;model_doc/time_series_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/time_series_transformer&quot;}]},{&quot;title&quot;:&quot;Graph models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Graphormer&quot;,&quot;id&quot;:&quot;model_doc/graphormer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/graphormer&quot;}]}]},{&quot;title&quot;:&quot;Internal Helpers&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Custom Layers and Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/modeling_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/modeling_utils&quot;},{&quot;title&quot;:&quot;Utilities for pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/pipelines_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/pipelines_utils&quot;},{&quot;title&quot;:&quot;Utilities for Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/tokenization_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/tokenization_utils&quot;},{&quot;title&quot;:&quot;Utilities for Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/trainer_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/trainer_utils&quot;},{&quot;title&quot;:&quot;Utilities for Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/generation_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/generation_utils&quot;},{&quot;title&quot;:&quot;Utilities for Image Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/image_processing_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/image_processing_utils&quot;},{&quot;title&quot;:&quot;Utilities for Audio 
processing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/audio_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/audio_utils&quot;},{&quot;title&quot;:&quot;General Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/file_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/file_utils&quot;},{&quot;title&quot;:&quot;Utilities for Time Series&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/time_series_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/time_series_utils&quot;}]}]}],&quot;chapterId&quot;:&quot;tasks/object_detection&quot;,&quot;docType&quot;:&quot;docs&quot;,&quot;isLoggedIn&quot;:false,&quot;lang&quot;:&quot;en&quot;,&quot;langs&quot;:[&quot;de&quot;,&quot;en&quot;,&quot;es&quot;,&quot;fr&quot;,&quot;it&quot;,&quot;ko&quot;,&quot;pt&quot;,&quot;zh&quot;],&quot;library&quot;:&quot;transformers&quot;,&quot;theme&quot;:&quot;light&quot;,&quot;version&quot;:&quot;v4.34.0&quot;,&quot;versions&quot;:[{&quot;version&quot;:&quot;main&quot;},{&quot;version&quot;:&quot;v4.34.0&quot;},{&quot;version&quot;:&quot;v4.33.3&quot;},{&quot;version&quot;:&quot;v4.33.2&quot;},{&quot;version&quot;:&quot;v4.33.0&quot;},{&quot;version&quot;:&quot;v4.32.1&quot;},{&quot;version&quot;:&quot;v4.32.0&quot;},{&quot;version&quot;:&quot;v4.31.0&quot;},{&quot;version&quot;:&quot;v4.30.0&quot;},{&quot;version&quot;:&quot;v4.29.1&quot;},{&quot;version&quot;:&quot;v4.29.0&quot;},{&quot;version&quot;:&quot;v4.28.1&quot;},{&quot;version&quot;:&quot;v4.28.0&quot;},{&quot;version&quot;:&quot;v4.27.2&quot;},{&quot;version&quot;:&quot;v4.27.1&quot;},{&quot;version&quot;:&quot;v4.27.0&quot;},{&quot;version&quot;:&quot;v4.26.1&quot;},{&quot;version&quot;:&quot;v4.26.0&quot;},{&quot;version&quot;:&quot;v4.25.1&quot;},{&quot;version&quot;:&quot;v4.24.0&quot;},{&quot;version&quot;:&quot;v4.23.1&quot;},{&quot;version&quot;:&quot;v4.23.0&quot;},{&quot;version&quot;:&quot;v4.22.2&quot;},{&quot;version&quot;:&quot;v4.22.1&quot;},{&quot;version&quot;:&quot;v4.22.0&quot;},{&quot;version&quot;:&quot;v4.21.3&quot;},{&quot;version&quot;:&quot;v4.21.2&quot;},{&quot;version&quot;:&quot;v4.21.1&quot;},{&quot;version&quot;:&quot;v4.21.0&quot;},{&quot;version&quot;:&quot;v4.20.1&quot;},{&quot;version&quot;:&quot;v4.20.0&quot;},{&quot;version&quot;:&quot;v4.19.4&quot;},{&quot;version&quot;:&quot;v4.19.3&quot;},{&quot;version&quot;:&quot;v4.19.2&quot;},{&quot;version&quot;:&quot;v4.19.0&quot;},{&quot;version&quot;:&quot;v4.18.0&quot;},{&quot;version&quot;:&quot;v4.17.0&quot;},{&quot;version&quot;:&quot;v4.16.2&quot;},{&quot;version&quot;:&quot;v4.16.1&quot;},{&quot;version&quot;:&quot;v4.16.0&quot;},{&quot;version&quot;:&quot;v4.15.0&quot;},{&quot;version&quot;:&quot;v4.14.1&quot;},{&quot;version&quot;:&quot;v4.13.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.5&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.4&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.10.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&qu
ot;v4.10.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.10.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.0.0&quot;},{&quot;version&quot;:&quo
t;doc-builder-html&quot;}],&quot;title&quot;:&quot;Object detection&quot;}" data-target="SideMenu"> <div class="z-2 w-full flex-none lg:block lg:h-screen lg:w-[270px] 2xl:w-[300px] false"><div class="shadow-alternate flex h-16 w-full items-center rounded-b-xl border-b bg-white text-lg leading-tight lg:hidden"><div class="flex flex-1 cursor-pointer flex-col justify-center self-stretch pl-6"><p class="text-sm text-gray-400 first-letter:capitalize">Transformers documentation</p> <div class="flex items-center"><p class="font-semibold">Object detection</p> <svg class="text-xl false" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></div></div> <button class="hover:shadow-alternate group ml-auto mr-6 inline-flex flex-none cursor-pointer rounded-xl border p-2"><svg class="text-gray-500 group-hover:text-gray-700" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg></button></div> <div class="hidden h-32 flex-col justify-between border-r border-b bg-white bg-gradient-to-r p-4 lg:flex from-orange-50 to-white dark:from-gray-900 dark:to-gray-950"><div class="relative "><button class=" " type="button"><h1 class="flex items-center text-lg font-bold leading-tight first-letter:capitalize"><div class="mr-1.5 h-1.5 w-1.5 rounded-full bg-orange-500 flex-none"></div> Transformers <span><svg class="opacity-70 " xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></span></h1> </button> </div> <button class="shadow-alternate flex w-full items-center rounded-full border bg-white px-2 py-1 text-left text-sm text-gray-400 ring-indigo-200 hover:bg-indigo-50 hover:ring-2 dark:border-gray-700 dark:ring-yellow-600 dark:hover:bg-gray-900 dark:hover:text-yellow-500"><svg class="flex-none mr-1.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg> <div>Search documentation</div> <span class="ml-auto rounded border border-gray-200 bg-gray-100 px-0.5 text-xs dark:border-gray-800 dark:bg-gray-800"><kbd class="font-sans">⌘K</kbd></span></button> <div class="flex items-center"><select class="form-input mr-1 !mt-0 !w-20 rounded !border border-gray-200 p-1 text-xs uppercase dark:!text-gray-400"><option value="0">main</option><option value="1">v4.34.0</option><option value="2">v4.33.3</option><option value="3">v4.32.1</option><option value="4">v4.31.0</option><option value="5">v4.30.0</option><option value="6">v4.29.1</option><option value="7">v4.28.1</option><option value="8">v4.27.2</option><option value="9">v4.26.1</option><option value="10">v4.25.1</option><option 
value="11">v4.24.0</option><option value="12">v4.23.1</option><option value="13">v4.22.2</option><option value="14">v4.21.3</option><option value="15">v4.20.1</option><option value="16">v4.19.4</option><option value="17">v4.18.0</option><option value="18">v4.17.0</option><option value="19">v4.16.2</option><option value="20">v4.15.0</option><option value="21">v4.14.1</option><option value="22">v4.13.0</option><option value="23">v4.12.5</option><option value="24">v4.11.3</option><option value="25">v4.10.1</option><option value="26">v4.9.2</option><option value="27">v4.8.2</option><option value="28">v4.7.0</option><option value="29">v4.6.0</option><option value="30">v4.5.1</option><option value="31">v4.4.2</option><option value="32">v4.3.3</option><option value="33">v4.2.2</option><option value="34">v4.1.1</option><option value="35">v4.0.1</option><option value="36">v3.5.1</option><option value="37">v3.4.0</option><option value="38">v3.3.1</option><option value="39">v3.2.0</option><option value="40">v3.1.0</option><option value="41">v3.0.2</option><option value="42">v2.11.0</option><option value="43">v2.10.0</option><option value="44">v2.9.1</option><option value="45">v2.8.0</option><option value="46">v2.7.0</option><option value="47">v2.6.0</option><option value="48">v2.5.1</option><option value="49">v2.4.1</option><option value="50">v2.3.0</option><option value="51">v2.2.2</option><option value="52">v2.1.1</option><option value="53">v2.0.0</option><option value="54">v1.2.0</option><option value="55">v1.1.0</option><option value="56">v1.0.0</option><option value="57">doc-builder-html</option></select> <select class="form-input mr-1 rounded border-gray-200 p-1 text-xs dark:!text-gray-400 !w-12 !mt-0 !border"><option value="de">DE</option><option value="en">EN</option><option value="es">ES</option><option value="fr">FR</option><option value="it">IT</option><option value="ko">KO</option><option value="pt">PT</option><option value="zh">ZH</option></select> <div class="relative inline-block"><button class="rounded-full border border-gray-100 py-1 pl-2 pr-0.5 flex items-center text-sm text-gray-500 bg-white hover:bg-yellow-50 hover:border-yellow-200 dark:hover:bg-gray-800 dark:hover:border-gray-950 " type="button"><svg class="mr-1.5 text-yellow-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24" fill="currentColor"><path d="M6.05 4.14l-.39-.39a.993.993 0 0 0-1.4 0l-.01.01a.984.984 0 0 0 0 1.4l.39.39c.39.39 1.01.39 1.4 0l.01-.01a.984.984 0 0 0 0-1.4zM3.01 10.5H1.99c-.55 0-.99.44-.99.99v.01c0 .55.44.99.99.99H3c.56.01 1-.43 1-.98v-.01c0-.56-.44-1-.99-1zm9-9.95H12c-.56 0-1 .44-1 .99v.96c0 .55.44.99.99.99H12c.56.01 1-.43 1-.98v-.97c0-.55-.44-.99-.99-.99zm7.74 3.21c-.39-.39-1.02-.39-1.41-.01l-.39.39a.984.984 0 0 0 0 1.4l.01.01c.39.39 1.02.39 1.4 0l.39-.39a.984.984 0 0 0 0-1.4zm-1.81 15.1l.39.39a.996.996 0 1 0 1.41-1.41l-.39-.39a.993.993 0 0 0-1.4 0c-.4.4-.4 1.02-.01 1.41zM20 11.49v.01c0 .55.44.99.99.99H22c.55 0 .99-.44.99-.99v-.01c0-.55-.44-.99-.99-.99h-1.01c-.55 0-.99.44-.99.99zM12 5.5c-3.31 0-6 2.69-6 6s2.69 6 6 6s6-2.69 6-6s-2.69-6-6-6zm-.01 16.95H12c.55 0 .99-.44.99-.99v-.96c0-.55-.44-.99-.99-.99h-.01c-.55 0-.99.44-.99.99v.96c0 .55.44.99.99.99zm-7.74-3.21c.39.39 1.02.39 1.41 0l.39-.39a.993.993 0 0 0 0-1.4l-.01-.01a.996.996 0 0 0-1.41 0l-.39.39c-.38.4-.38 1.02.01 1.41z"></path></svg> </button> </div> <a 
href="https://github.com/huggingface/transformers" class="group ml-auto text-xs text-gray-500 hover:text-gray-700 hover:underline dark:hover:text-gray-300"><svg class="inline-block text-gray-500 group-hover:text-gray-700 dark:group-hover:text-gray-300 mr-1.5 -mt-1 w-4 h-4" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1.03em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 250"><path d="M128.001 0C57.317 0 0 57.307 0 128.001c0 56.554 36.676 104.535 87.535 121.46c6.397 1.185 8.746-2.777 8.746-6.158c0-3.052-.12-13.135-.174-23.83c-35.61 7.742-43.124-15.103-43.124-15.103c-5.823-14.795-14.213-18.73-14.213-18.73c-11.613-7.944.876-7.78.876-7.78c12.853.902 19.621 13.19 19.621 13.19c11.417 19.568 29.945 13.911 37.249 10.64c1.149-8.272 4.466-13.92 8.127-17.116c-28.431-3.236-58.318-14.212-58.318-63.258c0-13.975 5-25.394 13.188-34.358c-1.329-3.224-5.71-16.242 1.24-33.874c0 0 10.749-3.44 35.21 13.121c10.21-2.836 21.16-4.258 32.038-4.307c10.878.049 21.837 1.47 32.066 4.307c24.431-16.56 35.165-13.12 35.165-13.12c6.967 17.63 2.584 30.65 1.255 33.873c8.207 8.964 13.173 20.383 13.173 34.358c0 49.163-29.944 59.988-58.447 63.157c4.591 3.972 8.682 11.762 8.682 23.704c0 17.126-.148 30.91-.148 35.126c0 3.407 2.304 7.398 8.792 6.14C219.37 232.5 256 184.537 256 128.002C256 57.307 198.691 0 128.001 0zm-80.06 182.34c-.282.636-1.283.827-2.194.39c-.929-.417-1.45-1.284-1.15-1.922c.276-.655 1.279-.838 2.205-.399c.93.418 1.46 1.293 1.139 1.931zm6.296 5.618c-.61.566-1.804.303-2.614-.591c-.837-.892-.994-2.086-.375-2.66c.63-.566 1.787-.301 2.626.591c.838.903 1 2.088.363 2.66zm4.32 7.188c-.785.545-2.067.034-2.86-1.104c-.784-1.138-.784-2.503.017-3.05c.795-.547 2.058-.055 2.861 1.075c.782 1.157.782 2.522-.019 3.08zm7.304 8.325c-.701.774-2.196.566-3.29-.49c-1.119-1.032-1.43-2.496-.726-3.27c.71-.776 2.213-.558 3.315.49c1.11 1.03 1.45 2.505.701 3.27zm9.442 2.81c-.31 1.003-1.75 1.459-3.199 1.033c-1.448-.439-2.395-1.613-2.103-2.626c.301-1.01 1.747-1.484 3.207-1.028c1.446.436 2.396 1.602 2.095 2.622zm10.744 1.193c.036 1.055-1.193 1.93-2.715 1.95c-1.53.034-2.769-.82-2.786-1.86c0-1.065 1.202-1.932 2.733-1.958c1.522-.03 2.768.818 2.768 1.868zm10.555-.405c.182 1.03-.875 2.088-2.387 2.37c-1.485.271-2.861-.365-3.05-1.386c-.184-1.056.893-2.114 2.376-2.387c1.514-.263 2.868.356 3.061 1.403z" fill="currentColor"></path></svg> 112,792</a></div></div> <nav class="top-32 hidden lg:flex absolute bottom-0 left-0 w-full flex-col overflow-y-auto border-r px-4 pt-3 pb-16 text-[0.95rem] lg:w-[270px] 2xl:w-[300px]"> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Get started</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/index">🤗 Transformers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/quicktour">Quick tour </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 
first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/installation">Installation </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Tutorials</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pipeline_tutorial">Run inference with pipelines </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/autoclass_tutorial">Write portable code with AutoClass </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/preprocessing">Preprocess data </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/training">Fine-tune a pretrained model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/run_scripts">Train with a script </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/accelerate">Set up distributed training with 🤗 Accelerate </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/peft">Load and train adapters with 🤗 PEFT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/model_sharing">Share your model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/transformers_agents">Agents </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/llm_tutorial">Generation with LLMs </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Task Guides</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 
hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Natural Language Processing</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Computer Vision</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/tasks/image_classification">Image classification </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/tasks/semantic_segmentation">Semantic segmentation </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/tasks/video_classification">Video classification </a><a data-sveltekit-reload="" class="rounded-xl bg-gradient-to-br from-black to-gray-900 py-1 pr-2 pl-2 text-white first:mt-1 last:mb-4 dark:from-gray-800 dark:to-gray-900 ml-4" href="/docs/transformers/v4.34.0/en/tasks/object_detection">Object detection </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/tasks/zero_shot_object_detection">Zero-shot object detection </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/tasks/zero_shot_image_classification">Zero-shot image classification </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/tasks/monocular_depth_estimation">Depth estimation </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 
leading-5"><span>Generation</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Prompting</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Developer guides</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/fast_tokenizers">Use fast tokenizers from 🤗 Tokenizers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/multilingual">Run inference with multilingual models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/create_a_model">Use model-specific APIs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/custom_models">Share a custom model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/chat_templating">Templates for chat models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/sagemaker">Run training on Amazon SageMaker </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/serialization">Export to ONNX </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tflite">Export to TFLite </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/torchscript">Export to TorchScript </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/benchmarks">Benchmarks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/notebooks">Notebooks with examples </a><a data-sveltekit-reload="" 
class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/community">Community resources </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/custom_tools">Custom Tools and Prompts </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/troubleshooting">Troubleshoot </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Performance and scalability</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/performance">Overview </a><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Efficient training techniques</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_gpu_one">Methods and tools for efficient training on a single GPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_gpu_many">Multiple GPUs and parallelism </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_cpu">Efficient training on CPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_cpu_many">Distributed CPU training </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_tpu">Training on TPUs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_tpu_tf">Training on TPU with TensorFlow </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_special">Training on Specialized 
Hardware </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_hardware">Custom hardware for training </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/hpo_train">Hyperparameter Search using Trainer API </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Optimizing inference</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_cpu">Inference on CPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_gpu_one">Inference on one GPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_gpu_many">Inference on many GPUs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_special">Inference on Specialized Hardware </a> </div><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/big_models">Instantiating a big model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/debugging">Troubleshooting </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tf_xla">XLA Integration for TensorFlow Models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/perf_torch_compile">Optimize inference using `torch.compile()` </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Contribute</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" 
href="/docs/transformers/v4.34.0/en/contributing">How to contribute to transformers? </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/add_new_model">How to add a model to 🤗 Transformers? </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/add_tensorflow_model">How to convert a 🤗 Transformers model to TensorFlow? </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/add_new_pipeline">How to add a pipeline to 🤗 Transformers? </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/testing">Testing </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pr_checks">Checks on a Pull Request </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Conceptual guides</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/philosophy">Philosophy </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/glossary">Glossary </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/task_summary">What 🤗 Transformers can do </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tasks_explained">How 🤗 Transformers solve tasks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/model_summary">The Transformer model family </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tokenizer_summary">Summary of the tokenizers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/attention">Attention mechanisms </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 
hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pad_truncation">Padding and truncation </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/bertology">BERTology </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/perplexity">Perplexity of fixed-length models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pipeline_webserver">Pipelines for webserver inference </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/model_memory_anatomy">Model training anatomy </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>API</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Main Classes</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/agent">Agents and Tools </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/model_doc/auto">Auto Classes </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/callback">Callbacks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/configuration">Configuration </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/data_collator">Data Collator </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/keras_callbacks">Keras callbacks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black 
dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/logging">Logging </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/model">Models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/text_generation">Text Generation </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/onnx">ONNX </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/optimizer_schedules">Optimization </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/output">Model outputs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/pipelines">Pipelines </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/processors">Processors </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/quantization">Quantization </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/tokenizer">Tokenizer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/trainer">Trainer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/deepspeed">DeepSpeed Integration </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/feature_extractor">Feature Extractor </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/image_processor">Image Processor </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span 
class="inline-block space-x-1 leading-5"><span>Models</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Text models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Vision models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Reinforcement learning models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Time series models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Graph models</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Internal Helpers</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/modeling_utils">Custom Layers and Utilities </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" 
href="/docs/transformers/v4.34.0/en/internal/pipelines_utils">Utilities for pipelines </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/tokenization_utils">Utilities for Tokenizers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/trainer_utils">Utilities for Trainer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/generation_utils">Utilities for Generation </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/image_processing_utils">Utilities for Image Processors </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/audio_utils">Utilities for Audio processing </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/file_utils">General Utilities </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/time_series_utils">Utilities for Time Series </a> </div> </div></nav></div></div></div> <div class="z-1 min-w-0 flex-1"> <div class="px-6 pt-6 md:px-12 md:pt-16 md:pb-16"><div class="max-w-4xl mx-auto mb-10"><div class="relative overflow-hidden rounded-xl bg-gradient-to-br from-orange-300/10 py-5 px-4 ring-1 ring-orange-100/70 md:px-6 md:py-8"><img alt="Hugging Face's logo" class="absolute -right-6 -bottom-6 w-28 -rotate-45 md:hidden" src="/front/assets/huggingface_logo-noborder.svg"> <div class="mb-2 text-2xl font-bold dark:text-gray-200 md:mb-0">Join the Hugging Face community</div> <p class="mb-4 text-lg text-gray-400 dark:text-gray-300 md:mb-8">and get access to the augmented documentation experience </p> <div class="mb-8 hidden space-y-4 md:block xl:flex xl:space-y-0 xl:space-x-6"><div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-indigo-100 to-indigo-100/20 dark:to-indigo-100"><svg class="text-indigo-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 
3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Collaborate on models, datasets and Spaces </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-orange-100 to-orange-100/20 dark:to-orange-50"><svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" class="text-xl text-yellow-400" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M11 15H6l7-14v8h5l-7 14v-8z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Faster examples with accelerated inference </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-gray-500/10 to-gray-500/5"><svg class="text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M14.9804 3C14.9217 3.0002 14.8631 3.00555 14.8054 3.016C11.622 3.58252 8.76073 5.30669 6.77248 7.85653C4.78422 10.4064 3.80955 13.6016 4.03612 16.8271C4.26268 20.0525 5.67447 23.0801 7.99967 25.327C10.3249 27.5738 13.3991 28.8811 16.6304 28.997C16.7944 29.003 16.9584 28.997 17.1204 28.997C19.2193 28.9984 21.2877 28.4943 23.1507 27.5274C25.0137 26.5605 26.6164 25.1592 27.8234 23.442C27.9212 23.294 27.9783 23.1229 27.9889 22.9458C27.9995 22.7687 27.9633 22.592 27.884 22.4333C27.8046 22.2747 27.6848 22.1397 27.5367 22.0421C27.3887 21.9444 27.2175 21.8875 27.0404 21.877C25.0426 21.7017 23.112 21.0693 21.3976 20.0288C19.6832 18.9884 18.231 17.5676 17.1533 15.8764C16.0756 14.1852 15.4011 12.2688 15.1822 10.2754C14.9632 8.28193 15.2055 6.26484 15.8904 4.38C15.9486 4.22913 15.97 4.06652 15.9527 3.90572C15.9354 3.74492 15.8799 3.59059 15.7909 3.45557C15.7019 3.32055 15.5819 3.20877 15.4409 3.12952C15.2999 3.05028 15.142 3.00587 14.9804 3Z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Switch between documentation themes </div></div></div> <div class="flex items-center space-x-2.5"><a href="/join"><button class="rounded-lg bg-white bg-gradient-to-br from-gray-100/20 to-gray-200/60 py-1.5 px-5 font-semibold text-gray-700 shadow-sm ring-1 ring-gray-300/60 hover:to-gray-100/70 hover:ring-gray-300/30 active:shadow-inner">Sign Up</button></a> <p class="text-gray-500 dark:text-gray-300">to get started</p></div></div></div> <div class="prose-doc prose relative mx-auto max-w-4xl break-words"> <p></p> <h1 class="relative group"><a id="object-detection" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#object-detection"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 
# Object detection

Object detection is the computer vision task of detecting instances (such as humans, buildings, or cars) in an image. Object detection models receive an image as input and output coordinates of the bounding boxes and associated labels of the detected objects. An image can contain multiple objects, each with its own bounding box and a label (e.g. it can have a car and a building), and each object can be present in different parts of an image (e.g. the image can have several cars). This task is commonly used in autonomous driving for detecting things like pedestrians, road signs, and traffic lights. Other applications include counting objects in images, image search, and more.

In this guide, you will learn how to:

1. Finetune [DETR](https://huggingface.co/docs/transformers/model_doc/detr), a model that combines a convolutional backbone with an encoder-decoder Transformer, on the [CPPE-5](https://huggingface.co/datasets/cppe-5) dataset.
2. Use your finetuned model for inference.

The task illustrated in this tutorial is supported by the following model architectures:

[Conditional DETR](../model_doc/conditional_detr), [Deformable DETR](../model_doc/deformable_detr), [DETA](../model_doc/deta), [DETR](../model_doc/detr), [Table Transformer](../model_doc/table-transformer), [YOLOS](../model_doc/yolos)

Before you begin, make sure you have all the necessary libraries installed:

```bash
pip install -q datasets transformers evaluate timm albumentations
```

You'll use 🤗 Datasets to load a dataset from the Hugging Face Hub, 🤗 Transformers to train your model, and `albumentations` to augment the data. `timm` is currently required to load a convolutional backbone for the DETR model.

We encourage you to share your model with the community. Log in to your Hugging Face account to upload it to the Hub. When prompted, enter your token to log in:

```py
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```
data-svelte-h="svelte-9es7uf">The <a href="https://huggingface.co/datasets/cppe-5" rel="nofollow">CPPE-5 dataset</a> contains images with annotations identifying medical personal protective equipment (PPE) in the context of the COVID-19 pandemic.</p> <p data-svelte-h="svelte-kvqzlo">Start by loading the dataset:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> datasets <span class="hljs-keyword">import</span> load_dataset <span class="hljs-meta">&gt;&gt;&gt; </span>cppe5 = load_dataset(<span class="hljs-string">"cppe-5"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>cppe5 DatasetDict({ train: Dataset({ features: [<span class="hljs-string">'image_id'</span>, <span class="hljs-string">'image'</span>, <span class="hljs-string">'width'</span>, <span class="hljs-string">'height'</span>, <span class="hljs-string">'objects'</span>], num_rows: <span class="hljs-number">1000</span> }) test: Dataset({ features: [<span class="hljs-string">'image_id'</span>, <span class="hljs-string">'image'</span>, <span class="hljs-string">'width'</span>, <span class="hljs-string">'height'</span>, <span class="hljs-string">'objects'</span>], num_rows: <span class="hljs-number">29</span> }) })</pre></div> <p data-svelte-h="svelte-1f9ettc">You’ll see that this dataset already comes with a training set containing 1000 images and a test set with 29 images.</p> <p data-svelte-h="svelte-4bevpw">To get familiar with the data, explore what the examples look like.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal 
```py
>>> cppe5["train"][0]
{'image_id': 15,
 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=943x663 at 0x7F9EC9E77C10>,
 'width': 943,
 'height': 663,
 'objects': {'id': [114, 115, 116, 117],
  'area': [3796, 1596, 152768, 81002],
  'bbox': [[302.0, 109.0, 73.0, 52.0],
   [810.0, 100.0, 57.0, 28.0],
   [160.0, 31.0, 248.0, 616.0],
   [741.0, 68.0, 202.0, 401.0]],
  'category': [4, 4, 0, 0]}}
```

The examples in the dataset have the following fields:

- `image_id`: the example image id
- `image`: a `PIL.Image.Image` object containing the image
- `width`: width of the image
- `height`: height of the image
- `objects`: a dictionary containing bounding box metadata for the objects in the image:
  - `id`: the annotation id
  - `area`: the area of the bounding box
  - `bbox`: the object's bounding box (in the [COCO format](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco))
  - `category`: the object's category, with possible values including `Coverall (0)`, `Face_Shield (1)`, `Gloves (2)`, `Goggles (3)` and `Mask (4)` (see the snippet right after this list for where these names come from)
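The integer ids and the names above are tied together by the dataset's `ClassLabel` feature. The lookup below is the same one the visualization code further down uses, and the printed list is what it should return given the mapping above:

```py
>>> cppe5["train"].features["objects"].feature["category"].names
['Coverall', 'Face_Shield', 'Gloves', 'Goggles', 'Mask']
```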
You may notice that the `bbox` field follows the COCO format, which is the format that the DETR model expects. However, the grouping of the fields inside `objects` differs from the annotation format DETR requires. You will need to apply some preprocessing transformations before using this data for training.

To get an even better understanding of the data, visualize an example in the dataset.
```py
>>> import numpy as np
>>> import os
>>> from PIL import Image, ImageDraw

>>> image = cppe5["train"][0]["image"]
>>> annotations = cppe5["train"][0]["objects"]
>>> draw = ImageDraw.Draw(image)

>>> categories = cppe5["train"].features["objects"].feature["category"].names

>>> id2label = {index: x for index, x in enumerate(categories, start=0)}
>>> label2id = {v: k for k, v in id2label.items()}

>>> for i in range(len(annotations["id"])):
...     box = annotations["bbox"][i]
...     class_idx = annotations["category"][i]
...     x, y, w, h = tuple(box)
...     draw.rectangle((x, y, x + w, y + h), outline="red", width=1)
...     draw.text((x, y), id2label[class_idx], fill="white")

>>> image
```

![CPPE-5 Image Example](https://i.imgur.com/TdaqPJO.png)

To visualize the bounding boxes with associated labels, you can get the labels from the dataset's metadata, specifically the `category` field. You'll also want to create dictionaries that map a label id to a label class (`id2label`) and the other way around (`label2id`). You can use them later when setting up the model. Including these maps will make your model reusable by others if you share it on the Hugging Face Hub.

As a final step of getting familiar with the data, explore it for potential issues. One common problem with datasets for object detection is bounding boxes that "stretch" beyond the edge of the image. Such "runaway" bounding boxes can raise errors during training and should be addressed at this stage. There are a few examples with this issue in this dataset. To keep things simple in this guide, we remove these images from the data.
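The indices to drop were identified ahead of time. If you want to flag such images yourself, a sketch along the following lines (a hypothetical helper, not part of the original guide) checks whether any box extends past the image borders:

```py
def has_runaway_bbox(example):
    # COCO-format boxes are [x_min, y_min, width, height]
    for x, y, w, h in example["objects"]["bbox"]:
        if x + w > example["width"] or y + h > example["height"]:
            return True
    return False

# Collect the positions of offending examples in the (still unfiltered) training split.
bad_idx = [i for i, example in enumerate(cppe5["train"]) if has_runaway_bbox(example)]
```

The guide simply removes a fixed set of such indices: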
class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#preprocess-the-data"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-171xy48">Preprocess the data</span></h2> <p data-svelte-h="svelte-5a7bjj">To finetune a model, you must preprocess the data you plan to use to match precisely the approach used for the pre-trained model. <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoImageProcessor">AutoImageProcessor</a> takes care of processing image data to create <code>pixel_values</code>, <code>pixel_mask</code>, and <code>labels</code> that a DETR model can train with. The image processor has some attributes that you won’t have to worry about:</p> <ul data-svelte-h="svelte-9xz2l6"><li><code>image_mean = [0.485, 0.456, 0.406 ]</code></li> <li><code>image_std = [0.229, 0.224, 0.225]</code></li></ul> <p data-svelte-h="svelte-1uiy3io">These are the mean and standard deviation used to normalize images during the model pre-training. 
Instantiate the image processor from the same checkpoint as the model you want to finetune.

```py
>>> from transformers import AutoImageProcessor

>>> checkpoint = "facebook/detr-resnet-50"
>>> image_processor = AutoImageProcessor.from_pretrained(checkpoint)
```

Before passing the images to the `image_processor`, apply two preprocessing transformations to the dataset:

- Augmenting images
- Reformatting annotations to meet DETR expectations

First, to make sure the model does not overfit on the training data, you can apply image augmentation with any data augmentation library. Here we use [Albumentations](https://albumentations.ai/docs/). This library ensures that transformations affect the image and update the bounding boxes accordingly. The 🤗 Datasets library documentation has a detailed [guide on how to augment images for object detection](https://huggingface.co/docs/datasets/object_detection), and it uses the exact same dataset as an example. Apply the same approach here: resize each image to (480, 480), flip it horizontally, and adjust its brightness and contrast:

```py
>>> import albumentations
>>> import numpy as np
>>> import torch

>>> transform = albumentations.Compose(
...     [
...         albumentations.Resize(480, 480),
...         albumentations.HorizontalFlip(p=1.0),
...         albumentations.RandomBrightnessContrast(p=1.0),
...     ],
...     bbox_params=albumentations.BboxParams(format="coco", label_fields=["category"]),
... )
```

The `image_processor` expects the annotations to be in the following format: `{'image_id': int, 'annotations': List[Dict]}`, where each dictionary is a COCO object annotation.
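For instance, plugging in the values of the first training example shown earlier, a single target in that format would look roughly like this (illustrative, abbreviated to two of the four objects; the keys mirror what the helper defined next produces):

```py
# Illustrative only: one target dict in the format the image_processor expects.
target = {
    "image_id": 15,
    "annotations": [
        {"image_id": 15, "category_id": 4, "isCrowd": 0, "area": 3796, "bbox": [302.0, 109.0, 73.0, 52.0]},
        {"image_id": 15, "category_id": 0, "isCrowd": 0, "area": 152768, "bbox": [160.0, 31.0, 248.0, 616.0]},
    ],
}
```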
Let's add a function to reformat annotations for a single example:

```py
>>> def formatted_anns(image_id, category, area, bbox):
...     annotations = []
...     for i in range(0, len(category)):
...         new_ann = {
...             "image_id": image_id,
...             "category_id": category[i],
...             "isCrowd": 0,
...             "area": area[i],
...             "bbox": list(bbox[i]),
...         }
...         annotations.append(new_ann)
...
...     return annotations
```

Now you can combine the image and annotation transformations to use on a batch of examples:

```py
>>> # transforming a batch
>>> def transform_aug_ann(examples):
...     image_ids = examples["image_id"]
...     images, bboxes, area, categories = [], [], [], []
...     for image, objects in zip(examples["image"], examples["objects"]):
...         image = np.array(image.convert("RGB"))[:, :, ::-1]
...         out = transform(image=image, bboxes=objects["bbox"], category=objects["category"])
...
...         area.append(objects["area"])
...         images.append(out["image"])
...         bboxes.append(out["bboxes"])
...         categories.append(out["category"])
...
...     targets = [
...         {"image_id": id_, "annotations": formatted_anns(id_, cat_, ar_, box_)}
...         for id_, cat_, ar_, box_ in zip(image_ids, categories, area, bboxes)
...     ]
...
...     return image_processor(images=images, annotations=targets, return_tensors="pt")
```
</span> <span class="hljs-keyword">return</span> image_processor(images=images, annotations=targets, return_tensors=<span class="hljs-string">"pt"</span>)</pre></div> <p data-svelte-h="svelte-gc4auf">Apply this preprocessing function to the entire dataset using 🤗 Datasets <a href="https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.with_transform" rel="nofollow">with_transform</a> method. This method applies transformations on the fly when you load an element of the dataset.</p> <p data-svelte-h="svelte-1o4lbgk">At this point, you can check what an example from the dataset looks like after the transformations. You should see a tensor with <code>pixel_values</code>, a tensor with <code>pixel_mask</code>, and <code>labels</code>.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>cppe5[<span class="hljs-string">"train"</span>] = cppe5[<span class="hljs-string">"train"</span>].with_transform(transform_aug_ann) <span class="hljs-meta">&gt;&gt;&gt; </span>cppe5[<span class="hljs-string">"train"</span>][<span class="hljs-number">15</span>] {<span class="hljs-string">'pixel_values'</span>: tensor([[[ <span class="hljs-number">0.9132</span>, <span class="hljs-number">0.9132</span>, <span class="hljs-number">0.9132</span>, ..., -<span class="hljs-number">1.9809</span>, -<span class="hljs-number">1.9809</span>, -<span class="hljs-number">1.9809</span>], [ <span class="hljs-number">0.9132</span>, <span class="hljs-number">0.9132</span>, <span class="hljs-number">0.9132</span>, ..., -<span class="hljs-number">1.9809</span>, -<span class="hljs-number">1.9809</span>, -<span class="hljs-number">1.9809</span>], [ <span class="hljs-number">0.9132</span>, <span class="hljs-number">0.9132</span>, <span class="hljs-number">0.9132</span>, ..., -<span class="hljs-number">1.9638</span>, -<span class="hljs-number">1.9638</span>, -<span class="hljs-number">1.9638</span>], ..., [-<span class="hljs-number">1.5699</span>, -<span class="hljs-number">1.5699</span>, -<span class="hljs-number">1.5699</span>, ..., -<span class="hljs-number">1.9980</span>, -<span class="hljs-number">1.9980</span>, -<span class="hljs-number">1.9980</span>], [-<span class="hljs-number">1.5528</span>, -<span class="hljs-number">1.5528</span>, -<span class="hljs-number">1.5528</span>, ..., -<span class="hljs-number">1.9980</span>, -<span 
class="hljs-number">1.9809</span>, -<span class="hljs-number">1.9809</span>], [-<span class="hljs-number">1.5528</span>, -<span class="hljs-number">1.5528</span>, -<span class="hljs-number">1.5528</span>, ..., -<span class="hljs-number">1.9980</span>, -<span class="hljs-number">1.9809</span>, -<span class="hljs-number">1.9809</span>]], [[ <span class="hljs-number">1.3081</span>, <span class="hljs-number">1.3081</span>, <span class="hljs-number">1.3081</span>, ..., -<span class="hljs-number">1.8431</span>, -<span class="hljs-number">1.8431</span>, -<span class="hljs-number">1.8431</span>], [ <span class="hljs-number">1.3081</span>, <span class="hljs-number">1.3081</span>, <span class="hljs-number">1.3081</span>, ..., -<span class="hljs-number">1.8431</span>, -<span class="hljs-number">1.8431</span>, -<span class="hljs-number">1.8431</span>], [ <span class="hljs-number">1.3081</span>, <span class="hljs-number">1.3081</span>, <span class="hljs-number">1.3081</span>, ..., -<span class="hljs-number">1.8256</span>, -<span class="hljs-number">1.8256</span>, -<span class="hljs-number">1.8256</span>], ..., [-<span class="hljs-number">1.3179</span>, -<span class="hljs-number">1.3179</span>, -<span class="hljs-number">1.3179</span>, ..., -<span class="hljs-number">1.8606</span>, -<span class="hljs-number">1.8606</span>, -<span class="hljs-number">1.8606</span>], [-<span class="hljs-number">1.3004</span>, -<span class="hljs-number">1.3004</span>, -<span class="hljs-number">1.3004</span>, ..., -<span class="hljs-number">1.8606</span>, -<span class="hljs-number">1.8431</span>, -<span class="hljs-number">1.8431</span>], [-<span class="hljs-number">1.3004</span>, -<span class="hljs-number">1.3004</span>, -<span class="hljs-number">1.3004</span>, ..., -<span class="hljs-number">1.8606</span>, -<span class="hljs-number">1.8431</span>, -<span class="hljs-number">1.8431</span>]], [[ <span class="hljs-number">1.4200</span>, <span class="hljs-number">1.4200</span>, <span class="hljs-number">1.4200</span>, ..., -<span class="hljs-number">1.6476</span>, -<span class="hljs-number">1.6476</span>, -<span class="hljs-number">1.6476</span>], [ <span class="hljs-number">1.4200</span>, <span class="hljs-number">1.4200</span>, <span class="hljs-number">1.4200</span>, ..., -<span class="hljs-number">1.6476</span>, -<span class="hljs-number">1.6476</span>, -<span class="hljs-number">1.6476</span>], [ <span class="hljs-number">1.4200</span>, <span class="hljs-number">1.4200</span>, <span class="hljs-number">1.4200</span>, ..., -<span class="hljs-number">1.6302</span>, -<span class="hljs-number">1.6302</span>, -<span class="hljs-number">1.6302</span>], ..., [-<span class="hljs-number">1.0201</span>, -<span class="hljs-number">1.0201</span>, -<span class="hljs-number">1.0201</span>, ..., -<span class="hljs-number">1.5604</span>, -<span class="hljs-number">1.5604</span>, -<span class="hljs-number">1.5604</span>], [-<span class="hljs-number">1.0027</span>, -<span class="hljs-number">1.0027</span>, -<span class="hljs-number">1.0027</span>, ..., -<span class="hljs-number">1.5604</span>, -<span class="hljs-number">1.5430</span>, -<span class="hljs-number">1.5430</span>], [-<span class="hljs-number">1.0027</span>, -<span class="hljs-number">1.0027</span>, -<span class="hljs-number">1.0027</span>, ..., -<span class="hljs-number">1.5604</span>, -<span class="hljs-number">1.5430</span>, -<span class="hljs-number">1.5430</span>]]]), <span class="hljs-string">'pixel_mask'</span>: tensor([[<span class="hljs-number">1</span>, <span 
class="hljs-number">1</span>, <span class="hljs-number">1</span>, ..., <span class="hljs-number">1</span>, <span class="hljs-number">1</span>, <span class="hljs-number">1</span>], [<span class="hljs-number">1</span>, <span class="hljs-number">1</span>, <span class="hljs-number">1</span>, ..., <span class="hljs-number">1</span>, <span class="hljs-number">1</span>, <span class="hljs-number">1</span>], [<span class="hljs-number">1</span>, <span class="hljs-number">1</span>, <span class="hljs-number">1</span>, ..., <span class="hljs-number">1</span>, <span class="hljs-number">1</span>, <span class="hljs-number">1</span>], ..., [<span class="hljs-number">1</span>, <span class="hljs-number">1</span>, <span class="hljs-number">1</span>, ..., <span class="hljs-number">1</span>, <span class="hljs-number">1</span>, <span class="hljs-number">1</span>], [<span class="hljs-number">1</span>, <span class="hljs-number">1</span>, <span class="hljs-number">1</span>, ..., <span class="hljs-number">1</span>, <span class="hljs-number">1</span>, <span class="hljs-number">1</span>], [<span class="hljs-number">1</span>, <span class="hljs-number">1</span>, <span class="hljs-number">1</span>, ..., <span class="hljs-number">1</span>, <span class="hljs-number">1</span>, <span class="hljs-number">1</span>]]), <span class="hljs-string">'labels'</span>: {<span class="hljs-string">'size'</span>: tensor([<span class="hljs-number">800</span>, <span class="hljs-number">800</span>]), <span class="hljs-string">'image_id'</span>: tensor([<span class="hljs-number">756</span>]), <span class="hljs-string">'class_labels'</span>: tensor([<span class="hljs-number">4</span>]), <span class="hljs-string">'boxes'</span>: tensor([[<span class="hljs-number">0.7340</span>, <span class="hljs-number">0.6986</span>, <span class="hljs-number">0.3414</span>, <span class="hljs-number">0.5944</span>]]), <span class="hljs-string">'area'</span>: tensor([<span class="hljs-number">519544.4375</span>]), <span class="hljs-string">'iscrowd'</span>: tensor([<span class="hljs-number">0</span>]), <span class="hljs-string">'orig_size'</span>: tensor([<span class="hljs-number">480</span>, <span class="hljs-number">480</span>])}}</pre></div> <p data-svelte-h="svelte-1ghsv74">You have successfully augmented the individual images and prepared their annotations. However, preprocessing isn’t complete yet. In the final step, create a custom <code>collate_fn</code> to batch images together. 
You have successfully augmented the individual images and prepared their annotations. However, preprocessing isn’t complete yet. In the final step, create a custom `collate_fn` to batch images together. Pad images (which are now `pixel_values`) to the largest image in a batch, and create a corresponding `pixel_mask` to indicate which pixels are real (1) and which are padding (0).

```
>>> def collate_fn(batch):
...     pixel_values = [item["pixel_values"] for item in batch]
...     encoding = image_processor.pad(pixel_values, return_tensors="pt")
...     labels = [item["labels"] for item in batch]
...     batch = {}
...     batch["pixel_values"] = encoding["pixel_values"]
...     batch["pixel_mask"] = encoding["pixel_mask"]
...     batch["labels"] = labels
...     return batch
```
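As an optional sanity check (not part of the original guide), you can wrap the transformed training split in a regular PyTorch `DataLoader` with this `collate_fn` and inspect one padded batch:

```
>>> from torch.utils.data import DataLoader

>>> # optional sanity check: batch a few transformed examples with the custom collate_fn
>>> sanity_loader = DataLoader(cppe5["train"], batch_size=4, shuffle=True, collate_fn=collate_fn)
>>> batch = next(iter(sanity_loader))
>>> batch["pixel_values"].shape, batch["pixel_mask"].shape  # padded to the largest image in this batch
```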
</span> <span class="hljs-keyword">return</span> batch</pre></div> <h2 class="relative group"><a id="training-the-detr-model" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#training-the-detr-model"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-pihf6o">Training the DETR model</span></h2> You have done most of the heavy lifting in the previous sections, so now you are ready to train your model! The images in this dataset are still quite large, even after resizing. This means that finetuning this model will require at least one GPU. <p data-svelte-h="svelte-qp7n2l">Training involves the following steps:</p> <ol data-svelte-h="svelte-pn8cxf"><li>Load the model with <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoModelForObjectDetection">AutoModelForObjectDetection</a> using the same checkpoint as in the preprocessing.</li> <li>Define your training hyperparameters in <a href="/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.TrainingArguments">TrainingArguments</a>.</li> <li>Pass the training arguments to <a href="/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer">Trainer</a> along with the model, dataset, image processor, and data collator.</li> <li>Call <a href="/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.train">train()</a> to finetune your model.</li></ol> <p data-svelte-h="svelte-3tgt16">When loading the model from the same checkpoint that you used for the preprocessing, remember to pass the <code>label2id</code> and <code>id2label</code> maps that you created earlier from the dataset’s metadata. 
```
>>> from transformers import AutoModelForObjectDetection

>>> model = AutoModelForObjectDetection.from_pretrained(
...     checkpoint,
...     id2label=id2label,
...     label2id=label2id,
...     ignore_mismatched_sizes=True,
... )
```

In the [TrainingArguments](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.TrainingArguments) use `output_dir` to specify where to save your model, then configure hyperparameters as you see fit. It is important you do not remove unused columns because this will drop the image column. Without the image column, you can’t create `pixel_values`. For this reason, set `remove_unused_columns` to `False`. If you wish to share your model by pushing to the Hub, set `push_to_hub` to `True` (you must be signed in to Hugging Face to upload your model).
```
>>> from transformers import TrainingArguments

>>> training_args = TrainingArguments(
...     output_dir="detr-resnet-50_finetuned_cppe5",
...     per_device_train_batch_size=8,
...     num_train_epochs=10,
...     fp16=True,
...     save_steps=200,
...     logging_steps=50,
...     learning_rate=1e-5,
...     weight_decay=1e-4,
...     save_total_limit=2,
...     remove_unused_columns=False,
...     push_to_hub=True,
... )
```
Finally, bring everything together, and call [train()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.train):

```
>>> from transformers import Trainer

>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     data_collator=collate_fn,
...     train_dataset=cppe5["train"],
...     tokenizer=image_processor,
... )

>>> trainer.train()
```
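Since `save_steps` and `save_total_limit` are set above, checkpoints are written to `output_dir` during training. If a run is interrupted, you can pick it up again from the latest checkpoint. This is a standard `Trainer` feature, shown here as an optional extra rather than part of the original guide:

```
>>> # optional: resumes from the most recent checkpoint found in output_dir
>>> trainer.train(resume_from_checkpoint=True)
```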
If you have set `push_to_hub` to `True` in the `training_args`, the training checkpoints are pushed to the Hugging Face Hub. Upon training completion, push the final model to the Hub as well by calling the [push_to_hub()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.push_to_hub) method.

```
>>> trainer.push_to_hub()
```
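Once the checkpoint is on the Hub, it can be reloaded from anywhere with `from_pretrained`. A minimal sketch, where the repository name below is a placeholder for your own Hub username:

```
>>> from transformers import AutoImageProcessor, AutoModelForObjectDetection

>>> # "my-username" is a placeholder; substitute the account you pushed the model to
>>> image_processor = AutoImageProcessor.from_pretrained("my-username/detr-resnet-50_finetuned_cppe5")
>>> model = AutoModelForObjectDetection.from_pretrained("my-username/detr-resnet-50_finetuned_cppe5")
```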
## Evaluate

Object detection models are commonly evaluated with a set of [COCO-style metrics](https://cocodataset.org/#detection-eval). You can use one of the existing metrics implementations, but here you'll use the one from `torchvision` to evaluate the final model that you pushed to the Hub.

To use the `torchvision` evaluator, you’ll need to prepare a ground truth COCO dataset. The API to build a COCO dataset requires the data to be stored in a certain format, so you’ll need to save images and annotations to disk first. Just like when you prepared your data for training, the annotations from the `cppe5["test"]` need to be formatted. However, images should stay as they are.

The evaluation step requires a bit of work, but it can be split in three major steps. First, prepare the `cppe5["test"]` set: format the annotations and save the data to disk.

```
>>> import json


>>> # format annotations the same as for training, no need for data augmentation
>>> def val_formatted_anns(image_id, objects):
...     annotations = []
...     for i in range(0, len(objects["id"])):
...         new_ann = {
...             "id": objects["id"][i],
...             "category_id": objects["category"][i],
...             "iscrowd": 0,
...             "image_id": image_id,
...             "area": objects["area"][i],
...             "bbox": objects["bbox"][i],
...         }
...         annotations.append(new_ann)

...     return annotations


>>> # Save images and annotations into the files torchvision.datasets.CocoDetection expects
>>> def save_cppe5_annotation_file_images(cppe5):
...     output_json = {}
...     path_output_cppe5 = f"{os.getcwd()}/cppe5/"

...     if not os.path.exists(path_output_cppe5):
...         os.makedirs(path_output_cppe5)

...     path_anno = os.path.join(path_output_cppe5, "cppe5_ann.json")
...     categories_json = [{"supercategory": "none", "id": id, "name": id2label[id]} for id in id2label]
...     output_json["images"] = []
...     output_json["annotations"] = []
...     for example in cppe5:
...         ann = val_formatted_anns(example["image_id"], example["objects"])
...         output_json["images"].append(
...             {
...                 "id": example["image_id"],
...                 "width": example["image"].width,
...                 "height": example["image"].height,
...                 "file_name": f"{example['image_id']}.png",
...             }
...         )
...         output_json["annotations"].extend(ann)
...     output_json["categories"] = categories_json

...     with open(path_anno, "w") as file:
...         json.dump(output_json, file, ensure_ascii=False, indent=4)

...     for im, img_id in zip(cppe5["image"], cppe5["image_id"]):
...         path_img = os.path.join(path_output_cppe5, f"{img_id}.png")
...         im.save(path_img)

...     return path_output_cppe5, path_anno
```
</span> <span class="hljs-keyword">return</span> path_output_cppe5, path_anno</pre></div> <p data-svelte-h="svelte-4e01dv">Next, prepare an instance of a <code>CocoDetection</code> class that can be used with <code>cocoevaluator</code>.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torchvision <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">class</span> <span class="hljs-title class_">CocoDetection</span>(torchvision.datasets.CocoDetection): <span class="hljs-meta">... </span> <span class="hljs-keyword">def</span> <span class="hljs-title function_">__init__</span>(<span class="hljs-params">self, img_folder, image_processor, ann_file</span>): <span class="hljs-meta">... </span> <span class="hljs-built_in">super</span>().__init__(img_folder, ann_file) <span class="hljs-meta">... </span> self.image_processor = image_processor <span class="hljs-meta">... </span> <span class="hljs-keyword">def</span> <span class="hljs-title function_">__getitem__</span>(<span class="hljs-params">self, idx</span>): <span class="hljs-meta">... </span> <span class="hljs-comment"># read in PIL image and target in COCO format</span> <span class="hljs-meta">... </span> img, target = <span class="hljs-built_in">super</span>(CocoDetection, self).__getitem__(idx) <span class="hljs-meta">... </span> <span class="hljs-comment"># preprocess image and target: converting target to DETR format,</span> <span class="hljs-meta">... </span> <span class="hljs-comment"># resizing + normalization of both image and target)</span> <span class="hljs-meta">... </span> image_id = self.ids[idx] <span class="hljs-meta">... </span> target = {<span class="hljs-string">"image_id"</span>: image_id, <span class="hljs-string">"annotations"</span>: target} <span class="hljs-meta">... </span> encoding = self.image_processor(images=img, annotations=target, return_tensors=<span class="hljs-string">"pt"</span>) <span class="hljs-meta">... </span> pixel_values = encoding[<span class="hljs-string">"pixel_values"</span>].squeeze() <span class="hljs-comment"># remove batch dimension</span> <span class="hljs-meta">... </span> target = encoding[<span class="hljs-string">"labels"</span>][<span class="hljs-number">0</span>] <span class="hljs-comment"># remove batch dimension</span> <span class="hljs-meta">... 
</span> <span class="hljs-keyword">return</span> {<span class="hljs-string">"pixel_values"</span>: pixel_values, <span class="hljs-string">"labels"</span>: target} <span class="hljs-meta">&gt;&gt;&gt; </span>im_processor = AutoImageProcessor.from_pretrained(<span class="hljs-string">"devonho/detr-resnet-50_finetuned_cppe5"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>path_output_cppe5, path_anno = save_cppe5_annotation_file_images(cppe5[<span class="hljs-string">"test"</span>]) <span class="hljs-meta">&gt;&gt;&gt; </span>test_ds_coco_format = CocoDetection(path_output_cppe5, im_processor, path_anno)</pre></div> <p data-svelte-h="svelte-xiphfy">Finally, load the metrics and run the evaluation.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> evaluate <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> tqdm <span class="hljs-keyword">import</span> tqdm <span class="hljs-meta">&gt;&gt;&gt; </span>model = AutoModelForObjectDetection.from_pretrained(<span class="hljs-string">"devonho/detr-resnet-50_finetuned_cppe5"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>module = evaluate.load(<span class="hljs-string">"ybelkada/cocoevaluate"</span>, coco=test_ds_coco_format.coco) <span class="hljs-meta">&gt;&gt;&gt; </span>val_dataloader = torch.utils.data.DataLoader( <span class="hljs-meta">... </span> test_ds_coco_format, batch_size=<span class="hljs-number">8</span>, shuffle=<span class="hljs-literal">False</span>, num_workers=<span class="hljs-number">4</span>, collate_fn=collate_fn <span class="hljs-meta">... </span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">with</span> torch.no_grad(): <span class="hljs-meta">... </span> <span class="hljs-keyword">for</span> idx, batch <span class="hljs-keyword">in</span> <span class="hljs-built_in">enumerate</span>(tqdm(val_dataloader)): <span class="hljs-meta">... </span> pixel_values = batch[<span class="hljs-string">"pixel_values"</span>] <span class="hljs-meta">... </span> pixel_mask = batch[<span class="hljs-string">"pixel_mask"</span>] <span class="hljs-meta">... </span> labels = [ <span class="hljs-meta">... 
```
>>> import evaluate
>>> from tqdm import tqdm

>>> model = AutoModelForObjectDetection.from_pretrained("devonho/detr-resnet-50_finetuned_cppe5")
>>> module = evaluate.load("ybelkada/cocoevaluate", coco=test_ds_coco_format.coco)
>>> val_dataloader = torch.utils.data.DataLoader(
...     test_ds_coco_format, batch_size=8, shuffle=False, num_workers=4, collate_fn=collate_fn
... )

>>> with torch.no_grad():
...     for idx, batch in enumerate(tqdm(val_dataloader)):
...         pixel_values = batch["pixel_values"]
...         pixel_mask = batch["pixel_mask"]

...         labels = [
...             {k: v for k, v in t.items()} for t in batch["labels"]
...         ]  # these are in DETR format, resized + normalized

...         # forward pass
...         outputs = model(pixel_values=pixel_values, pixel_mask=pixel_mask)

...         orig_target_sizes = torch.stack([target["orig_size"] for target in labels], dim=0)
...         results = im_processor.post_process(outputs, orig_target_sizes)  # convert outputs of model to COCO api

...         module.add(prediction=results, reference=labels)
...         del batch

>>> results = module.compute()
>>> print(results)
Accumulating evaluation results...
DONE (t=0.08s).
IoU metric: bbox
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.352
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.681
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.292
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.168
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.208
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.429
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.274
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.484
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.501
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.191
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.323
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.590
```

These results can be further improved by adjusting the hyperparameters in [TrainingArguments](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.TrainingArguments). Give it a go!
## Inference

Now that you have finetuned a DETR model, evaluated it, and uploaded it to the Hugging Face Hub, you can use it for inference. The simplest way to try out your finetuned model for inference is to use it in a [Pipeline](/docs/transformers/v4.34.0/en/main_classes/pipelines#transformers.Pipeline). Instantiate a pipeline for object detection with your model, and pass an image to it:

```
>>> from transformers import pipeline
>>> import requests

>>> url = "https://i.imgur.com/2lnWoly.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> obj_detector = pipeline("object-detection", model="devonho/detr-resnet-50_finetuned_cppe5")
>>> obj_detector(image)
```
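The pipeline returns a list of dictionaries, one per detection, each with a score, a label, and a box. The object-detection pipeline also accepts a `threshold` argument for the score cutoff, and it can take an image URL or local path directly; the value below is just an example:

```
>>> # a URL or local path works too; a lower threshold keeps more detections
>>> obj_detector(url, threshold=0.5)
```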
class="hljs-meta">&gt;&gt;&gt; </span>model = AutoModelForObjectDetection.from_pretrained(<span class="hljs-string">"devonho/detr-resnet-50_finetuned_cppe5"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">with</span> torch.no_grad(): <span class="hljs-meta">... </span> inputs = image_processor(images=image, return_tensors=<span class="hljs-string">"pt"</span>) <span class="hljs-meta">... </span> outputs = model(**inputs) <span class="hljs-meta">... </span> target_sizes = torch.tensor([image.size[::-<span class="hljs-number">1</span>]]) <span class="hljs-meta">... </span> results = image_processor.post_process_object_detection(outputs, threshold=<span class="hljs-number">0.5</span>, target_sizes=target_sizes)[<span class="hljs-number">0</span>] <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">for</span> score, label, box <span class="hljs-keyword">in</span> <span class="hljs-built_in">zip</span>(results[<span class="hljs-string">"scores"</span>], results[<span class="hljs-string">"labels"</span>], results[<span class="hljs-string">"boxes"</span>]): <span class="hljs-meta">... </span> box = [<span class="hljs-built_in">round</span>(i, <span class="hljs-number">2</span>) <span class="hljs-keyword">for</span> i <span class="hljs-keyword">in</span> box.tolist()] <span class="hljs-meta">... </span> <span class="hljs-built_in">print</span>( <span class="hljs-meta">... </span> <span class="hljs-string">f"Detected <span class="hljs-subst">{model.config.id2label[label.item()]}</span> with confidence "</span> <span class="hljs-meta">... </span> <span class="hljs-string">f"<span class="hljs-subst">{<span class="hljs-built_in">round</span>(score.item(), <span class="hljs-number">3</span>)}</span> at location <span class="hljs-subst">{box}</span>"</span> <span class="hljs-meta">... 
</span> ) Detected Coverall <span class="hljs-keyword">with</span> confidence <span class="hljs-number">0.566</span> at location [<span class="hljs-number">1215.32</span>, <span class="hljs-number">147.38</span>, <span class="hljs-number">4401.81</span>, <span class="hljs-number">3227.08</span>] Detected Mask <span class="hljs-keyword">with</span> confidence <span class="hljs-number">0.584</span> at location [<span class="hljs-number">2449.06</span>, <span class="hljs-number">823.19</span>, <span class="hljs-number">3256.43</span>, <span class="hljs-number">1413.9</span>]</pre></div> <p data-svelte-h="svelte-7zeucu">Let’s plot the result:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>draw = ImageDraw.Draw(image) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">for</span> score, label, box <span class="hljs-keyword">in</span> <span class="hljs-built_in">zip</span>(results[<span class="hljs-string">"scores"</span>], results[<span class="hljs-string">"labels"</span>], results[<span class="hljs-string">"boxes"</span>]): <span class="hljs-meta">... </span> box = [<span class="hljs-built_in">round</span>(i, <span class="hljs-number">2</span>) <span class="hljs-keyword">for</span> i <span class="hljs-keyword">in</span> box.tolist()] <span class="hljs-meta">... </span> x, y, x2, y2 = <span class="hljs-built_in">tuple</span>(box) <span class="hljs-meta">... </span> draw.rectangle((x, y, x2, y2), outline=<span class="hljs-string">"red"</span>, width=<span class="hljs-number">1</span>) <span class="hljs-meta">... 
```
>>> draw = ImageDraw.Draw(image)

>>> for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
...     box = [round(i, 2) for i in box.tolist()]
...     x, y, x2, y2 = tuple(box)
...     draw.rectangle((x, y, x2, y2), outline="red", width=1)
...     draw.text((x, y), model.config.id2label[label.item()], fill="white")

>>> image
```

![Object detection result on a new image](https://i.imgur.com/4QZnf9A.png)
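If you want to keep the annotated image, you can write it to disk with PIL; the file name here is only an example:

```
>>> image.save("cppe5_detection_result.png")  # file name is arbitrary
```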
dark:hover:text-gray-300" id="nav-evaluate"><wbr>Evaluate</a> <a href="#inference" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-inference"><wbr>Inference</a> </nav></div></div></div> <div id="doc-footer"></div></main> </div> <script> import("/front/build/kube-5e23f38/index.js"); window.moonSha = "kube-5e23f38/"; window.hubConfig = JSON.parse(`{"features":{"signupDisabled":false},"sshGitUrl":"git@hf.co","moonHttpUrl":"https://huggingface.co","captchaApiKey":"bd5f2066-93dc-4bdd-a64b-a24646ca3859","stripePublicKey":"pk_live_x2tdjFXBCvXo2FFmMybezpeM00J6gPCAAc","environment":"production","userAgent":"HuggingFace (production)"}`); </script> <!-- Stripe --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://js.stripe.com/v3/"; script.async = true; document.head.appendChild(script); } </script> <!-- Google analytics v4 --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL"; script.async = true; document.head.appendChild(script); window.dataLayer = window.dataLayer || []; function gtag() { if (window.dataLayer !== undefined) { window.dataLayer.push(arguments); } } gtag("js", new Date()); gtag("config", "G-8Q63TH4CSL", { page_path: "/docs/transformers/v4.34.0/en/tasks/object_detection" }); /// ^ See https://developers.google.com/analytics/devguides/collection/gtagjs/pages gtag("consent", "default", { ad_storage: "denied", analytics_storage: "denied" }); /// ^ See https://developers.google.com/tag-platform/gtagjs/reference#consent /// TODO: ask the user for their consent and update this with gtag('consent', 'update') } </script> <!-- Google Analytics v3 (deprecated) --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { (function (i, s, o, g, r, a, m) { i["GoogleAnalyticsObject"] = r; (i[r] = i[r] || function () { (i[r].q = i[r].q || []).push(arguments); }), (i[r].l = 1 * new Date()); (a = s.createElement(o)), (m = s.getElementsByTagName(o)[0]); a.async = 1; a.src = g; m.parentNode.insertBefore(a, m); })(window, document, "script", "https://www.google-analytics.com/analytics.js", "ganalytics"); ganalytics("create", "UA-83738774-2", "auto"); ganalytics("send", "pageview", "/docs/transformers/v4.34.0/en/tasks/object_detection"); } </script> <iframe name="__privateStripeMetricsController6900" frameborder="0" allowtransparency="true" scrolling="no" role="presentation" allow="payment *" src="https://js.stripe.com/v3/m-outer-27c67c0d52761104439bb051c7856ab1.html#url=https%3A%2F%2Fhuggingface.co%2Fdocs%2Ftransformers%2Fv4.34.0%2Fen%2Ftasks%2Fobject_detection&amp;title=Object%20detection&amp;referrer=&amp;muid=NA&amp;sid=NA&amp;version=6&amp;preview=false" aria-hidden="true" tabindex="-1" style="border: none !important; margin: 0px !important; padding: 0px !important; width: 1px !important; min-width: 100% !important; overflow: hidden !important; display: block !important; visibility: hidden !important; position: fixed !important; height: 1px !important; pointer-events: none !important; user-select: none !important;"></iframe></body></html>
2023-10-05T13:33:45.839Z
Semantic segmentation
https://huggingface.co/docs/transformers/v4.34.0/en/tasks/semantic_segmentation
# Semantic segmentation

Semantic segmentation assigns a label or class to each individual pixel of an image. There are several types of segmentation, and in the case of semantic segmentation, no distinction is made between separate instances of the same object: two objects of the same class receive the same label (for example, "car" instead of "car-1" and "car-2"). Common real-world applications of semantic segmentation include training self-driving cars to identify pedestrians and important traffic information, identifying cells and abnormalities in medical imagery, and monitoring environmental changes from satellite imagery.

This guide will show you how to:

1. Finetune [SegFormer](https://huggingface.co/docs/transformers/main/en/model_doc/segformer#segformer) on the [SceneParse150](https://huggingface.co/datasets/scene_parse_150) dataset.
2. Use your finetuned model for inference.

The task illustrated in this tutorial is supported by the following model architectures: [BEiT](../model_doc/beit), [Data2VecVision](../model_doc/data2vec-vision), [DPT](../model_doc/dpt), [MobileNetV2](../model_doc/mobilenet_v2), [MobileViT](../model_doc/mobilevit), [MobileViTV2](../model_doc/mobilevitv2), [SegFormer](../model_doc/segformer), [UPerNet](../model_doc/upernet)

Before you begin, make sure you have all the necessary libraries installed:

```
pip install -q datasets transformers evaluate
```

We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:

```
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```

## Load SceneParse150 dataset

Start by loading a smaller subset of the SceneParse150 dataset from the 🤗 Datasets library. This'll give you a chance to experiment and make sure everything works before spending more time training on the full dataset.

```
>>> from datasets import load_dataset

>>> ds = load_dataset("scene_parse_150", split="train[:50]")
```

Split the dataset's `train` split into a train and test set with the [train\_test\_split](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.train_test_split) method:

```
>>> ds = ds.train_test_split(test_size=0.2)
>>> train_ds = ds["train"]
>>> test_ds = ds["test"]
```

Then take a look at an example:

```
>>> train_ds[0]
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=512x683 at 0x7F9B0C201F90>,
 'annotation': <PIL.PngImagePlugin.PngImageFile image mode=L size=512x683 at 0x7F9B0C201DD0>,
 'scene_category': 368}
```

- `image`: a PIL image of the scene.
- `annotation`: a PIL image of the segmentation map, which is also the model's target.
- `scene_category`: a category id that describes the image scene like "kitchen" or "office".

In this guide, you'll only need `image` and `annotation`, both of which are PIL images. You'll also want to create a dictionary that maps a label id to a label class, which will be useful when you set up the model later.
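Before downloading that mapping from the Hub, you can optionally peek at which label ids actually occur in a single annotation map; a minimal sketch (using `numpy`, which the libraries above already pull in — the exact ids you see will vary from image to image):

```
>>> import numpy as np

>>> # Pixel values in the annotation are raw class ids: 0 marks background/unlabeled
>>> # pixels, and 1-150 are the dataset's semantic classes.
>>> np.unique(np.array(train_ds[0]["annotation"]))
```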
Download the mappings from the Hub and create the `id2label` and `label2id` dictionaries:

```
>>> import json
>>> from huggingface_hub import cached_download, hf_hub_url

>>> repo_id = "huggingface/label-files"
>>> filename = "ade20k-id2label.json"
>>> id2label = json.load(open(cached_download(hf_hub_url(repo_id, filename, repo_type="dataset")), "r"))
>>> id2label = {int(k): v for k, v in id2label.items()}
>>> label2id = {v: k for k, v in id2label.items()}
>>> num_labels = len(id2label)
```

## Preprocess

The next step is to load a SegFormer image processor to prepare the images and annotations for the model. Some datasets, like this one, use the zero-index as the background class. However, the background class isn't actually included in the 150 classes, so you'll need to set `reduce_labels=True` to subtract one from all the labels. The zero-index is replaced by `255` so it's ignored by SegFormer's loss function:

```
>>> from transformers import AutoImageProcessor

>>> checkpoint = "nvidia/mit-b0"
>>> image_processor = AutoImageProcessor.from_pretrained(checkpoint, reduce_labels=True)
```

It is common to apply some data augmentations to an image dataset to make a model more robust against overfitting. In this guide, you'll use the [`ColorJitter`](https://pytorch.org/vision/stable/generated/torchvision.transforms.ColorJitter.html) function from [torchvision](https://pytorch.org/vision/stable/index.html) to randomly change the color properties of an image, but you can also use any image library you like.

```
>>> from torchvision.transforms import ColorJitter

>>> jitter = ColorJitter(brightness=0.25, contrast=0.25, saturation=0.25, hue=0.1)
```

Now create two preprocessing functions to prepare the images and annotations for the model. These functions convert the images into `pixel_values` and annotations to `labels`. For the training set, `jitter` is applied before providing the images to the image processor. For the test set, the image processor crops and normalizes the `images`, and only crops the `labels` because no data augmentation is applied during testing.

```
>>> def train_transforms(example_batch):
...     images = [jitter(x) for x in example_batch["image"]]
...     labels = [x for x in example_batch["annotation"]]
...     inputs = image_processor(images, labels)
...     return inputs


>>> def val_transforms(example_batch):
...     images = [x for x in example_batch["image"]]
...     labels = [x for x in example_batch["annotation"]]
...     inputs = image_processor(images, labels)
...     return inputs
```

To apply the `jitter` over the entire dataset, use the 🤗 Datasets [set\_transform](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.set_transform) function. The transform is applied on the fly which is faster and consumes less disk space:

```
>>> train_ds.set_transform(train_transforms)
>>> test_ds.set_transform(val_transforms)
```
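You can spot-check the on-the-fly transform by indexing a single training example; a small optional sketch (the exact spatial size depends on the image processor's default `size` for this checkpoint, so treat the shapes as illustrative):

```
>>> sample = train_ds[0]
>>> sample.keys()                  # only what train_transforms returns: pixel_values and labels
>>> sample["pixel_values"].shape   # channels-first image, e.g. (3, 512, 512)
>>> sample["labels"].shape         # 2D label map with matching height and width
```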
If you are working in TensorFlow instead, the augmentation idea is the same: in this guide, you'll use [`tf.image`](https://www.tensorflow.org/api_docs/python/tf/image) to randomly change the color properties of an image, but you can also use any image library you like. Define two separate transformation functions:

- training data transformations that include image augmentation
- validation data transformations that only transpose the images, since computer vision models in 🤗 Transformers expect channels-first layout

```
>>> import tensorflow as tf


>>> def aug_transforms(image):
...     image = tf.keras.utils.img_to_array(image)
...     image = tf.image.random_brightness(image, 0.25)
...     image = tf.image.random_contrast(image, 0.5, 2.0)
...     image = tf.image.random_saturation(image, 0.75, 1.25)
...     image = tf.image.random_hue(image, 0.1)
...     image = tf.transpose(image, (2, 0, 1))
...     return image


>>> def transforms(image):
...     image = tf.keras.utils.img_to_array(image)
...     image = tf.transpose(image, (2, 0, 1))
...     return image
```

Next, create two preprocessing functions to prepare batches of images and annotations for the model. These functions apply the image transformations and use the earlier loaded `image_processor` to convert the images into `pixel_values` and annotations to `labels`. `ImageProcessor` also takes care of resizing and normalizing the images.

```
>>> def train_transforms(example_batch):
...     images = [aug_transforms(x.convert("RGB")) for x in example_batch["image"]]
...     labels = [x for x in example_batch["annotation"]]
...     inputs = image_processor(images, labels)
...     return inputs


>>> def val_transforms(example_batch):
...     images = [transforms(x.convert("RGB")) for x in example_batch["image"]]
...     labels = [x for x in example_batch["annotation"]]
...     inputs = image_processor(images, labels)
...     return inputs
```

To apply the preprocessing transformations over the entire dataset, use the 🤗 Datasets [set\_transform](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.set_transform) function. The transform is applied on the fly which is faster and consumes less disk space:

```
>>> train_ds.set_transform(train_transforms)
>>> test_ds.set_transform(val_transforms)
```

## Evaluate

Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load the [mean Intersection over Union](https://huggingface.co/spaces/evaluate-metric/mean_iou) (IoU) metric (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric):

```
>>> import evaluate

>>> metric = evaluate.load("mean_iou")
```
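To build intuition for the metric: for a single class, IoU is the overlap between the predicted and ground-truth pixel masks divided by their union, and mean IoU averages this ratio over all classes. A tiny illustrative example with toy masks:

```
>>> import numpy as np

>>> pred = np.array([[1, 1], [0, 0]], dtype=bool)  # predicted mask for one class
>>> true = np.array([[1, 0], [1, 0]], dtype=bool)  # ground-truth mask for the same class
>>> (pred & true).sum() / (pred | true).sum()      # intersection / union = 1 / 3
```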
method="bilinear", ... ) ... pred_labels = tf.argmax(logits_resized, axis=-1) ... metrics = metric.compute( ... predictions=pred_labels, ... references=labels, ... num_labels=num_labels, ... ignore_index=-1, ... reduce_labels=image_processor.do_reduce_labels, ... ) ... per_category_accuracy = metrics.pop("per_category_accuracy").tolist() ... per_category_iou = metrics.pop("per_category_iou").tolist() ... metrics.update({f"accuracy_{id2label[i]}": v for i, v in enumerate(per_category_accuracy)}) ... metrics.update({f"iou_{id2label[i]}": v for i, v in enumerate(per_category_iou)}) ... return {"val_" + k: v for k, v in metrics.items()} ``` Your `compute_metrics` function is ready to go now, and you’ll return to it when you setup your training. ## Train If you aren’t familiar with finetuning a model with the [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer), take a look at the basic tutorial [here](../training#finetune-with-trainer)! You’re ready to start training your model now! Load SegFormer with [AutoModelForSemanticSegmentation](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoModelForSemanticSegmentation), and pass the model the mapping between label ids and label classes: ``` >>> from transformers import AutoModelForSemanticSegmentation, TrainingArguments, Trainer >>> model = AutoModelForSemanticSegmentation.from_pretrained(checkpoint, id2label=id2label, label2id=label2id) ``` At this point, only three steps remain: 1. Define your training hyperparameters in [TrainingArguments](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.TrainingArguments). It is important you don’t remove unused columns because this’ll drop the `image` column. Without the `image` column, you can’t create `pixel_values`. Set `remove_unused_columns=False` to prevent this behavior! The only other required parameter is `output_dir` which specifies where to save your model. You’ll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) will evaluate the IoU metric and save the training checkpoint. 2. Pass the training arguments to [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) along with the model, dataset, tokenizer, data collator, and `compute_metrics` function. 3. Call [train()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.train) to finetune your model. ``` >>> training_args = TrainingArguments( ... output_dir="segformer-b0-scene-parse-150", ... learning_rate=6e-5, ... num_train_epochs=50, ... per_device_train_batch_size=2, ... per_device_eval_batch_size=2, ... save_total_limit=3, ... evaluation_strategy="steps", ... save_strategy="steps", ... save_steps=20, ... eval_steps=20, ... logging_steps=1, ... eval_accumulation_steps=5, ... remove_unused_columns=False, ... push_to_hub=True, ... ) >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=train_ds, ... eval_dataset=test_ds, ... compute_metrics=compute_metrics, ... 
At this point, only three steps remain:

1. Define your training hyperparameters in [TrainingArguments](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.TrainingArguments). It is important you don't remove unused columns because this'll drop the `image` column. Without the `image` column, you can't create `pixel_values`. Set `remove_unused_columns=False` to prevent this behavior! The only other required parameter is `output_dir` which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) will evaluate the IoU metric and save the training checkpoint.
2. Pass the training arguments to [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) along with the model, dataset, tokenizer, data collator, and `compute_metrics` function.
3. Call [train()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.train) to finetune your model.

```
>>> training_args = TrainingArguments(
...     output_dir="segformer-b0-scene-parse-150",
...     learning_rate=6e-5,
...     num_train_epochs=50,
...     per_device_train_batch_size=2,
...     per_device_eval_batch_size=2,
...     save_total_limit=3,
...     evaluation_strategy="steps",
...     save_strategy="steps",
...     save_steps=20,
...     eval_steps=20,
...     logging_steps=1,
...     eval_accumulation_steps=5,
...     remove_unused_columns=False,
...     push_to_hub=True,
... )

>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=train_ds,
...     eval_dataset=test_ds,
...     compute_metrics=compute_metrics,
... )

>>> trainer.train()
```

Once training is completed, share your model to the Hub with the [push\_to\_hub()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.push_to_hub) method so everyone can use your model:

```
>>> trainer.push_to_hub()
```

If you are unfamiliar with fine-tuning a model with Keras, check out the [basic tutorial](./training#train-a-tensorflow-model-with-keras) first!

To fine-tune a model in TensorFlow, follow these steps:

1. Define the training hyperparameters, and set up an optimizer and a learning rate schedule.
2. Instantiate a pretrained model.
3. Convert a 🤗 Dataset to a `tf.data.Dataset`.
4. Compile your model.
5. Add callbacks to calculate metrics and upload your model to 🤗 Hub.
6. Use the `fit()` method to run the training.

Start by defining the hyperparameters, optimizer and learning rate schedule:

```
>>> from transformers import create_optimizer

>>> batch_size = 2
>>> num_epochs = 50
>>> num_train_steps = len(train_ds) * num_epochs
>>> learning_rate = 6e-5
>>> weight_decay_rate = 0.01

>>> optimizer, lr_schedule = create_optimizer(
...     init_lr=learning_rate,
...     num_train_steps=num_train_steps,
...     weight_decay_rate=weight_decay_rate,
...     num_warmup_steps=0,
... )
```

Then, load SegFormer with [TFAutoModelForSemanticSegmentation](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.TFAutoModelForSemanticSegmentation) along with the label mappings, and compile it with the optimizer. Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to:

```
>>> from transformers import TFAutoModelForSemanticSegmentation

>>> model = TFAutoModelForSemanticSegmentation.from_pretrained(
...     checkpoint,
...     id2label=id2label,
...     label2id=label2id,
... )
>>> model.compile(optimizer=optimizer)
```

Convert your datasets to the `tf.data.Dataset` format using the [to\_tf\_dataset](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.to_tf_dataset) and the [DefaultDataCollator](/docs/transformers/v4.34.0/en/main_classes/data_collator#transformers.DefaultDataCollator):

```
>>> from transformers import DefaultDataCollator

>>> data_collator = DefaultDataCollator(return_tensors="tf")

>>> tf_train_dataset = train_ds.to_tf_dataset(
...     columns=["pixel_values", "label"],
...     shuffle=True,
...     batch_size=batch_size,
...     collate_fn=data_collator,
... )

>>> tf_eval_dataset = test_ds.to_tf_dataset(
...     columns=["pixel_values", "label"],
...     shuffle=True,
...     batch_size=batch_size,
...     collate_fn=data_collator,
... )
```

To compute the accuracy from the predictions and push your model to the 🤗 Hub, use [Keras callbacks](../main_classes/keras_callbacks). Pass your `compute_metrics` function to [KerasMetricCallback](/docs/transformers/v4.34.0/en/main_classes/keras_callbacks#transformers.KerasMetricCallback), and use the [PushToHubCallback](/docs/transformers/v4.34.0/en/main_classes/keras_callbacks#transformers.PushToHubCallback) to upload the model:

```
>>> from transformers.keras_callbacks import KerasMetricCallback, PushToHubCallback

>>> metric_callback = KerasMetricCallback(
...     metric_fn=compute_metrics, eval_dataset=tf_eval_dataset, batch_size=batch_size, label_cols=["labels"]
... )

>>> push_to_hub_callback = PushToHubCallback(output_dir="scene_segmentation", tokenizer=image_processor)

>>> callbacks = [metric_callback, push_to_hub_callback]
```

Finally, you are ready to train your model!
Call `fit()` with your training and validation datasets, the number of epochs, and your callbacks to fine-tune the model:

```
>>> model.fit(
...     tf_train_dataset,
...     validation_data=tf_eval_dataset,
...     callbacks=callbacks,
...     epochs=num_epochs,
... )
```

Congratulations! You have fine-tuned your model and shared it on the 🤗 Hub. You can now use it for inference!

## Inference

Great, now that you've finetuned a model, you can use it for inference!

Load an image for inference:

```
>>> image = ds[0]["image"]
>>> image
```

![Image of bedroom](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/semantic-seg-image.png)

The simplest way to try out your finetuned model for inference is to use it in a [pipeline()](/docs/transformers/v4.34.0/en/main_classes/pipelines#transformers.pipeline). Instantiate a `pipeline` for image segmentation with your model, and pass your image to it:

```
>>> from transformers import pipeline

>>> segmenter = pipeline("image-segmentation", model="my_awesome_seg_model")
>>> segmenter(image)
[{'score': None, 'label': 'wall', 'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062690>},
 {'score': None, 'label': 'sky', 'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062A50>},
 {'score': None, 'label': 'floor', 'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062B50>},
 {'score': None, 'label': 'ceiling', 'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062A10>},
 {'score': None, 'label': 'bed ', 'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062E90>},
 {'score': None, 'label': 'windowpane', 'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062390>},
 {'score': None, 'label': 'cabinet', 'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062550>},
 {'score': None, 'label': 'chair', 'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062D90>},
 {'score': None, 'label': 'armchair', 'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062E10>}]
```

You can also manually replicate the results of the `pipeline` if you'd like. Process the image with an image processor and place the `pixel_values` on a GPU:

```
>>> device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
>>> encoding = image_processor(image, return_tensors="pt")
>>> pixel_values = encoding.pixel_values.to(device)
```

Pass your input to the model and return the `logits`:

```
>>> outputs = model(pixel_values=pixel_values)
>>> logits = outputs.logits.cpu()
```

Next, rescale the logits to the original image size:

```
>>> upsampled_logits = nn.functional.interpolate(
...     logits,
...     # PIL's `image.size` is (width, height); interpolate expects (height, width)
...     size=image.size[::-1],
...     mode="bilinear",
...     align_corners=False,
... )

>>> pred_seg = upsampled_logits.argmax(dim=1)[0]
```
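To see which classes the model found and roughly how much of the image each one covers, you can map the ids in `pred_seg` back to names; a small optional sketch (it reuses the `id2label` dictionary created earlier):

```
>>> ids, counts = pred_seg.unique(return_counts=True)
>>> for label_id, num_pixels in zip(ids.tolist(), counts.tolist()):
...     print(id2label[label_id], num_pixels)
```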
Load an image processor to preprocess the image and return the input as TensorFlow tensors:

```
>>> from transformers import AutoImageProcessor

>>> image_processor = AutoImageProcessor.from_pretrained("MariaK/scene_segmentation")
>>> inputs = image_processor(image, return_tensors="tf")
```

Pass your input to the model and return the `logits`:

```
>>> from transformers import TFAutoModelForSemanticSegmentation

>>> model = TFAutoModelForSemanticSegmentation.from_pretrained("MariaK/scene_segmentation")
>>> logits = model(**inputs).logits
```

Next, rescale the logits to the original image size and apply argmax on the class dimension:

```
>>> logits = tf.transpose(logits, [0, 2, 3, 1])

>>> upsampled_logits = tf.image.resize(
...     logits,
...     # PIL's `image.size` is (width, height); resize expects (height, width)
...     image.size[::-1],
... )

>>> pred_seg = tf.math.argmax(upsampled_logits, axis=-1)[0]
```

To visualize the results, load the [dataset color palette](https://github.com/tensorflow/models/blob/3f1ca33afe3c1631b733ea7e40c294273b9e406d/research/deeplab/utils/get_dataset_colormap.py#L51) as `ade_palette()`, which maps each class to its RGB values. Then you can combine and plot your image and the predicted segmentation map:

```
>>> import matplotlib.pyplot as plt
>>> import numpy as np

>>> color_seg = np.zeros((pred_seg.shape[0], pred_seg.shape[1], 3), dtype=np.uint8)
>>> palette = np.array(ade_palette())
>>> for label, color in enumerate(palette):
...     color_seg[pred_seg == label, :] = color
>>> color_seg = color_seg[..., ::-1]  # convert to BGR

>>> img = np.array(image) * 0.5 + color_seg * 0.5  # overlay the image with the segmentation map
>>> img = img.astype(np.uint8)

>>> plt.figure(figsize=(15, 10))
>>> plt.imshow(img)
>>> plt.show()
```

![Image of bedroom overlaid with segmentation map](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/semantic-seg-preds.png)
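The guide assumes you have copied the full 150-entry `ade_palette()` from the file linked above. If you just want a quick placeholder while experimenting, a seeded random palette also works, though the colors won't match the official ADE20K colormap:

```
>>> import numpy as np

>>> def ade_palette():
...     # one random RGB color per class — purely a stand-in for the real palette
...     rng = np.random.default_rng(seed=0)
...     return rng.integers(0, 255, size=(150, 3)).tolist()
```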
Models&quot;,&quot;id&quot;:&quot;model_doc/encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encoder-decoder&quot;},{&quot;title&quot;:&quot;ERNIE&quot;,&quot;id&quot;:&quot;model_doc/ernie&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie&quot;},{&quot;title&quot;:&quot;ErnieM&quot;,&quot;id&quot;:&quot;model_doc/ernie_m&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie_m&quot;},{&quot;title&quot;:&quot;ESM&quot;,&quot;id&quot;:&quot;model_doc/esm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/esm&quot;},{&quot;title&quot;:&quot;Falcon&quot;,&quot;id&quot;:&quot;model_doc/falcon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/falcon&quot;},{&quot;title&quot;:&quot;FLAN-T5&quot;,&quot;id&quot;:&quot;model_doc/flan-t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-t5&quot;},{&quot;title&quot;:&quot;FLAN-UL2&quot;,&quot;id&quot;:&quot;model_doc/flan-ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-ul2&quot;},{&quot;title&quot;:&quot;FlauBERT&quot;,&quot;id&quot;:&quot;model_doc/flaubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flaubert&quot;},{&quot;title&quot;:&quot;FNet&quot;,&quot;id&quot;:&quot;model_doc/fnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fnet&quot;},{&quot;title&quot;:&quot;FSMT&quot;,&quot;id&quot;:&quot;model_doc/fsmt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fsmt&quot;},{&quot;title&quot;:&quot;Funnel Transformer&quot;,&quot;id&quot;:&quot;model_doc/funnel&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/funnel&quot;},{&quot;title&quot;:&quot;GPT&quot;,&quot;id&quot;:&quot;model_doc/openai-gpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/openai-gpt&quot;},{&quot;title&quot;:&quot;GPT Neo&quot;,&quot;id&quot;:&quot;model_doc/gpt_neo&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neo&quot;},{&quot;title&quot;:&quot;GPT NeoX&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox&quot;},{&quot;title&quot;:&quot;GPT NeoX Japanese&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox_japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese&quot;},{&quot;title&quot;:&quot;GPT-J&quot;,&quot;id&quot;:&quot;model_doc/gptj&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptj&quot;},{&quot;title&quot;:&quot;GPT2&quot;,&quot;id&quot;:&quot;model_doc/gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt2&quot;},{&quot;title&quot;:&quot;GPTBigCode&quot;,&quot;id&quot;:&quot;model_doc/gpt_bigcode&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode&quot;},{&quot;title&quot;:&quot;GPTSAN 
Japanese&quot;,&quot;id&quot;:&quot;model_doc/gptsan-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese&quot;},{&quot;title&quot;:&quot;GPTSw3&quot;,&quot;id&quot;:&quot;model_doc/gpt-sw3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt-sw3&quot;},{&quot;title&quot;:&quot;HerBERT&quot;,&quot;id&quot;:&quot;model_doc/herbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/herbert&quot;},{&quot;title&quot;:&quot;I-BERT&quot;,&quot;id&quot;:&quot;model_doc/ibert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ibert&quot;},{&quot;title&quot;:&quot;Jukebox&quot;,&quot;id&quot;:&quot;model_doc/jukebox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/jukebox&quot;},{&quot;title&quot;:&quot;LED&quot;,&quot;id&quot;:&quot;model_doc/led&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/led&quot;},{&quot;title&quot;:&quot;LLaMA&quot;,&quot;id&quot;:&quot;model_doc/llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama&quot;},{&quot;title&quot;:&quot;Llama2&quot;,&quot;id&quot;:&quot;model_doc/llama2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama2&quot;},{&quot;title&quot;:&quot;Longformer&quot;,&quot;id&quot;:&quot;model_doc/longformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longformer&quot;},{&quot;title&quot;:&quot;LongT5&quot;,&quot;id&quot;:&quot;model_doc/longt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longt5&quot;},{&quot;title&quot;:&quot;LUKE&quot;,&quot;id&quot;:&quot;model_doc/luke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/luke&quot;},{&quot;title&quot;:&quot;M2M100&quot;,&quot;id&quot;:&quot;model_doc/m2m_100&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/m2m_100&quot;},{&quot;title&quot;:&quot;MarianMT&quot;,&quot;id&quot;:&quot;model_doc/marian&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/marian&quot;},{&quot;title&quot;:&quot;MarkupLM&quot;,&quot;id&quot;:&quot;model_doc/markuplm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/markuplm&quot;},{&quot;title&quot;:&quot;MBart and 
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot;PLBart&quot;,&quot;id&quot;
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/
model_doc/wavlm&quot;},{&quot;title&quot;:&quot;Whisper&quot;,&quot;id&quot;:&quot;model_doc/whisper&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/whisper&quot;},{&quot;title&quot;:&quot;XLS-R&quot;,&quot;id&quot;:&quot;model_doc/xls_r&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xls_r&quot;},{&quot;title&quot;:&quot;XLSR-Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/xlsr_wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2&quot;}]},{&quot;title&quot;:&quot;Multimodal models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALIGN&quot;,&quot;id&quot;:&quot;model_doc/align&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/align&quot;},{&quot;title&quot;:&quot;AltCLIP&quot;,&quot;id&quot;:&quot;model_doc/altclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/altclip&quot;},{&quot;title&quot;:&quot;BLIP&quot;,&quot;id&quot;:&quot;model_doc/blip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip&quot;},{&quot;title&quot;:&quot;BLIP-2&quot;,&quot;id&quot;:&quot;model_doc/blip-2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip-2&quot;},{&quot;title&quot;:&quot;BridgeTower&quot;,&quot;id&quot;:&quot;model_doc/bridgetower&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bridgetower&quot;},{&quot;title&quot;:&quot;BROS&quot;,&quot;id&quot;:&quot;model_doc/bros&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bros&quot;},{&quot;title&quot;:&quot;Chinese-CLIP&quot;,&quot;id&quot;:&quot;model_doc/chinese_clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/chinese_clip&quot;},{&quot;title&quot;:&quot;CLIP&quot;,&quot;id&quot;:&quot;model_doc/clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clip&quot;},{&quot;title&quot;:&quot;CLIPSeg&quot;,&quot;id&quot;:&quot;model_doc/clipseg&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clipseg&quot;},{&quot;title&quot;:&quot;Data2Vec&quot;,&quot;id&quot;:&quot;model_doc/data2vec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/data2vec&quot;},{&quot;title&quot;:&quot;DePlot&quot;,&quot;id&quot;:&quot;model_doc/deplot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deplot&quot;},{&quot;title&quot;:&quot;Donut&quot;,&quot;id&quot;:&quot;model_doc/donut&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/donut&quot;},{&quot;title&quot;:&quot;FLAVA&quot;,&quot;id&quot;:&quot;model_doc/flava&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flava&quot;},{&quot;title&quot;:&quot;GIT&quot;,&quot;id&quot;:&quot;model_doc/git&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/git&quot;},{&quot;title&quot;:&quot;GroupViT&quot;,&quot;id&quot;:&quot;model_doc/groupvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/groupvit&quot;},{&quot;title&quot;:&quot;IDEFICS&quot;,&quot;id&quot;:&quot;model_doc/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/idefics&quot;},{&quot;title&quot;:&quot;InstructBLIP&quot;,&quot;id&quot;:&quot;model_doc/instructblip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/instructblip&quot;},{&quot;title&quot;:&quot;LayoutLM&quot;,&quot;id&quot;:&quot;model_doc/layoutlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlm&quot;},{&quot;title&quot;:&quot;LayoutLMV2&quot;,&quot;i
d&quot;:&quot;model_doc/layoutlmv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv2&quot;},{&quot;title&quot;:&quot;LayoutLMV3&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv3&quot;},{&quot;title&quot;:&quot;LayoutXLM&quot;,&quot;id&quot;:&quot;model_doc/layoutxlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutxlm&quot;},{&quot;title&quot;:&quot;LiLT&quot;,&quot;id&quot;:&quot;model_doc/lilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lilt&quot;},{&quot;title&quot;:&quot;LXMERT&quot;,&quot;id&quot;:&quot;model_doc/lxmert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lxmert&quot;},{&quot;title&quot;:&quot;MatCha&quot;,&quot;id&quot;:&quot;model_doc/matcha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/matcha&quot;},{&quot;title&quot;:&quot;MGP-STR&quot;,&quot;id&quot;:&quot;model_doc/mgp-str&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mgp-str&quot;},{&quot;title&quot;:&quot;Nougat&quot;,&quot;id&quot;:&quot;model_doc/nougat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nougat&quot;},{&quot;title&quot;:&quot;OneFormer&quot;,&quot;id&quot;:&quot;model_doc/oneformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/oneformer&quot;},{&quot;title&quot;:&quot;OWL-ViT&quot;,&quot;id&quot;:&quot;model_doc/owlvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/owlvit&quot;},{&quot;title&quot;:&quot;Perceiver&quot;,&quot;id&quot;:&quot;model_doc/perceiver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/perceiver&quot;},{&quot;title&quot;:&quot;Pix2Struct&quot;,&quot;id&quot;:&quot;model_doc/pix2struct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pix2struct&quot;},{&quot;title&quot;:&quot;Segment Anything&quot;,&quot;id&quot;:&quot;model_doc/sam&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sam&quot;},{&quot;title&quot;:&quot;Speech Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/speech-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder&quot;},{&quot;title&quot;:&quot;TAPAS&quot;,&quot;id&quot;:&quot;model_doc/tapas&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapas&quot;},{&quot;title&quot;:&quot;TrOCR&quot;,&quot;id&quot;:&quot;model_doc/trocr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trocr&quot;},{&quot;title&quot;:&quot;TVLT&quot;,&quot;id&quot;:&quot;model_doc/tvlt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tvlt&quot;},{&quot;title&quot;:&quot;ViLT&quot;,&quot;id&quot;:&quot;model_doc/vilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vilt&quot;},{&quot;title&quot;:&quot;Vision Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/vision-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder&quot;},{&quot;title&quot;:&quot;Vision Text Dual 
Encoder&quot;,&quot;id&quot;:&quot;model_doc/vision-text-dual-encoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder&quot;},{&quot;title&quot;:&quot;VisualBERT&quot;,&quot;id&quot;:&quot;model_doc/visual_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/visual_bert&quot;},{&quot;title&quot;:&quot;X-CLIP&quot;,&quot;id&quot;:&quot;model_doc/xclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xclip&quot;}]},{&quot;title&quot;:&quot;Reinforcement learning models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Decision Transformer&quot;,&quot;id&quot;:&quot;model_doc/decision_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/decision_transformer&quot;},{&quot;title&quot;:&quot;Trajectory Transformer&quot;,&quot;id&quot;:&quot;model_doc/trajectory_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer&quot;}]},{&quot;title&quot;:&quot;Time series models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Autoformer&quot;,&quot;id&quot;:&quot;model_doc/autoformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/autoformer&quot;},{&quot;title&quot;:&quot;Informer&quot;,&quot;id&quot;:&quot;model_doc/informer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/informer&quot;},{&quot;title&quot;:&quot;Time Series Transformer&quot;,&quot;id&quot;:&quot;model_doc/time_series_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/time_series_transformer&quot;}]},{&quot;title&quot;:&quot;Graph models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Graphormer&quot;,&quot;id&quot;:&quot;model_doc/graphormer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/graphormer&quot;}]}]},{&quot;title&quot;:&quot;Internal Helpers&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Custom Layers and Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/modeling_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/modeling_utils&quot;},{&quot;title&quot;:&quot;Utilities for pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/pipelines_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/pipelines_utils&quot;},{&quot;title&quot;:&quot;Utilities for Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/tokenization_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/tokenization_utils&quot;},{&quot;title&quot;:&quot;Utilities for Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/trainer_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/trainer_utils&quot;},{&quot;title&quot;:&quot;Utilities for Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/generation_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/generation_utils&quot;},{&quot;title&quot;:&quot;Utilities for Image Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/image_processing_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/image_processing_utils&quot;},{&quot;title&quot;:&quot;Utilities for Audio 
processing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/audio_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/audio_utils&quot;},{&quot;title&quot;:&quot;General Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/file_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/file_utils&quot;},{&quot;title&quot;:&quot;Utilities for Time Series&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/time_series_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/time_series_utils&quot;}]}]}],&quot;chapterId&quot;:&quot;tasks/semantic_segmentation&quot;,&quot;docType&quot;:&quot;docs&quot;,&quot;isLoggedIn&quot;:false,&quot;lang&quot;:&quot;en&quot;,&quot;langs&quot;:[&quot;de&quot;,&quot;en&quot;,&quot;es&quot;,&quot;fr&quot;,&quot;it&quot;,&quot;ko&quot;,&quot;pt&quot;,&quot;zh&quot;],&quot;library&quot;:&quot;transformers&quot;,&quot;theme&quot;:&quot;light&quot;,&quot;version&quot;:&quot;v4.34.0&quot;,&quot;versions&quot;:[{&quot;version&quot;:&quot;main&quot;},{&quot;version&quot;:&quot;v4.34.0&quot;},{&quot;version&quot;:&quot;v4.33.3&quot;},{&quot;version&quot;:&quot;v4.33.2&quot;},{&quot;version&quot;:&quot;v4.33.0&quot;},{&quot;version&quot;:&quot;v4.32.1&quot;},{&quot;version&quot;:&quot;v4.32.0&quot;},{&quot;version&quot;:&quot;v4.31.0&quot;},{&quot;version&quot;:&quot;v4.30.0&quot;},{&quot;version&quot;:&quot;v4.29.1&quot;},{&quot;version&quot;:&quot;v4.29.0&quot;},{&quot;version&quot;:&quot;v4.28.1&quot;},{&quot;version&quot;:&quot;v4.28.0&quot;},{&quot;version&quot;:&quot;v4.27.2&quot;},{&quot;version&quot;:&quot;v4.27.1&quot;},{&quot;version&quot;:&quot;v4.27.0&quot;},{&quot;version&quot;:&quot;v4.26.1&quot;},{&quot;version&quot;:&quot;v4.26.0&quot;},{&quot;version&quot;:&quot;v4.25.1&quot;},{&quot;version&quot;:&quot;v4.24.0&quot;},{&quot;version&quot;:&quot;v4.23.1&quot;},{&quot;version&quot;:&quot;v4.23.0&quot;},{&quot;version&quot;:&quot;v4.22.2&quot;},{&quot;version&quot;:&quot;v4.22.1&quot;},{&quot;version&quot;:&quot;v4.22.0&quot;},{&quot;version&quot;:&quot;v4.21.3&quot;},{&quot;version&quot;:&quot;v4.21.2&quot;},{&quot;version&quot;:&quot;v4.21.1&quot;},{&quot;version&quot;:&quot;v4.21.0&quot;},{&quot;version&quot;:&quot;v4.20.1&quot;},{&quot;version&quot;:&quot;v4.20.0&quot;},{&quot;version&quot;:&quot;v4.19.4&quot;},{&quot;version&quot;:&quot;v4.19.3&quot;},{&quot;version&quot;:&quot;v4.19.2&quot;},{&quot;version&quot;:&quot;v4.19.0&quot;},{&quot;version&quot;:&quot;v4.18.0&quot;},{&quot;version&quot;:&quot;v4.17.0&quot;},{&quot;version&quot;:&quot;v4.16.2&quot;},{&quot;version&quot;:&quot;v4.16.1&quot;},{&quot;version&quot;:&quot;v4.16.0&quot;},{&quot;version&quot;:&quot;v4.15.0&quot;},{&quot;version&quot;:&quot;v4.14.1&quot;},{&quot;version&quot;:&quot;v4.13.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.5&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.4&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.10.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot
;:&quot;v4.10.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.10.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.0.0&quot;},{&quot;version&quot;
:&quot;doc-builder-html&quot;}],&quot;title&quot;:&quot;Semantic segmentation&quot;}" data-target="SideMenu"> <div class="z-2 w-full flex-none lg:block lg:h-screen lg:w-[270px] 2xl:w-[300px] false"><div class="shadow-alternate flex h-16 w-full items-center rounded-b-xl border-b bg-white text-lg leading-tight lg:hidden"><div class="flex flex-1 cursor-pointer flex-col justify-center self-stretch pl-6"><p class="text-sm text-gray-400 first-letter:capitalize">Transformers documentation</p> <div class="flex items-center"><p class="font-semibold">Semantic segmentation</p> <svg class="text-xl false" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></div></div> <button class="hover:shadow-alternate group ml-auto mr-6 inline-flex flex-none cursor-pointer rounded-xl border p-2"><svg class="text-gray-500 group-hover:text-gray-700" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg></button></div> <div class="hidden h-32 flex-col justify-between border-r border-b bg-white bg-gradient-to-r p-4 lg:flex from-orange-50 to-white dark:from-gray-900 dark:to-gray-950"><div class="relative "><button class=" " type="button"><h1 class="flex items-center text-lg font-bold leading-tight first-letter:capitalize"><div class="mr-1.5 h-1.5 w-1.5 rounded-full bg-orange-500 flex-none"></div> Transformers <span><svg class="opacity-70 " xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></span></h1> </button> </div> <button class="shadow-alternate flex w-full items-center rounded-full border bg-white px-2 py-1 text-left text-sm text-gray-400 ring-indigo-200 hover:bg-indigo-50 hover:ring-2 dark:border-gray-700 dark:ring-yellow-600 dark:hover:bg-gray-900 dark:hover:text-yellow-500"><svg class="flex-none mr-1.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg> <div>Search documentation</div> <span class="ml-auto rounded border border-gray-200 bg-gray-100 px-0.5 text-xs dark:border-gray-800 dark:bg-gray-800"><kbd class="font-sans">⌘K</kbd></span></button> <div class="flex items-center"><select class="form-input mr-1 !mt-0 !w-20 rounded !border border-gray-200 p-1 text-xs uppercase dark:!text-gray-400"><option value="0">main</option><option value="1">v4.34.0</option><option value="2">v4.33.3</option><option value="3">v4.32.1</option><option value="4">v4.31.0</option><option value="5">v4.30.0</option><option value="6">v4.29.1</option><option value="7">v4.28.1</option><option value="8">v4.27.2</option><option value="9">v4.26.1</option><option 
value="10">v4.25.1</option><option value="11">v4.24.0</option><option value="12">v4.23.1</option><option value="13">v4.22.2</option><option value="14">v4.21.3</option><option value="15">v4.20.1</option><option value="16">v4.19.4</option><option value="17">v4.18.0</option><option value="18">v4.17.0</option><option value="19">v4.16.2</option><option value="20">v4.15.0</option><option value="21">v4.14.1</option><option value="22">v4.13.0</option><option value="23">v4.12.5</option><option value="24">v4.11.3</option><option value="25">v4.10.1</option><option value="26">v4.9.2</option><option value="27">v4.8.2</option><option value="28">v4.7.0</option><option value="29">v4.6.0</option><option value="30">v4.5.1</option><option value="31">v4.4.2</option><option value="32">v4.3.3</option><option value="33">v4.2.2</option><option value="34">v4.1.1</option><option value="35">v4.0.1</option><option value="36">v3.5.1</option><option value="37">v3.4.0</option><option value="38">v3.3.1</option><option value="39">v3.2.0</option><option value="40">v3.1.0</option><option value="41">v3.0.2</option><option value="42">v2.11.0</option><option value="43">v2.10.0</option><option value="44">v2.9.1</option><option value="45">v2.8.0</option><option value="46">v2.7.0</option><option value="47">v2.6.0</option><option value="48">v2.5.1</option><option value="49">v2.4.1</option><option value="50">v2.3.0</option><option value="51">v2.2.2</option><option value="52">v2.1.1</option><option value="53">v2.0.0</option><option value="54">v1.2.0</option><option value="55">v1.1.0</option><option value="56">v1.0.0</option><option value="57">doc-builder-html</option></select> <select class="form-input mr-1 rounded border-gray-200 p-1 text-xs dark:!text-gray-400 !w-12 !mt-0 !border"><option value="de">DE</option><option value="en">EN</option><option value="es">ES</option><option value="fr">FR</option><option value="it">IT</option><option value="ko">KO</option><option value="pt">PT</option><option value="zh">ZH</option></select> <div class="relative inline-block"><button class="rounded-full border border-gray-100 py-1 pl-2 pr-0.5 flex items-center text-sm text-gray-500 bg-white hover:bg-yellow-50 hover:border-yellow-200 dark:hover:bg-gray-800 dark:hover:border-gray-950 " type="button"><svg class="mr-1.5 text-yellow-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24" fill="currentColor"><path d="M6.05 4.14l-.39-.39a.993.993 0 0 0-1.4 0l-.01.01a.984.984 0 0 0 0 1.4l.39.39c.39.39 1.01.39 1.4 0l.01-.01a.984.984 0 0 0 0-1.4zM3.01 10.5H1.99c-.55 0-.99.44-.99.99v.01c0 .55.44.99.99.99H3c.56.01 1-.43 1-.98v-.01c0-.56-.44-1-.99-1zm9-9.95H12c-.56 0-1 .44-1 .99v.96c0 .55.44.99.99.99H12c.56.01 1-.43 1-.98v-.97c0-.55-.44-.99-.99-.99zm7.74 3.21c-.39-.39-1.02-.39-1.41-.01l-.39.39a.984.984 0 0 0 0 1.4l.01.01c.39.39 1.02.39 1.4 0l.39-.39a.984.984 0 0 0 0-1.4zm-1.81 15.1l.39.39a.996.996 0 1 0 1.41-1.41l-.39-.39a.993.993 0 0 0-1.4 0c-.4.4-.4 1.02-.01 1.41zM20 11.49v.01c0 .55.44.99.99.99H22c.55 0 .99-.44.99-.99v-.01c0-.55-.44-.99-.99-.99h-1.01c-.55 0-.99.44-.99.99zM12 5.5c-3.31 0-6 2.69-6 6s2.69 6 6 6s6-2.69 6-6s-2.69-6-6-6zm-.01 16.95H12c.55 0 .99-.44.99-.99v-.96c0-.55-.44-.99-.99-.99h-.01c-.55 0-.99.44-.99.99v.96c0 .55.44.99.99.99zm-7.74-3.21c.39.39 1.02.39 1.41 0l.39-.39a.993.993 0 0 0 0-1.4l-.01-.01a.996.996 0 0 0-1.41 0l-.39.39c-.38.4-.38 1.02.01 1.41z"></path></svg> </button> </div> <a 
href="https://github.com/huggingface/transformers" class="group ml-auto text-xs text-gray-500 hover:text-gray-700 hover:underline dark:hover:text-gray-300"><svg class="inline-block text-gray-500 group-hover:text-gray-700 dark:group-hover:text-gray-300 mr-1.5 -mt-1 w-4 h-4" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1.03em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 250"><path d="M128.001 0C57.317 0 0 57.307 0 128.001c0 56.554 36.676 104.535 87.535 121.46c6.397 1.185 8.746-2.777 8.746-6.158c0-3.052-.12-13.135-.174-23.83c-35.61 7.742-43.124-15.103-43.124-15.103c-5.823-14.795-14.213-18.73-14.213-18.73c-11.613-7.944.876-7.78.876-7.78c12.853.902 19.621 13.19 19.621 13.19c11.417 19.568 29.945 13.911 37.249 10.64c1.149-8.272 4.466-13.92 8.127-17.116c-28.431-3.236-58.318-14.212-58.318-63.258c0-13.975 5-25.394 13.188-34.358c-1.329-3.224-5.71-16.242 1.24-33.874c0 0 10.749-3.44 35.21 13.121c10.21-2.836 21.16-4.258 32.038-4.307c10.878.049 21.837 1.47 32.066 4.307c24.431-16.56 35.165-13.12 35.165-13.12c6.967 17.63 2.584 30.65 1.255 33.873c8.207 8.964 13.173 20.383 13.173 34.358c0 49.163-29.944 59.988-58.447 63.157c4.591 3.972 8.682 11.762 8.682 23.704c0 17.126-.148 30.91-.148 35.126c0 3.407 2.304 7.398 8.792 6.14C219.37 232.5 256 184.537 256 128.002C256 57.307 198.691 0 128.001 0zm-80.06 182.34c-.282.636-1.283.827-2.194.39c-.929-.417-1.45-1.284-1.15-1.922c.276-.655 1.279-.838 2.205-.399c.93.418 1.46 1.293 1.139 1.931zm6.296 5.618c-.61.566-1.804.303-2.614-.591c-.837-.892-.994-2.086-.375-2.66c.63-.566 1.787-.301 2.626.591c.838.903 1 2.088.363 2.66zm4.32 7.188c-.785.545-2.067.034-2.86-1.104c-.784-1.138-.784-2.503.017-3.05c.795-.547 2.058-.055 2.861 1.075c.782 1.157.782 2.522-.019 3.08zm7.304 8.325c-.701.774-2.196.566-3.29-.49c-1.119-1.032-1.43-2.496-.726-3.27c.71-.776 2.213-.558 3.315.49c1.11 1.03 1.45 2.505.701 3.27zm9.442 2.81c-.31 1.003-1.75 1.459-3.199 1.033c-1.448-.439-2.395-1.613-2.103-2.626c.301-1.01 1.747-1.484 3.207-1.028c1.446.436 2.396 1.602 2.095 2.622zm10.744 1.193c.036 1.055-1.193 1.93-2.715 1.95c-1.53.034-2.769-.82-2.786-1.86c0-1.065 1.202-1.932 2.733-1.958c1.522-.03 2.768.818 2.768 1.868zm10.555-.405c.182 1.03-.875 2.088-2.387 2.37c-1.485.271-2.861-.365-3.05-1.386c-.184-1.056.893-2.114 2.376-2.387c1.514-.263 2.868.356 3.061 1.403z" fill="currentColor"></path></svg> 112,792</a></div></div> <nav class="top-32 hidden lg:flex absolute bottom-0 left-0 w-full flex-col overflow-y-auto border-r px-4 pt-3 pb-16 text-[0.95rem] lg:w-[270px] 2xl:w-[300px]"> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Get started</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/index">🤗 Transformers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/quicktour">Quick tour </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 
first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/installation">Installation </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Tutorials</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pipeline_tutorial">Run inference with pipelines </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/autoclass_tutorial">Write portable code with AutoClass </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/preprocessing">Preprocess data </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/training">Fine-tune a pretrained model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/run_scripts">Train with a script </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/accelerate">Set up distributed training with 🤗 Accelerate </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/peft">Load and train adapters with 🤗 PEFT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/model_sharing">Share your model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/transformers_agents">Agents </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/llm_tutorial">Generation with LLMs </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Task Guides</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 
hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Natural Language Processing</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Computer Vision</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/tasks/image_classification">Image classification </a><a data-sveltekit-reload="" class="rounded-xl bg-gradient-to-br from-black to-gray-900 py-1 pr-2 pl-2 text-white first:mt-1 last:mb-4 dark:from-gray-800 dark:to-gray-900 ml-4" href="/docs/transformers/v4.34.0/en/tasks/semantic_segmentation">Semantic segmentation </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/tasks/video_classification">Video classification </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/tasks/object_detection">Object detection </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/tasks/zero_shot_object_detection">Zero-shot object detection </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/tasks/zero_shot_image_classification">Zero-shot image classification </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/tasks/monocular_depth_estimation">Depth estimation </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 
leading-5"><span>Generation</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Prompting</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Developer guides</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/fast_tokenizers">Use fast tokenizers from 🤗 Tokenizers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/multilingual">Run inference with multilingual models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/create_a_model">Use model-specific APIs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/custom_models">Share a custom model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/chat_templating">Templates for chat models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/sagemaker">Run training on Amazon SageMaker </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/serialization">Export to ONNX </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tflite">Export to TFLite </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/torchscript">Export to TorchScript </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/benchmarks">Benchmarks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/notebooks">Notebooks with examples </a><a data-sveltekit-reload="" 
class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/community">Community resources </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/custom_tools">Custom Tools and Prompts </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/troubleshooting">Troubleshoot </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Performance and scalability</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/performance">Overview </a><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Efficient training techniques</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_gpu_one">Methods and tools for efficient training on a single GPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_gpu_many">Multiple GPUs and parallelism </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_cpu">Efficient training on CPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_cpu_many">Distributed CPU training </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_tpu">Training on TPUs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_tpu_tf">Training on TPU with TensorFlow </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_special">Training on Specialized 
Hardware </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_hardware">Custom hardware for training </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/hpo_train">Hyperparameter Search using Trainer API </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Optimizing inference</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_cpu">Inference on CPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_gpu_one">Inference on one GPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_gpu_many">Inference on many GPUs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_special">Inference on Specialized Hardware </a> </div><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/big_models">Instantiating a big model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/debugging">Troubleshooting </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tf_xla">XLA Integration for TensorFlow Models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/perf_torch_compile">Optimize inference using `torch.compile()` </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Contribute</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" 
href="/docs/transformers/v4.34.0/en/contributing">How to contribute to transformers? </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/add_new_model">How to add a model to 🤗 Transformers? </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/add_tensorflow_model">How to convert a 🤗 Transformers model to TensorFlow? </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/add_new_pipeline">How to add a pipeline to 🤗 Transformers? </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/testing">Testing </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pr_checks">Checks on a Pull Request </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Conceptual guides</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/philosophy">Philosophy </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/glossary">Glossary </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/task_summary">What 🤗 Transformers can do </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tasks_explained">How 🤗 Transformers solve tasks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/model_summary">The Transformer model family </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tokenizer_summary">Summary of the tokenizers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/attention">Attention mechanisms </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 
hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pad_truncation">Padding and truncation </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/bertology">BERTology </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/perplexity">Perplexity of fixed-length models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pipeline_webserver">Pipelines for webserver inference </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/model_memory_anatomy">Model training anatomy </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>API</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Main Classes</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/agent">Agents and Tools </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/model_doc/auto">Auto Classes </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/callback">Callbacks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/configuration">Configuration </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/data_collator">Data Collator </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/keras_callbacks">Keras callbacks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black 
dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/logging">Logging </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/model">Models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/text_generation">Text Generation </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/onnx">ONNX </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/optimizer_schedules">Optimization </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/output">Model outputs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/pipelines">Pipelines </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/processors">Processors </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/quantization">Quantization </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/tokenizer">Tokenizer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/trainer">Trainer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/deepspeed">DeepSpeed Integration </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/feature_extractor">Feature Extractor </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/image_processor">Image Processor </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span 
class="inline-block space-x-1 leading-5"><span>Models</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Text models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Vision models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Reinforcement learning models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Time series models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Graph models</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Internal Helpers</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/modeling_utils">Custom Layers and Utilities </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" 
href="/docs/transformers/v4.34.0/en/internal/pipelines_utils">Utilities for pipelines </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/tokenization_utils">Utilities for Tokenizers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/trainer_utils">Utilities for Trainer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/generation_utils">Utilities for Generation </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/image_processing_utils">Utilities for Image Processors </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/audio_utils">Utilities for Audio processing </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/file_utils">General Utilities </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/time_series_utils">Utilities for Time Series </a> </div> </div></nav></div></div></div> <div class="z-1 min-w-0 flex-1"> <div class="px-6 pt-6 md:px-12 md:pt-16 md:pb-16"><div class="max-w-4xl mx-auto mb-10"><div class="relative overflow-hidden rounded-xl bg-gradient-to-br from-orange-300/10 py-5 px-4 ring-1 ring-orange-100/70 md:px-6 md:py-8"><img alt="Hugging Face's logo" class="absolute -right-6 -bottom-6 w-28 -rotate-45 md:hidden" src="/front/assets/huggingface_logo-noborder.svg"> <div class="mb-2 text-2xl font-bold dark:text-gray-200 md:mb-0">Join the Hugging Face community</div> <p class="mb-4 text-lg text-gray-400 dark:text-gray-300 md:mb-8">and get access to the augmented documentation experience </p> <div class="mb-8 hidden space-y-4 md:block xl:flex xl:space-y-0 xl:space-x-6"><div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-indigo-100 to-indigo-100/20 dark:to-indigo-100"><svg class="text-indigo-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 
3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Collaborate on models, datasets and Spaces </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-orange-100 to-orange-100/20 dark:to-orange-50"><svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" class="text-xl text-yellow-400" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M11 15H6l7-14v8h5l-7 14v-8z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Faster examples with accelerated inference </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-gray-500/10 to-gray-500/5"><svg class="text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M14.9804 3C14.9217 3.0002 14.8631 3.00555 14.8054 3.016C11.622 3.58252 8.76073 5.30669 6.77248 7.85653C4.78422 10.4064 3.80955 13.6016 4.03612 16.8271C4.26268 20.0525 5.67447 23.0801 7.99967 25.327C10.3249 27.5738 13.3991 28.8811 16.6304 28.997C16.7944 29.003 16.9584 28.997 17.1204 28.997C19.2193 28.9984 21.2877 28.4943 23.1507 27.5274C25.0137 26.5605 26.6164 25.1592 27.8234 23.442C27.9212 23.294 27.9783 23.1229 27.9889 22.9458C27.9995 22.7687 27.9633 22.592 27.884 22.4333C27.8046 22.2747 27.6848 22.1397 27.5367 22.0421C27.3887 21.9444 27.2175 21.8875 27.0404 21.877C25.0426 21.7017 23.112 21.0693 21.3976 20.0288C19.6832 18.9884 18.231 17.5676 17.1533 15.8764C16.0756 14.1852 15.4011 12.2688 15.1822 10.2754C14.9632 8.28193 15.2055 6.26484 15.8904 4.38C15.9486 4.22913 15.97 4.06652 15.9527 3.90572C15.9354 3.74492 15.8799 3.59059 15.7909 3.45557C15.7019 3.32055 15.5819 3.20877 15.4409 3.12952C15.2999 3.05028 15.142 3.00587 14.9804 3Z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Switch between documentation themes </div></div></div> <div class="flex items-center space-x-2.5"><a href="/join"><button class="rounded-lg bg-white bg-gradient-to-br from-gray-100/20 to-gray-200/60 py-1.5 px-5 font-semibold text-gray-700 shadow-sm ring-1 ring-gray-300/60 hover:to-gray-100/70 hover:ring-gray-300/30 active:shadow-inner">Sign Up</button></a> <p class="text-gray-500 dark:text-gray-300">to get started</p></div></div></div> <div class="prose-doc prose relative mx-auto max-w-4xl break-words"> <p></p> <h1 class="relative group"><a id="semantic-segmentation" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#semantic-segmentation"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 
# Semantic segmentation

Semantic segmentation assigns a label or class to each individual pixel of an image. There are several types of segmentation, and in the case of semantic segmentation, no distinction is made between unique instances of the same object: objects of the same class are all given the same label (for example, "car" instead of "car-1" and "car-2"). Common real-world applications of semantic segmentation include training self-driving cars to identify pedestrians and important traffic information, identifying cells and abnormalities in medical imagery, and monitoring environmental changes from satellite imagery.

This guide will show you how to:

1. Finetune [SegFormer](https://huggingface.co/docs/transformers/main/en/model_doc/segformer#segformer) on the [SceneParse150](https://huggingface.co/datasets/scene_parse_150) dataset.
2. Use your finetuned model for inference.

The task illustrated in this tutorial is supported by the following model architectures: [BEiT](../model_doc/beit), [Data2VecVision](../model_doc/data2vec-vision), [DPT](../model_doc/dpt), [MobileNetV2](../model_doc/mobilenet_v2), [MobileViT](../model_doc/mobilevit), [MobileViTV2](../model_doc/mobilevitv2), [SegFormer](../model_doc/segformer), [UPerNet](../model_doc/upernet).

Before you begin, make sure you have all the necessary libraries installed:

```bash
pip install -q datasets transformers evaluate
```

We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:

```python
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```

## Load SceneParse150 dataset

Start by loading a smaller subset of the SceneParse150 dataset from the 🤗 Datasets library. This'll give you a chance to experiment and make sure everything works before spending more time training on the full dataset.

```python
>>> from datasets import load_dataset

>>> ds = load_dataset("scene_parse_150", split="train[:50]")
```

Split the dataset's `train` split into a train and test set with the [train_test_split](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.train_test_split) method:

```python
>>> ds = ds.train_test_split(test_size=0.2)
>>> train_ds = ds["train"]
>>> test_ds = ds["test"]
```

Then take a look at an example:
<div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>train_ds[<span class="hljs-number">0</span>] {<span class="hljs-string">'image'</span>: &lt;PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=512x683 at <span class="hljs-number">0x7F9B0C201F90</span>&gt;, <span class="hljs-string">'annotation'</span>: &lt;PIL.PngImagePlugin.PngImageFile image mode=L size=512x683 at <span class="hljs-number">0x7F9B0C201DD0</span>&gt;, <span class="hljs-string">'scene_category'</span>: <span class="hljs-number">368</span>}</pre></div> <ul data-svelte-h="svelte-1gb3b0f"><li><code>image</code>: a PIL image of the scene.</li> <li><code>annotation</code>: a PIL image of the segmentation map, which is also the model’s target.</li> <li><code>scene_category</code>: a category id that describes the image scene like “kitchen” or “office”. In this guide, you’ll only need <code>image</code> and <code>annotation</code>, both of which are PIL images.</li></ul> <p data-svelte-h="svelte-j46pio">You’ll also want to create a dictionary that maps a label id to a label class which will be useful when you set up the model later. 
You'll also want to create a dictionary that maps a label id to a label class which will be useful when you set up the model later. Download the mappings from the Hub and create the `id2label` and `label2id` dictionaries:

```python
>>> import json
>>> from huggingface_hub import cached_download, hf_hub_url

>>> repo_id = "huggingface/label-files"
>>> filename = "ade20k-id2label.json"
>>> id2label = json.load(open(cached_download(hf_hub_url(repo_id, filename, repo_type="dataset")), "r"))
>>> id2label = {int(k): v for k, v in id2label.items()}
>>> label2id = {v: k for k, v in id2label.items()}
>>> num_labels = len(id2label)
```
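As a quick, optional sanity check (not part of the original guide), you can confirm the mapping covers the 150 ADE20K classes and that the two dictionaries are consistent; the exact class names come from the downloaded `ade20k-id2label.json` file:

```python
>>> num_labels
150
>>> sorted(id2label)[:3]     # label ids are plain integers starting at 0
[0, 1, 2]
>>> label2id[id2label[0]]    # the two mappings invert each other
0
```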
data-svelte-h="svelte-1cg9qj">Preprocess</span></h2> <p data-svelte-h="svelte-7ebr">The next step is to load a SegFormer image processor to prepare the images and annotations for the model. Some datasets, like this one, use the zero-index as the background class. However, the background class isn’t actually included in the 150 classes, so you’ll need to set <code>reduce_labels=True</code> to subtract one from all the labels. The zero-index is replaced by <code>255</code> so it’s ignored by SegFormer’s loss function:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoImageProcessor <span class="hljs-meta">&gt;&gt;&gt; </span>checkpoint = <span class="hljs-string">"nvidia/mit-b0"</span> <span class="hljs-meta">&gt;&gt;&gt; </span>image_processor = AutoImageProcessor.from_pretrained(checkpoint, reduce_labels=<span class="hljs-literal">True</span>)</pre></div> <div class="space-y-10 py-6 2xl:py-8 2xl:-mx-4"><div class="border border-gray-200 rounded-xl px-4 relative"><div class="flex h-[22px] mt-[-12.5px] justify-between leading-none"><div class="flex px-1 items-center space-x-1 bg-white dark:bg-gray-950"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><defs><clipPath id="a"><rect x="3.05" y="0.5" width="25.73" height="31" fill="none"></rect></clipPath></defs><g clip-path="url(#a)"><path d="M24.94,9.51a12.81,12.81,0,0,1,0,18.16,12.68,12.68,0,0,1-18,0,12.81,12.81,0,0,1,0-18.16l9-9V5l-.84.83-6,6a9.58,9.58,0,1,0,13.55,0ZM20.44,9a1.68,1.68,0,1,1,1.67-1.67A1.68,1.68,0,0,1,20.44,9Z" fill="#ee4c2c"></path></g></svg> <span>Pytorch</span></div> <div class="cursor-pointer flex items-center justify-center space-x-1 text-sm px-2 bg-white dark:bg-gray-950 hover:underline leading-none"><svg class="" width="0.9em" height="0.9em" viewBox="0 0 10 9" fill="currentColor" xmlns="http://www.w3.org/2000/svg"><path d="M1.39125 1.9725L0.0883333 0.669997L0.677917 0.0804138L8.9275 8.33041L8.33792 8.91958L6.95875 7.54041C6.22592 8.00523 5.37572 8.25138 4.50792 8.25C2.26125 8.25 0.392083 6.63333 0 4.5C0.179179 3.52946 0.667345 2.64287 1.39167 1.9725H1.39125ZM5.65667 6.23833L5.04667 5.62833C4.81335 
**Pytorch**

It is common to apply some data augmentations to an image dataset to make a model more robust against overfitting. In this guide, you'll use the [`ColorJitter`](https://pytorch.org/vision/stable/generated/torchvision.transforms.ColorJitter.html) function from [torchvision](https://pytorch.org/vision/stable/index.html) to randomly change the color properties of an image, but you can also use any image library you like.

```py
>>> from torchvision.transforms import ColorJitter

>>> jitter = ColorJitter(brightness=0.25, contrast=0.25, saturation=0.25, hue=0.1)
```
Now create two preprocessing functions to prepare the images and annotations for the model. These functions convert the images into `pixel_values` and annotations to `labels`. For the training set, `jitter` is applied before providing the images to the image processor. For the test set, the image processor crops and normalizes the `images`, and only crops the `labels` because no data augmentation is applied during testing.

```py
>>> def train_transforms(example_batch):
...     images = [jitter(x) for x in example_batch["image"]]
...     labels = [x for x in example_batch["annotation"]]
...     inputs = image_processor(images, labels)
...     return inputs


>>> def val_transforms(example_batch):
...     images = [x for x in example_batch["image"]]
...     labels = [x for x in example_batch["annotation"]]
...     inputs = image_processor(images, labels)
...     return inputs
```
To apply the `jitter` over the entire dataset, use the 🤗 Datasets [set_transform](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.set_transform) function. The transform is applied on the fly which is faster and consumes less disk space:

```py
>>> train_ds.set_transform(train_transforms)
>>> test_ds.set_transform(val_transforms)
```
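Because the transform runs lazily, nothing is preprocessed until you access an example. As a quick sanity check, indexing the dataset should trigger `train_transforms` and return the keys produced by `image_processor` above (the exact keys shown here are an assumption based on that call):

```py
>>> # Indexing the dataset applies train_transforms on the fly
>>> sample = train_ds[0]
>>> sorted(sample.keys())
['labels', 'pixel_values']
```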
**TensorFlow**

It is common to apply some data augmentations to an image dataset to make a model more robust against overfitting. In this guide, you'll use [`tf.image`](https://www.tensorflow.org/api_docs/python/tf/image) to randomly change the color properties of an image, but you can also use any image library you like. Define two separate transformation functions:

- training data transformations that include image augmentation
- validation data transformations that only transpose the images, since computer vision models in 🤗 Transformers expect channels-first layout
```py
>>> import tensorflow as tf


>>> def aug_transforms(image):
...     image = tf.keras.utils.img_to_array(image)
...     image = tf.image.random_brightness(image, 0.25)
...     image = tf.image.random_contrast(image, 0.5, 2.0)
...     image = tf.image.random_saturation(image, 0.75, 1.25)
...     image = tf.image.random_hue(image, 0.1)
...     image = tf.transpose(image, (2, 0, 1))
...     return image


>>> def transforms(image):
...     image = tf.keras.utils.img_to_array(image)
...     image = tf.transpose(image, (2, 0, 1))
...     return image
```

Next, create two preprocessing functions to prepare batches of images and annotations for the model. These functions apply the image transformations and use the earlier loaded `image_processor` to convert the images into `pixel_values` and annotations to `labels`. `ImageProcessor` also takes care of resizing and normalizing the images.
```py
>>> def train_transforms(example_batch):
...     images = [aug_transforms(x.convert("RGB")) for x in example_batch["image"]]
...     labels = [x for x in example_batch["annotation"]]
...     inputs = image_processor(images, labels)
...     return inputs


>>> def val_transforms(example_batch):
...     images = [transforms(x.convert("RGB")) for x in example_batch["image"]]
...     labels = [x for x in example_batch["annotation"]]
...     inputs = image_processor(images, labels)
...     return inputs
```

To apply the preprocessing transformations over the entire dataset, use the 🤗 Datasets [set_transform](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.set_transform) function. The transform is applied on the fly which is faster and consumes less disk space:

```py
>>> train_ds.set_transform(train_transforms)
>>> test_ds.set_transform(val_transforms)
```

## Evaluate
Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load the [mean Intersection over Union](https://huggingface.co/spaces/evaluate-metric/mean_iou) (IoU) metric (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric):

```py
>>> import evaluate

>>> metric = evaluate.load("mean_iou")
```
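Before wiring the metric into training, you can sanity-check the inputs it expects with a toy example. The maps below are made up purely for illustration; `mean_iou` takes per-pixel class-id maps for the predictions and references and returns overall, mean, and per-category scores:

```py
>>> import numpy as np

>>> # Hypothetical 2x2 segmentation maps, just to show the expected input format
>>> predicted = np.array([[1, 1], [0, 1]])
>>> ground_truth = np.array([[1, 1], [1, 1]])

>>> results = metric.compute(
...     predictions=[predicted], references=[ground_truth], num_labels=2, ignore_index=255
... )
>>> sorted(results.keys())
['mean_accuracy', 'mean_iou', 'overall_accuracy', 'per_category_accuracy', 'per_category_iou']
```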
Then create a function to [compute](https://huggingface.co/docs/evaluate/v0.4.0/en/package_reference/main_classes#evaluate.EvaluationModule.compute) the metrics. The model returns logits, which need to be upsampled to match the size of the labels and converted to predicted class ids before you can call [compute](https://huggingface.co/docs/evaluate/v0.4.0/en/package_reference/main_classes#evaluate.EvaluationModule.compute):

**Pytorch**
```py
>>> import numpy as np
>>> import torch
>>> from torch import nn


>>> def compute_metrics(eval_pred):
...     with torch.no_grad():
...         logits, labels = eval_pred
...         logits_tensor = torch.from_numpy(logits)
...         logits_tensor = nn.functional.interpolate(
...             logits_tensor,
...             size=labels.shape[-2:],
...             mode="bilinear",
...             align_corners=False,
...         ).argmax(dim=1)
...         pred_labels = logits_tensor.detach().cpu().numpy()
...         metrics = metric.compute(
...             predictions=pred_labels,
...             references=labels,
...             num_labels=num_labels,
...             ignore_index=255,
...             reduce_labels=False,
...         )
...         for key, value in metrics.items():
...             if type(value) is np.ndarray:
...                 metrics[key] = value.tolist()
...         return metrics
```
**TensorFlow**
```py
>>> def compute_metrics(eval_pred):
...     logits, labels = eval_pred
...     logits = tf.transpose(logits, perm=[0, 2, 3, 1])
...     logits_resized = tf.image.resize(
...         logits,
...         size=tf.shape(labels)[1:],
...         method="bilinear",
...     )
...     pred_labels = tf.argmax(logits_resized, axis=-1)
...     metrics = metric.compute(
...         predictions=pred_labels,
...         references=labels,
...         num_labels=num_labels,
...         ignore_index=-1,
...         reduce_labels=image_processor.do_reduce_labels,
...     )
...     per_category_accuracy = metrics.pop("per_category_accuracy").tolist()
...     per_category_iou = metrics.pop("per_category_iou").tolist()
...     metrics.update({f"accuracy_{id2label[i]}": v for i, v in enumerate(per_category_accuracy)})
...     metrics.update({f"iou_{id2label[i]}": v for i, v in enumerate(per_category_iou)})
...     return {"val_" + k: v for k, v in metrics.items()}
```
Your `compute_metrics` function is ready to go now, and you'll return to it when you set up your training.

## Train

**Pytorch**
If you aren't familiar with finetuning a model with the [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer), take a look at the basic tutorial [here](../training#finetune-with-trainer)!

You're ready to start training your model now! Load SegFormer with [AutoModelForSemanticSegmentation](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoModelForSemanticSegmentation), and pass the model the mapping between label ids and label classes:

```py
>>> from transformers import AutoModelForSemanticSegmentation, TrainingArguments, Trainer

>>> model = AutoModelForSemanticSegmentation.from_pretrained(checkpoint, id2label=id2label, label2id=label2id)
```

At this point, only three steps remain:
1. Define your training hyperparameters in [TrainingArguments](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.TrainingArguments). It is important you don't remove unused columns because this'll drop the `image` column. Without the `image` column, you can't create `pixel_values`. Set `remove_unused_columns=False` to prevent this behavior! The only other required parameter is `output_dir` which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) will evaluate the IoU metric and save the training checkpoint.
2. Pass the training arguments to [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) along with the model, dataset, tokenizer, data collator, and `compute_metrics` function.
3. Call [train()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.train) to finetune your model.
```py
>>> training_args = TrainingArguments(
...     output_dir="segformer-b0-scene-parse-150",
...     learning_rate=6e-5,
...     num_train_epochs=50,
...     per_device_train_batch_size=2,
...     per_device_eval_batch_size=2,
...     save_total_limit=3,
...     evaluation_strategy="steps",
...     save_strategy="steps",
...     save_steps=20,
...     eval_steps=20,
...     logging_steps=1,
...     eval_accumulation_steps=5,
...     remove_unused_columns=False,
...     push_to_hub=True,
... )

>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=train_ds,
...     eval_dataset=test_ds,
...     compute_metrics=compute_metrics,
... )

>>> trainer.train()
```

Once training is completed, share your model to the Hub with the [push_to_hub()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.push_to_hub) method so everyone can use your model:

```py
>>> trainer.push_to_hub()
```

**TensorFlow**
If you are unfamiliar with fine-tuning a model with Keras, check out the [basic tutorial](./training#train-a-tensorflow-model-with-keras) first!

To fine-tune a model in TensorFlow, follow these steps:

1. Define the training hyperparameters, and set up an optimizer and a learning rate schedule.
2. Instantiate a pretrained model.
3. Convert a 🤗 Dataset to a `tf.data.Dataset`.
4. Compile your model.
5. Add callbacks to calculate metrics and upload your model to the 🤗 Hub.
6. Use the `fit()` method to run the training.

Start by defining the hyperparameters, optimizer and learning rate schedule:
class="hljs-number">6e-5</span> <span class="hljs-meta">&gt;&gt;&gt; </span>weight_decay_rate = <span class="hljs-number">0.01</span> <span class="hljs-meta">&gt;&gt;&gt; </span>optimizer, lr_schedule = create_optimizer( <span class="hljs-meta">... </span> init_lr=learning_rate, <span class="hljs-meta">... </span> num_train_steps=num_train_steps, <span class="hljs-meta">... </span> weight_decay_rate=weight_decay_rate, <span class="hljs-meta">... </span> num_warmup_steps=<span class="hljs-number">0</span>, <span class="hljs-meta">... </span>)</pre></div> <p data-svelte-h="svelte-42cp9c">Then, load SegFormer with <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.TFAutoModelForSemanticSegmentation">TFAutoModelForSemanticSegmentation</a> along with the label mappings, and compile it with the optimizer. Note that Transformers models all have a default task-relevant loss function, so you don’t need to specify one unless you want to:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> TFAutoModelForSemanticSegmentation <span class="hljs-meta">&gt;&gt;&gt; </span>model = TFAutoModelForSemanticSegmentation.from_pretrained( <span class="hljs-meta">... </span> checkpoint, <span class="hljs-meta">... </span> id2label=id2label, <span class="hljs-meta">... </span> label2id=label2id, <span class="hljs-meta">... 
```py
>>> from transformers import TFAutoModelForSemanticSegmentation

>>> model = TFAutoModelForSemanticSegmentation.from_pretrained(
...     checkpoint,
...     id2label=id2label,
...     label2id=label2id,
... )
>>> model.compile(optimizer=optimizer)  # No loss argument!
```

Convert your datasets to the `tf.data.Dataset` format using the [to_tf_dataset](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.to_tf_dataset) and the [DefaultDataCollator](/docs/transformers/v4.34.0/en/main_classes/data_collator#transformers.DefaultDataCollator):

```py
>>> from transformers import DefaultDataCollator

>>> data_collator = DefaultDataCollator(return_tensors="tf")

>>> tf_train_dataset = train_ds.to_tf_dataset(
...     columns=["pixel_values", "label"],
...     shuffle=True,
...     batch_size=batch_size,
...     collate_fn=data_collator,
... )

>>> tf_eval_dataset = test_ds.to_tf_dataset(
...     columns=["pixel_values", "label"],
...     shuffle=True,
...     batch_size=batch_size,
...     collate_fn=data_collator,
... )
```
Pass your <code>compute_metrics</code> function to <a href="/docs/transformers/v4.34.0/en/main_classes/keras_callbacks#transformers.KerasMetricCallback">KerasMetricCallback</a>, and use the <a href="/docs/transformers/v4.34.0/en/main_classes/keras_callbacks#transformers.PushToHubCallback">PushToHubCallback</a> to upload the model:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers.keras_callbacks <span class="hljs-keyword">import</span> KerasMetricCallback, PushToHubCallback <span class="hljs-meta">&gt;&gt;&gt; </span>metric_callback = KerasMetricCallback( <span class="hljs-meta">... </span> metric_fn=compute_metrics, eval_dataset=tf_eval_dataset, batch_size=batch_size, label_cols=[<span class="hljs-string">"labels"</span>] <span class="hljs-meta">... </span>) <span class="hljs-meta">&gt;&gt;&gt; </span>push_to_hub_callback = PushToHubCallback(output_dir=<span class="hljs-string">"scene_segmentation"</span>, tokenizer=image_processor) <span class="hljs-meta">&gt;&gt;&gt; </span>callbacks = [metric_callback, push_to_hub_callback]</pre></div> <p data-svelte-h="svelte-1occr1z">Finally, you are ready to train your model! 
Call <code>fit()</code> with your training and validation datasets, the number of epochs, and your callbacks to fine-tune the model:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>model.fit( <span class="hljs-meta">... </span> tf_train_dataset, <span class="hljs-meta">... </span> validation_data=tf_eval_dataset, <span class="hljs-meta">... </span> callbacks=callbacks, <span class="hljs-meta">... </span> epochs=num_epochs, <span class="hljs-meta">... </span>)</pre></div> <p data-svelte-h="svelte-1r99pbn">Congratulations! You have fine-tuned your model and shared it on the 🤗 Hub. You can now use it for inference!</p></div></div> </div> <h2 class="relative group"><a id="inference" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#inference"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-199uz7g">Inference</span></h2> <p data-svelte-h="svelte-633ppb">Great, now that you’ve finetuned a model, you can use it for inference!</p> <p data-svelte-h="svelte-1g0hugc">Load an image for inference:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path 
d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>image = ds[<span class="hljs-number">0</span>][<span class="hljs-string">"image"</span>] <span class="hljs-meta">&gt;&gt;&gt; </span>image</pre></div> <div class="flex justify-center" data-svelte-h="svelte-11jfm1f"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/semantic-seg-image.png" alt="Image of bedroom"></div> <div class="space-y-10 py-6 2xl:py-8 2xl:-mx-4"><div class="border border-gray-200 rounded-xl px-4 relative"><div class="flex h-[22px] mt-[-12.5px] justify-between leading-none"><div class="flex px-1 items-center space-x-1 bg-white dark:bg-gray-950"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><defs><clipPath id="a"><rect x="3.05" y="0.5" width="25.73" height="31" fill="none"></rect></clipPath></defs><g clip-path="url(#a)"><path d="M24.94,9.51a12.81,12.81,0,0,1,0,18.16,12.68,12.68,0,0,1-18,0,12.81,12.81,0,0,1,0-18.16l9-9V5l-.84.83-6,6a9.58,9.58,0,1,0,13.55,0ZM20.44,9a1.68,1.68,0,1,1,1.67-1.67A1.68,1.68,0,0,1,20.44,9Z" fill="#ee4c2c"></path></g></svg> <span>Pytorch</span></div> <div class="cursor-pointer flex items-center justify-center space-x-1 text-sm px-2 bg-white dark:bg-gray-950 hover:underline leading-none"><svg class="" width="0.9em" height="0.9em" viewBox="0 0 10 9" fill="currentColor" xmlns="http://www.w3.org/2000/svg"><path d="M1.39125 1.9725L0.0883333 0.669997L0.677917 0.0804138L8.9275 8.33041L8.33792 8.91958L6.95875 7.54041C6.22592 8.00523 5.37572 8.25138 4.50792 8.25C2.26125 8.25 0.392083 6.63333 0 4.5C0.179179 3.52946 0.667345 2.64287 1.39167 1.9725H1.39125ZM5.65667 6.23833L5.04667 5.62833C4.81335 5.73996 4.55116 5.77647 4.29622 5.73282C4.04129 5.68918 3.80617 5.56752 3.62328 5.38463C3.44039 5.20175 3.31874 4.96663 3.27509 4.71169C3.23144 4.45676 3.26795 4.19456 3.37958 3.96125L2.76958 3.35125C2.50447 3.75187 2.38595 4.2318 2.4341 4.70978C2.48225 5.18777 2.6941 5.63442 3.0338 5.97411C3.37349 6.31381 3.82015 6.52567 4.29813 6.57382C4.77611 6.62197 5.25605 6.50345 5.65667 6.23833ZM2.83042 1.06666C3.35 0.862497 3.91625 0.749997 4.50792 0.749997C6.75458 0.749997 8.62375 2.36666 9.01583 4.5C8.88816 5.19404 8.60119 5.84899 8.1775 6.41333L6.56917 4.805C6.61694 4.48317 6.58868 4.15463 6.48664 3.84569C6.3846 3.53675 6.21162 3.256 5.98156 3.02594C5.7515 2.79588 5.47075 2.6229 5.16181 2.52086C4.85287 2.41882 4.52433 2.39056 4.2025 2.43833L2.83042 1.06708V1.06666Z" fill="currentColor"></path></svg> <span>Hide Pytorch content</span></div></div> <div class="framework-content"><p data-svelte-h="svelte-wptfed">The simplest way to try out your finetuned model for inference is to use it in a <a 
href="/docs/transformers/v4.34.0/en/main_classes/pipelines#transformers.pipeline">pipeline()</a>. Instantiate a <code>pipeline</code> for image segmentation with your model, and pass your image to it:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> pipeline <span class="hljs-meta">&gt;&gt;&gt; </span>segmenter = pipeline(<span class="hljs-string">"image-segmentation"</span>, model=<span class="hljs-string">"my_awesome_seg_model"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>segmenter(image) [{<span class="hljs-string">'score'</span>: <span class="hljs-literal">None</span>, <span class="hljs-string">'label'</span>: <span class="hljs-string">'wall'</span>, <span class="hljs-string">'mask'</span>: &lt;PIL.Image.Image image mode=L size=640x427 at <span class="hljs-number">0x7FD5B2062690</span>&gt;}, {<span class="hljs-string">'score'</span>: <span class="hljs-literal">None</span>, <span class="hljs-string">'label'</span>: <span class="hljs-string">'sky'</span>, <span class="hljs-string">'mask'</span>: &lt;PIL.Image.Image image mode=L size=640x427 at <span class="hljs-number">0x7FD5B2062A50</span>&gt;}, {<span class="hljs-string">'score'</span>: <span class="hljs-literal">None</span>, <span class="hljs-string">'label'</span>: <span class="hljs-string">'floor'</span>, <span class="hljs-string">'mask'</span>: &lt;PIL.Image.Image image mode=L size=640x427 at <span class="hljs-number">0x7FD5B2062B50</span>&gt;}, {<span class="hljs-string">'score'</span>: <span class="hljs-literal">None</span>, <span class="hljs-string">'label'</span>: <span class="hljs-string">'ceiling'</span>, <span class="hljs-string">'mask'</span>: &lt;PIL.Image.Image image mode=L size=640x427 at <span class="hljs-number">0x7FD5B2062A10</span>&gt;}, {<span class="hljs-string">'score'</span>: <span class="hljs-literal">None</span>, <span class="hljs-string">'label'</span>: <span class="hljs-string">'bed '</span>, <span class="hljs-string">'mask'</span>: &lt;PIL.Image.Image image mode=L size=640x427 at <span class="hljs-number">0x7FD5B2062E90</span>&gt;}, {<span class="hljs-string">'score'</span>: <span class="hljs-literal">None</span>, <span class="hljs-string">'label'</span>: <span class="hljs-string">'windowpane'</span>, <span class="hljs-string">'mask'</span>: 
&lt;PIL.Image.Image image mode=L size=640x427 at <span class="hljs-number">0x7FD5B2062390</span>&gt;}, {<span class="hljs-string">'score'</span>: <span class="hljs-literal">None</span>, <span class="hljs-string">'label'</span>: <span class="hljs-string">'cabinet'</span>, <span class="hljs-string">'mask'</span>: &lt;PIL.Image.Image image mode=L size=640x427 at <span class="hljs-number">0x7FD5B2062550</span>&gt;}, {<span class="hljs-string">'score'</span>: <span class="hljs-literal">None</span>, <span class="hljs-string">'label'</span>: <span class="hljs-string">'chair'</span>, <span class="hljs-string">'mask'</span>: &lt;PIL.Image.Image image mode=L size=640x427 at <span class="hljs-number">0x7FD5B2062D90</span>&gt;}, {<span class="hljs-string">'score'</span>: <span class="hljs-literal">None</span>, <span class="hljs-string">'label'</span>: <span class="hljs-string">'armchair'</span>, <span class="hljs-string">'mask'</span>: &lt;PIL.Image.Image image mode=L size=640x427 at <span class="hljs-number">0x7FD5B2062E10</span>&gt;}]</pre></div> <p data-svelte-h="svelte-58ednc">You can also manually replicate the results of the <code>pipeline</code> if you’d like. Process the image with an image processor and place the <code>pixel_values</code> on a GPU:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>device = torch.device(<span class="hljs-string">"cuda"</span> <span class="hljs-keyword">if</span> torch.cuda.is_available() <span class="hljs-keyword">else</span> <span class="hljs-string">"cpu"</span>) <span class="hljs-comment"># use GPU if available, otherwise use a CPU</span> <span class="hljs-meta">&gt;&gt;&gt; </span>encoding = image_processor(image, return_tensors=<span class="hljs-string">"pt"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>pixel_values = encoding.pixel_values.to(device)</pre></div> <p data-svelte-h="svelte-oyplyw">Pass your input to the model and return the <code>logits</code>:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" 
preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model(pixel_values=pixel_values) <span class="hljs-meta">&gt;&gt;&gt; </span>logits = outputs.logits.cpu()</pre></div> <p data-svelte-h="svelte-tk6q3q">Next, rescale the logits to the original image size:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>upsampled_logits = nn.functional.interpolate( <span class="hljs-meta">... </span> logits, <span class="hljs-meta">... </span> size=image.size[::-<span class="hljs-number">1</span>], <span class="hljs-meta">... </span> mode=<span class="hljs-string">"bilinear"</span>, <span class="hljs-meta">... </span> align_corners=<span class="hljs-literal">False</span>, <span class="hljs-meta">... 
</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>pred_seg = upsampled_logits.argmax(dim=<span class="hljs-number">1</span>)[<span class="hljs-number">0</span>]</pre></div></div></div> </div> <div class="space-y-10 py-6 2xl:py-8 2xl:-mx-4"> <div class="border border-gray-200 rounded-xl px-4 relative"><div class="flex h-[22px] mt-[-12.5px] justify-between leading-none"><div class="flex px-1 items-center space-x-1 bg-white dark:bg-gray-950"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="0.94em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 274"><path d="M145.726 42.065v42.07l72.861 42.07v-42.07l-72.86-42.07zM0 84.135v42.07l36.43 21.03V105.17L0 84.135zm109.291 21.035l-36.43 21.034v126.2l36.43 21.035v-84.135l36.435 21.035v-42.07l-36.435-21.034V105.17z" fill="#E55B2D"></path><path d="M145.726 42.065L36.43 105.17v42.065l72.861-42.065v42.065l36.435-21.03v-84.14zM255.022 63.1l-36.435 21.035v42.07l36.435-21.035V63.1zm-72.865 84.135l-36.43 21.035v42.07l36.43-21.036v-42.07zm-36.43 63.104l-36.436-21.035v84.135l36.435-21.035V210.34z" fill="#ED8E24"></path><path d="M145.726 0L0 84.135l36.43 21.035l109.296-63.105l72.861 42.07L255.022 63.1L145.726 0zm0 126.204l-36.435 21.03l36.435 21.036l36.43-21.035l-36.43-21.03z" fill="#F8BF3C"></path></svg> <span>TensorFlow</span></div> <div class="cursor-pointer flex items-center justify-center space-x-1 text-sm px-2 bg-white dark:bg-gray-950 hover:underline leading-none"><svg class="" width="0.9em" height="0.9em" viewBox="0 0 10 9" fill="currentColor" xmlns="http://www.w3.org/2000/svg"><path d="M1.39125 1.9725L0.0883333 0.669997L0.677917 0.0804138L8.9275 8.33041L8.33792 8.91958L6.95875 7.54041C6.22592 8.00523 5.37572 8.25138 4.50792 8.25C2.26125 8.25 0.392083 6.63333 0 4.5C0.179179 3.52946 0.667345 2.64287 1.39167 1.9725H1.39125ZM5.65667 6.23833L5.04667 5.62833C4.81335 5.73996 4.55116 5.77647 4.29622 5.73282C4.04129 5.68918 3.80617 5.56752 3.62328 5.38463C3.44039 5.20175 3.31874 4.96663 3.27509 4.71169C3.23144 4.45676 3.26795 4.19456 3.37958 3.96125L2.76958 3.35125C2.50447 3.75187 2.38595 4.2318 2.4341 4.70978C2.48225 5.18777 2.6941 5.63442 3.0338 5.97411C3.37349 6.31381 3.82015 6.52567 4.29813 6.57382C4.77611 6.62197 5.25605 6.50345 5.65667 6.23833ZM2.83042 1.06666C3.35 0.862497 3.91625 0.749997 4.50792 0.749997C6.75458 0.749997 8.62375 2.36666 9.01583 4.5C8.88816 5.19404 8.60119 5.84899 8.1775 6.41333L6.56917 4.805C6.61694 4.48317 6.58868 4.15463 6.48664 3.84569C6.3846 3.53675 6.21162 3.256 5.98156 3.02594C5.7515 2.79588 5.47075 2.6229 5.16181 2.52086C4.85287 2.41882 4.52433 2.39056 4.2025 2.43833L2.83042 1.06708V1.06666Z" fill="currentColor"></path></svg> <span>Hide TensorFlow content</span></div></div> <div class="framework-content"><p data-svelte-h="svelte-1r2yss0">Load an image processor to preprocess the image and return the input as TensorFlow tensors:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" 
transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoImageProcessor <span class="hljs-meta">&gt;&gt;&gt; </span>image_processor = AutoImageProcessor.from_pretrained(<span class="hljs-string">"MariaK/scene_segmentation"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = image_processor(image, return_tensors=<span class="hljs-string">"tf"</span>)</pre></div> <p data-svelte-h="svelte-oyplyw">Pass your input to the model and return the <code>logits</code>:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> TFAutoModelForSemanticSegmentation <span class="hljs-meta">&gt;&gt;&gt; </span>model = TFAutoModelForSemanticSegmentation.from_pretrained(<span class="hljs-string">"MariaK/scene_segmentation"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>logits = model(**inputs).logits</pre></div> <p data-svelte-h="svelte-enih9r">Next, rescale the logits to the original image size and apply argmax on the class dimension:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" 
transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>logits = tf.transpose(logits, [<span class="hljs-number">0</span>, <span class="hljs-number">2</span>, <span class="hljs-number">3</span>, <span class="hljs-number">1</span>]) <span class="hljs-meta">&gt;&gt;&gt; </span>upsampled_logits = tf.image.resize( <span class="hljs-meta">... </span> logits, <span class="hljs-meta">... </span> <span class="hljs-comment"># We reverse the shape of `image` because `image.size` returns width and height.</span> <span class="hljs-meta">... </span> image.size[::-<span class="hljs-number">1</span>], <span class="hljs-meta">... </span>) <span class="hljs-meta">&gt;&gt;&gt; </span>pred_seg = tf.math.argmax(upsampled_logits, axis=-<span class="hljs-number">1</span>)[<span class="hljs-number">0</span>]</pre></div></div></div> </div> <p data-svelte-h="svelte-1ng77tx">To visualize the results, load the <a href="https://github.com/tensorflow/models/blob/3f1ca33afe3c1631b733ea7e40c294273b9e406d/research/deeplab/utils/get_dataset_colormap.py#L51" rel="nofollow">dataset color palette</a> as <code>ade_palette()</code> that maps each class to their RGB values. Then you can combine and plot your image and the predicted segmentation map:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> matplotlib.pyplot <span class="hljs-keyword">as</span> plt <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np <span class="hljs-meta">&gt;&gt;&gt; </span>color_seg = np.zeros((pred_seg.shape[<span class="hljs-number">0</span>], pred_seg.shape[<span class="hljs-number">1</span>], <span class="hljs-number">3</span>), dtype=np.uint8) <span class="hljs-meta">&gt;&gt;&gt; </span>palette = np.array(ade_palette()) <span class="hljs-meta">&gt;&gt;&gt; </span><span 
class="hljs-keyword">for</span> label, color <span class="hljs-keyword">in</span> <span class="hljs-built_in">enumerate</span>(palette): <span class="hljs-meta">... </span> color_seg[pred_seg == label, :] = color <span class="hljs-meta">&gt;&gt;&gt; </span>color_seg = color_seg[..., ::-<span class="hljs-number">1</span>] <span class="hljs-comment"># convert to BGR</span> <span class="hljs-meta">&gt;&gt;&gt; </span>img = np.array(image) * <span class="hljs-number">0.5</span> + color_seg * <span class="hljs-number">0.5</span> <span class="hljs-comment"># plot the image with the segmentation map</span> <span class="hljs-meta">&gt;&gt;&gt; </span>img = img.astype(np.uint8) <span class="hljs-meta">&gt;&gt;&gt; </span>plt.figure(figsize=(<span class="hljs-number">15</span>, <span class="hljs-number">10</span>)) <span class="hljs-meta">&gt;&gt;&gt; </span>plt.imshow(img) <span class="hljs-meta">&gt;&gt;&gt; </span>plt.show()</pre></div> <div class="flex justify-center" data-svelte-h="svelte-nsecok"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/semantic-seg-preds.png" alt="Image of bedroom overlaid with segmentation map"></div> <p></p> <div id="svelte-announcer" aria-live="assertive" aria-atomic="true" style="position: absolute; left: 0px; top: 0px; clip: rect(0px, 0px, 0px, 0px); clip-path: inset(50%); overflow: hidden; white-space: nowrap; width: 1px; height: 1px;"></div></div> <div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/tasks/image_classification" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>Image classification</a> <a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/tasks/video_classification" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">Video classification<span class="ml-2 translate-y-px">→</span></a></div></div></div> <div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" data-props="{&quot;chapter&quot;:{&quot;title&quot;:&quot;Semantic segmentation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;semantic-segmentation&quot;,&quot;url&quot;:&quot;#semantic-segmentation&quot;,&quot;sections&quot;:[{&quot;title&quot;:&quot;Load SceneParse150 dataset&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;load-sceneparse150-dataset&quot;,&quot;url&quot;:&quot;#load-sceneparse150-dataset&quot;},{&quot;title&quot;:&quot;Preprocess&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;preprocess&quot;,&quot;url&quot;:&quot;#preprocess&quot;},{&quot;title&quot;:&quot;Evaluate&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;evaluate&quot;,&quot;url&quot;:&quot;#evaluate&quot;},{&quot;title&quot;:&quot;Train&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;train&quot;,&quot;url&quot;:&quot;#train&quot;},{&quot;title&quot;:&quot;Inference&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;inference&quot;,&quot;url&quot;:&quot;#inference&quot;}]}}" data-target="SubSideMenu"><nav class="hidden h-screen w-[270px] flex-none flex-col space-y-3 overflow-y-auto break-words border-l pt-24 pl-6 pr-10 pb-16 text-sm lg:flex 2xl:w-[305px]"><a href="#semantic-segmentation" class=" text-gray-400 transform hover:translate-x-px hover:text-gray-700 
dark:hover:text-gray-300" id="nav-semantic-segmentation"><wbr>Semantic segmentation</a> <a href="#load-sceneparse150-dataset" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-load-sceneparse150-dataset"><wbr>Load <wbr>Scene<wbr>Parse150 dataset</a> <a href="#preprocess" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-preprocess"><wbr>Preprocess</a> <a href="#evaluate" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-evaluate"><wbr>Evaluate</a> <a href="#train" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-train"><wbr>Train</a> <a href="#inference" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-inference"><wbr>Inference</a> </nav></div></div></div> <div id="doc-footer"></div></main> </div> <script> import("/front/build/kube-5e23f38/index.js"); window.moonSha = "kube-5e23f38/"; window.hubConfig = JSON.parse(`{"features":{"signupDisabled":false},"sshGitUrl":"git@hf.co","moonHttpUrl":"https://huggingface.co","captchaApiKey":"bd5f2066-93dc-4bdd-a64b-a24646ca3859","stripePublicKey":"pk_live_x2tdjFXBCvXo2FFmMybezpeM00J6gPCAAc","environment":"production","userAgent":"HuggingFace (production)"}`); </script> <!-- Stripe --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://js.stripe.com/v3/"; script.async = true; document.head.appendChild(script); } </script> <!-- Google analytics v4 --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL"; script.async = true; document.head.appendChild(script); window.dataLayer = window.dataLayer || []; function gtag() { if (window.dataLayer !== undefined) { window.dataLayer.push(arguments); } } gtag("js", new Date()); gtag("config", "G-8Q63TH4CSL", { page_path: "/docs/transformers/v4.34.0/en/tasks/semantic_segmentation" }); /// ^ See https://developers.google.com/analytics/devguides/collection/gtagjs/pages gtag("consent", "default", { ad_storage: "denied", analytics_storage: "denied" }); /// ^ See https://developers.google.com/tag-platform/gtagjs/reference#consent /// TODO: ask the user for their consent and update this with gtag('consent', 'update') } </script> <!-- Google Analytics v3 (deprecated) --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { (function (i, s, o, g, r, a, m) { i["GoogleAnalyticsObject"] = r; (i[r] = i[r] || function () { (i[r].q = i[r].q || []).push(arguments); }), (i[r].l = 1 * new Date()); (a = s.createElement(o)), (m = s.getElementsByTagName(o)[0]); a.async = 1; a.src = g; m.parentNode.insertBefore(a, m); })(window, document, "script", "https://www.google-analytics.com/analytics.js", "ganalytics"); ganalytics("create", "UA-83738774-2", "auto"); ganalytics("send", "pageview", "/docs/transformers/v4.34.0/en/tasks/semantic_segmentation"); } </script> <iframe name="__privateStripeMetricsController3640" frameborder="0" allowtransparency="true" scrolling="no" role="presentation" allow="payment *" 
src="https://js.stripe.com/v3/m-outer-27c67c0d52761104439bb051c7856ab1.html#url=https%3A%2F%2Fhuggingface.co%2Fdocs%2Ftransformers%2Fv4.34.0%2Fen%2Ftasks%2Fsemantic_segmentation&amp;title=Semantic%20segmentation&amp;referrer=&amp;muid=NA&amp;sid=NA&amp;version=6&amp;preview=false" aria-hidden="true" tabindex="-1" style="border: none !important; margin: 0px !important; padding: 0px !important; width: 1px !important; min-width: 100% !important; overflow: hidden !important; display: block !important; visibility: hidden !important; position: fixed !important; height: 1px !important; pointer-events: none !important; user-select: none !important;"></iframe></body></html>
2023-10-05T13:33:46.516Z
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/transformerxl
The documentation page MODEL\_DOC/TRANSFORMERXL doesn’t exist in v4.34.0, but exists on the main version. Click [here](/docs/transformers/main/en/model_doc/transformerxl) to redirect to the main version of the documentation.
2023-10-05T13:33:46.789Z
Token classification
https://huggingface.co/docs/transformers/v4.34.0/en/tasks/token_classification
# Token classification

Token classification assigns a label to individual tokens in a sentence. One of the most common token classification tasks is Named Entity Recognition (NER). NER attempts to find a label for each entity in a sentence, such as a person, location, or organization.

This guide will show you how to:

1. Finetune [DistilBERT](https://huggingface.co/distilbert-base-uncased) on the [WNUT 17](https://huggingface.co/datasets/wnut_17) dataset to detect new entities.
2. Use your finetuned model for inference.

The task illustrated in this tutorial is supported by the following model architectures:

[ALBERT](../model_doc/albert), [BERT](../model_doc/bert), [BigBird](../model_doc/big_bird), [BioGpt](../model_doc/biogpt), [BLOOM](../model_doc/bloom), [BROS](../model_doc/bros), [CamemBERT](../model_doc/camembert), [CANINE](../model_doc/canine), [ConvBERT](../model_doc/convbert), [Data2VecText](../model_doc/data2vec-text), [DeBERTa](../model_doc/deberta), [DeBERTa-v2](../model_doc/deberta-v2), [DistilBERT](../model_doc/distilbert), [ELECTRA](../model_doc/electra), [ERNIE](../model_doc/ernie), [ErnieM](../model_doc/ernie_m), [ESM](../model_doc/esm), [Falcon](../model_doc/falcon), [FlauBERT](../model_doc/flaubert), [FNet](../model_doc/fnet), [Funnel Transformer](../model_doc/funnel), [GPT-Sw3](../model_doc/gpt-sw3), [OpenAI GPT-2](../model_doc/gpt2), [GPTBigCode](../model_doc/gpt_bigcode), [GPT Neo](../model_doc/gpt_neo), [GPT NeoX](../model_doc/gpt_neox), [I-BERT](../model_doc/ibert), [LayoutLM](../model_doc/layoutlm), [LayoutLMv2](../model_doc/layoutlmv2), [LayoutLMv3](../model_doc/layoutlmv3), [LiLT](../model_doc/lilt), [Longformer](../model_doc/longformer), [LUKE](../model_doc/luke), [MarkupLM](../model_doc/markuplm), [MEGA](../model_doc/mega), [Megatron-BERT](../model_doc/megatron-bert), [MobileBERT](../model_doc/mobilebert), [MPNet](../model_doc/mpnet), [MPT](../model_doc/mpt), [MRA](../model_doc/mra), [Nezha](../model_doc/nezha), [Nyströmformer](../model_doc/nystromformer), [QDQBert](../model_doc/qdqbert), [RemBERT](../model_doc/rembert), [RoBERTa](../model_doc/roberta), [RoBERTa-PreLayerNorm](../model_doc/roberta-prelayernorm), [RoCBert](../model_doc/roc_bert), [RoFormer](../model_doc/roformer), [SqueezeBERT](../model_doc/squeezebert), [XLM](../model_doc/xlm), [XLM-RoBERTa](../model_doc/xlm-roberta), [XLM-RoBERTa-XL](../model_doc/xlm-roberta-xl), [XLNet](../model_doc/xlnet), [X-MOD](../model_doc/xmod), [YOSO](../model_doc/yoso)

Before you begin, make sure you have all the necessary libraries installed:

```
pip install transformers datasets evaluate seqeval
```

We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:

```
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```

## Load WNUT 17 dataset

Start by loading the WNUT 17 dataset from the 🤗 Datasets library:

```
>>> from datasets import load_dataset

>>> wnut = load_dataset("wnut_17")
```

Then take a look at an example:

```
>>> wnut["train"][0]
{'id': '0',
 'ner_tags': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 7, 8, 8, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0],
 'tokens': ['@paulwalk', 'It', "'s", 'the', 'view', 'from', 'where', 'I', "'m", 'living', 'for', 'two', 'weeks', '.', 'Empire', 'State', 'Building', '=', 'ESB', '.', 'Pretty', 'bad', 'storm', 'here', 'last', 'evening', '.']
}
```

Each number in `ner_tags` represents an entity.
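To see which token carries which tag, you can pair the two lists up. A quick sketch, using only the `wnut` dataset loaded above:

```
>>> example = wnut["train"][0]
>>> for token, tag in zip(example["tokens"], example["ner_tags"]):
...     print(token, tag)  # e.g. 'Empire' is tagged 7, 'State' and 'Building' are tagged 8
```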
Convert the numbers to their label names to find out what the entities are:

```
>>> label_list = wnut["train"].features["ner_tags"].feature.names
>>> label_list
[
    "O",
    "B-corporation",
    "I-corporation",
    "B-creative-work",
    "I-creative-work",
    "B-group",
    "I-group",
    "B-location",
    "I-location",
    "B-person",
    "I-person",
    "B-product",
    "I-product",
]
```

The letter that prefixes each `ner_tag` indicates the position of the token within the entity:

- `B-` indicates the beginning of an entity.
- `I-` indicates a token is contained inside the same entity (for example, the `State` token is part of an entity like `Empire State Building`).
- `O` indicates the token doesn’t correspond to any entity.

## Preprocess

The next step is to load a DistilBERT tokenizer to preprocess the `tokens` field:

```
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
```

As you saw in the example `tokens` field above, it looks like the input has already been tokenized. But the input actually hasn’t been tokenized yet, and you’ll need to set `is_split_into_words=True` to tokenize the words into subwords. For example:

```
>>> example = wnut["train"][0]
>>> tokenized_input = tokenizer(example["tokens"], is_split_into_words=True)
>>> tokens = tokenizer.convert_ids_to_tokens(tokenized_input["input_ids"])
>>> tokens
['[CLS]', '@', 'paul', '##walk', 'it', "'", 's', 'the', 'view', 'from', 'where', 'i', "'", 'm', 'living', 'for', 'two', 'weeks', '.', 'empire', 'state', 'building', '=', 'es', '##b', '.', 'pretty', 'bad', 'storm', 'here', 'last', 'evening', '.', '[SEP]']
```

However, this adds the special tokens `[CLS]` and `[SEP]`, and the subword tokenization creates a mismatch between the inputs and labels. A single word corresponding to a single label may now be split into two subwords. You’ll need to realign the tokens and labels by:

1. Mapping all tokens to their corresponding word with the [`word_ids`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.BatchEncoding.word_ids) method.
2. Assigning the label `-100` to the special tokens `[CLS]` and `[SEP]` so they’re ignored by the PyTorch loss function (see [CrossEntropyLoss](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html)).
3. Only labeling the first token of a given word. Assign `-100` to other subtokens from the same word.

Here is how you can create a function to realign the tokens and labels, and truncate sequences to be no longer than DistilBERT’s maximum input length:

```
>>> def tokenize_and_align_labels(examples):
...     tokenized_inputs = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True)
...
...     labels = []
...     for i, label in enumerate(examples["ner_tags"]):
...         word_ids = tokenized_inputs.word_ids(batch_index=i)  # map each token to its word
...         previous_word_idx = None
...         label_ids = []
...         for word_idx in word_ids:
...             if word_idx is None:  # special tokens ([CLS], [SEP]) get -100
...                 label_ids.append(-100)
...             elif word_idx != previous_word_idx:  # only label the first token of a given word
...                 label_ids.append(label[word_idx])
...             else:  # other subtokens of the same word also get -100
...                 label_ids.append(-100)
...             previous_word_idx = word_idx
...         labels.append(label_ids)
...
...     tokenized_inputs["labels"] = labels
...     return tokenized_inputs
```

To apply the preprocessing function over the entire dataset, use the 🤗 Datasets [map](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.map) function.
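Before mapping over the full dataset, it can help to sanity-check the alignment on a couple of examples. A minimal sketch, assuming the `tokenizer` and `tokenize_and_align_labels` defined above:

```
>>> sample = wnut["train"][:2]  # a small batch of two examples, as a dict of columns
>>> aligned = tokenize_and_align_labels(sample)
>>> for input_id, label in zip(aligned["input_ids"][0], aligned["labels"][0]):
...     print(tokenizer.convert_ids_to_tokens(input_id), label)
```

Special tokens and trailing subwords should show up with `-100`, while the first subword of each word keeps the original tag.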
You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once:

```
>>> tokenized_wnut = wnut.map(tokenize_and_align_labels, batched=True)
```

Now create a batch of examples using [DataCollatorForTokenClassification](/docs/transformers/v4.34.0/en/main_classes/data_collator#transformers.DataCollatorForTokenClassification). It’s more efficient to _dynamically pad_ the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length.

For PyTorch:

```
>>> from transformers import DataCollatorForTokenClassification

>>> data_collator = DataCollatorForTokenClassification(tokenizer=tokenizer)
```

For TensorFlow:

```
>>> from transformers import DataCollatorForTokenClassification

>>> data_collator = DataCollatorForTokenClassification(tokenizer=tokenizer, return_tensors="tf")
```

## Evaluate

Including a metric during training is often helpful for evaluating your model’s performance. You can quickly load an evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load the [seqeval](https://huggingface.co/spaces/evaluate-metric/seqeval) framework (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric). Seqeval produces several scores: precision, recall, F1, and accuracy.

```
>>> import evaluate

>>> seqeval = evaluate.load("seqeval")
```

Get the NER labels first, and then create a function that passes your true predictions and true labels to [compute](https://huggingface.co/docs/evaluate/v0.4.0/en/package_reference/main_classes#evaluate.EvaluationModule.compute) to calculate the scores:

```
>>> import numpy as np

>>> labels = [label_list[i] for i in example["ner_tags"]]

>>> def compute_metrics(p):
...     predictions, labels = p
...     predictions = np.argmax(predictions, axis=2)
...
...     true_predictions = [
...         [label_list[p] for (p, l) in zip(prediction, label) if l != -100]
...         for prediction, label in zip(predictions, labels)
...     ]
...     true_labels = [
...         [label_list[l] for (p, l) in zip(prediction, label) if l != -100]
...         for prediction, label in zip(predictions, labels)
...     ]
...
...     results = seqeval.compute(predictions=true_predictions, references=true_labels)
...     return {
...         "precision": results["overall_precision"],
...         "recall": results["overall_recall"],
...         "f1": results["overall_f1"],
...         "accuracy": results["overall_accuracy"],
...     }
```

Your `compute_metrics` function is ready to go now, and you’ll return to it when you set up your training.

## Train

Before you start training your model, create a map of the expected ids to their labels with `id2label` and `label2id`:

```
>>> id2label = {
...     0: "O",
...     1: "B-corporation",
...     2: "I-corporation",
...     3: "B-creative-work",
...     4: "I-creative-work",
...     5: "B-group",
...     6: "I-group",
...     7: "B-location",
...     8: "I-location",
...     9: "B-person",
...     10: "I-person",
...     11: "B-product",
...     12: "I-product",
... }
>>> label2id = {
...     "O": 0,
...     "B-corporation": 1,
...     "I-corporation": 2,
...     "B-creative-work": 3,
...     "I-creative-work": 4,
...     "B-group": 5,
...     "I-group": 6,
...     "B-location": 7,
...     "I-location": 8,
...     "B-person": 9,
...     "I-person": 10,
...     "B-product": 11,
...     "I-product": 12,
... }
```

If you aren’t familiar with finetuning a model with the [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer), take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!
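As an aside, rather than typing the `id2label` and `label2id` mappings out by hand, you could also derive them from the `label_list` you loaded earlier. A small sketch that should produce the same dictionaries:

```
>>> id2label = dict(enumerate(label_list))  # {0: "O", 1: "B-corporation", ...}
>>> label2id = {label: idx for idx, label in id2label.items()}
```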
You’re ready to start training your model now! Load DistilBERT with [AutoModelForTokenClassification](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoModelForTokenClassification) along with the number of expected labels, and the label mappings:

```
>>> from transformers import AutoModelForTokenClassification, TrainingArguments, Trainer

>>> model = AutoModelForTokenClassification.from_pretrained(
...     "distilbert-base-uncased", num_labels=13, id2label=id2label, label2id=label2id
... )
```

At this point, only three steps remain:

1. Define your training hyperparameters in [TrainingArguments](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.TrainingArguments). The only required parameter is `output_dir`, which specifies where to save your model. You’ll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) will evaluate the seqeval scores and save the training checkpoint.
2. Pass the training arguments to [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) along with the model, dataset, tokenizer, data collator, and `compute_metrics` function.
3. Call [train()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.train) to finetune your model.

```
>>> training_args = TrainingArguments(
...     output_dir="my_awesome_wnut_model",
...     learning_rate=2e-5,
...     per_device_train_batch_size=16,
...     per_device_eval_batch_size=16,
...     num_train_epochs=2,
...     weight_decay=0.01,
...     evaluation_strategy="epoch",
...     save_strategy="epoch",
...     load_best_model_at_end=True,
...     push_to_hub=True,
... )

>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=tokenized_wnut["train"],
...     eval_dataset=tokenized_wnut["test"],
...     tokenizer=tokenizer,
...     data_collator=data_collator,
...     compute_metrics=compute_metrics,
... )

>>> trainer.train()
```

Once training is completed, share your model on the Hub with the [push_to_hub()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.push_to_hub) method so everyone can use your model:

```
>>> trainer.push_to_hub()
```

If you aren’t familiar with finetuning a model with Keras, take a look at the basic tutorial [here](../training#train-a-tensorflow-model-with-keras)!

To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:

```
>>> from transformers import create_optimizer

>>> batch_size = 16
>>> num_train_epochs = 3
>>> num_train_steps = (len(tokenized_wnut["train"]) // batch_size) * num_train_epochs
>>> optimizer, lr_schedule = create_optimizer(
...     init_lr=2e-5,
...     num_train_steps=num_train_steps,
...     weight_decay_rate=0.01,
...     num_warmup_steps=0,
... )
```

Then you can load DistilBERT with [TFAutoModelForTokenClassification](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.TFAutoModelForTokenClassification) along with the number of expected labels, and the label mappings:

```
>>> from transformers import TFAutoModelForTokenClassification

>>> model = TFAutoModelForTokenClassification.from_pretrained(
...     "distilbert-base-uncased", num_labels=13, id2label=id2label, label2id=label2id
... )
```

Convert your datasets to the `tf.data.Dataset` format with [prepare_tf_dataset()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel.prepare_tf_dataset):

```
>>> tf_train_set = model.prepare_tf_dataset(
...     tokenized_wnut["train"],
...     shuffle=True,
...     batch_size=16,
...     collate_fn=data_collator,
... )

>>> tf_validation_set = model.prepare_tf_dataset(
...     tokenized_wnut["validation"],
...     shuffle=False,
...     batch_size=16,
...     collate_fn=data_collator,
... )
```

Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method). Note that Transformers models all have a default task-relevant loss function, so you don’t need to specify one unless you want to:

```
>>> import tensorflow as tf

>>> model.compile(optimizer=optimizer)
```

The last two things to set up before you start training are to compute the seqeval scores from the predictions, and to provide a way to push your model to the Hub. Both are done by using [Keras callbacks](../main_classes/keras_callbacks).

Pass your `compute_metrics` function to [KerasMetricCallback](/docs/transformers/v4.34.0/en/main_classes/keras_callbacks#transformers.KerasMetricCallback):

```
>>> from transformers.keras_callbacks import KerasMetricCallback

>>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set)
```

Specify where to push your model and tokenizer in the [PushToHubCallback](/docs/transformers/v4.34.0/en/main_classes/keras_callbacks#transformers.PushToHubCallback):

```
>>> from transformers.keras_callbacks import PushToHubCallback

>>> push_to_hub_callback = PushToHubCallback(
...     output_dir="my_awesome_wnut_model",
...     tokenizer=tokenizer,
... )
```

Then bundle your callbacks together:

```
>>> callbacks = [metric_callback, push_to_hub_callback]
```

Finally, you’re ready to start training your model! Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training and validation datasets, the number of epochs, and your callbacks to finetune the model:

```
>>> model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=3, callbacks=callbacks)
```

Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!

For a more in-depth example of how to finetune a model for token classification, take a look at the corresponding [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification.ipynb) or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb).

## Inference

Great, now that you’ve finetuned a model, you can use it for inference!

Grab some text you’d like to run inference on:

```
>>> text = "The Golden State Warriors are an American professional basketball team based in San Francisco."
```

The simplest way to try out your finetuned model for inference is to use it in a [pipeline()](/docs/transformers/v4.34.0/en/main_classes/pipelines#transformers.pipeline).
Instantiate a `pipeline` for NER with your model, and pass your text to it:

```
>>> from transformers import pipeline

>>> classifier = pipeline("ner", model="stevhliu/my_awesome_wnut_model")
>>> classifier(text)
[{'entity': 'B-location', 'score': 0.42658573, 'index': 2, 'word': 'golden', 'start': 4, 'end': 10},
 {'entity': 'I-location', 'score': 0.35856336, 'index': 3, 'word': 'state', 'start': 11, 'end': 16},
 {'entity': 'B-group', 'score': 0.3064001, 'index': 4, 'word': 'warriors', 'start': 17, 'end': 25},
 {'entity': 'B-location', 'score': 0.65523505, 'index': 13, 'word': 'san', 'start': 80, 'end': 83},
 {'entity': 'B-location', 'score': 0.4668663, 'index': 14, 'word': 'francisco', 'start': 84, 'end': 93}]
```

You can also manually replicate the results of the `pipeline` if you’d like.

For PyTorch, tokenize the text and return PyTorch tensors:

```
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_wnut_model")
>>> inputs = tokenizer(text, return_tensors="pt")
```

Pass your inputs to the model and return the `logits`:

```
>>> import torch
>>> from transformers import AutoModelForTokenClassification

>>> model = AutoModelForTokenClassification.from_pretrained("stevhliu/my_awesome_wnut_model")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
```

Get the class with the highest probability, and use the model’s `id2label` mapping to convert it to a text label:

```
>>> predictions = torch.argmax(logits, dim=2)
>>> predicted_token_class = [model.config.id2label[t.item()] for t in predictions[0]]
>>> predicted_token_class
['O', 'O', 'B-location', 'I-location', 'B-group', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-location', 'B-location', 'O', 'O']
```

For TensorFlow, tokenize the text and return TensorFlow tensors:

```
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_wnut_model")
>>> inputs = tokenizer(text, return_tensors="tf")
```

Pass your inputs to the model and return the `logits`:

```
>>> from transformers import TFAutoModelForTokenClassification

>>> model = TFAutoModelForTokenClassification.from_pretrained("stevhliu/my_awesome_wnut_model")
>>> logits = model(**inputs).logits
```

Get the class with the highest probability, and use the model’s `id2label` mapping to convert it to a text label:

```
>>> import tensorflow as tf

>>> predicted_token_class_ids = tf.math.argmax(logits, axis=-1)
>>> predicted_token_class = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()]
>>> predicted_token_class
['O', 'O', 'B-location', 'I-location', 'B-group', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-location', 'B-location', 'O', 'O']
```
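The pipeline can also group the token-level predictions into whole entities for you. A short sketch using the `aggregation_strategy` parameter of the token-classification pipeline, where `"simple"` merges consecutive tokens that belong to the same entity:

```
>>> from transformers import pipeline

>>> classifier = pipeline("ner", model="stevhliu/my_awesome_wnut_model", aggregation_strategy="simple")
>>> classifier(text)  # returns one entry per entity group (e.g. a single location span) instead of one per token
```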
model_doc/wavlm&quot;},{&quot;title&quot;:&quot;Whisper&quot;,&quot;id&quot;:&quot;model_doc/whisper&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/whisper&quot;},{&quot;title&quot;:&quot;XLS-R&quot;,&quot;id&quot;:&quot;model_doc/xls_r&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xls_r&quot;},{&quot;title&quot;:&quot;XLSR-Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/xlsr_wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2&quot;}]},{&quot;title&quot;:&quot;Multimodal models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALIGN&quot;,&quot;id&quot;:&quot;model_doc/align&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/align&quot;},{&quot;title&quot;:&quot;AltCLIP&quot;,&quot;id&quot;:&quot;model_doc/altclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/altclip&quot;},{&quot;title&quot;:&quot;BLIP&quot;,&quot;id&quot;:&quot;model_doc/blip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip&quot;},{&quot;title&quot;:&quot;BLIP-2&quot;,&quot;id&quot;:&quot;model_doc/blip-2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip-2&quot;},{&quot;title&quot;:&quot;BridgeTower&quot;,&quot;id&quot;:&quot;model_doc/bridgetower&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bridgetower&quot;},{&quot;title&quot;:&quot;BROS&quot;,&quot;id&quot;:&quot;model_doc/bros&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bros&quot;},{&quot;title&quot;:&quot;Chinese-CLIP&quot;,&quot;id&quot;:&quot;model_doc/chinese_clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/chinese_clip&quot;},{&quot;title&quot;:&quot;CLIP&quot;,&quot;id&quot;:&quot;model_doc/clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clip&quot;},{&quot;title&quot;:&quot;CLIPSeg&quot;,&quot;id&quot;:&quot;model_doc/clipseg&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clipseg&quot;},{&quot;title&quot;:&quot;Data2Vec&quot;,&quot;id&quot;:&quot;model_doc/data2vec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/data2vec&quot;},{&quot;title&quot;:&quot;DePlot&quot;,&quot;id&quot;:&quot;model_doc/deplot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deplot&quot;},{&quot;title&quot;:&quot;Donut&quot;,&quot;id&quot;:&quot;model_doc/donut&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/donut&quot;},{&quot;title&quot;:&quot;FLAVA&quot;,&quot;id&quot;:&quot;model_doc/flava&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flava&quot;},{&quot;title&quot;:&quot;GIT&quot;,&quot;id&quot;:&quot;model_doc/git&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/git&quot;},{&quot;title&quot;:&quot;GroupViT&quot;,&quot;id&quot;:&quot;model_doc/groupvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/groupvit&quot;},{&quot;title&quot;:&quot;IDEFICS&quot;,&quot;id&quot;:&quot;model_doc/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/idefics&quot;},{&quot;title&quot;:&quot;InstructBLIP&quot;,&quot;id&quot;:&quot;model_doc/instructblip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/instructblip&quot;},{&quot;title&quot;:&quot;LayoutLM&quot;,&quot;id&quot;:&quot;model_doc/layoutlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlm&quot;},{&quot;title&quot;:&quot;LayoutLMV2&quot;,&quot;i
d&quot;:&quot;model_doc/layoutlmv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv2&quot;},{&quot;title&quot;:&quot;LayoutLMV3&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv3&quot;},{&quot;title&quot;:&quot;LayoutXLM&quot;,&quot;id&quot;:&quot;model_doc/layoutxlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutxlm&quot;},{&quot;title&quot;:&quot;LiLT&quot;,&quot;id&quot;:&quot;model_doc/lilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lilt&quot;},{&quot;title&quot;:&quot;LXMERT&quot;,&quot;id&quot;:&quot;model_doc/lxmert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lxmert&quot;},{&quot;title&quot;:&quot;MatCha&quot;,&quot;id&quot;:&quot;model_doc/matcha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/matcha&quot;},{&quot;title&quot;:&quot;MGP-STR&quot;,&quot;id&quot;:&quot;model_doc/mgp-str&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mgp-str&quot;},{&quot;title&quot;:&quot;Nougat&quot;,&quot;id&quot;:&quot;model_doc/nougat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nougat&quot;},{&quot;title&quot;:&quot;OneFormer&quot;,&quot;id&quot;:&quot;model_doc/oneformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/oneformer&quot;},{&quot;title&quot;:&quot;OWL-ViT&quot;,&quot;id&quot;:&quot;model_doc/owlvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/owlvit&quot;},{&quot;title&quot;:&quot;Perceiver&quot;,&quot;id&quot;:&quot;model_doc/perceiver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/perceiver&quot;},{&quot;title&quot;:&quot;Pix2Struct&quot;,&quot;id&quot;:&quot;model_doc/pix2struct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pix2struct&quot;},{&quot;title&quot;:&quot;Segment Anything&quot;,&quot;id&quot;:&quot;model_doc/sam&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sam&quot;},{&quot;title&quot;:&quot;Speech Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/speech-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder&quot;},{&quot;title&quot;:&quot;TAPAS&quot;,&quot;id&quot;:&quot;model_doc/tapas&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapas&quot;},{&quot;title&quot;:&quot;TrOCR&quot;,&quot;id&quot;:&quot;model_doc/trocr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trocr&quot;},{&quot;title&quot;:&quot;TVLT&quot;,&quot;id&quot;:&quot;model_doc/tvlt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tvlt&quot;},{&quot;title&quot;:&quot;ViLT&quot;,&quot;id&quot;:&quot;model_doc/vilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vilt&quot;},{&quot;title&quot;:&quot;Vision Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/vision-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder&quot;},{&quot;title&quot;:&quot;Vision Text Dual 
Encoder&quot;,&quot;id&quot;:&quot;model_doc/vision-text-dual-encoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder&quot;},{&quot;title&quot;:&quot;VisualBERT&quot;,&quot;id&quot;:&quot;model_doc/visual_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/visual_bert&quot;},{&quot;title&quot;:&quot;X-CLIP&quot;,&quot;id&quot;:&quot;model_doc/xclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xclip&quot;}]},{&quot;title&quot;:&quot;Reinforcement learning models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Decision Transformer&quot;,&quot;id&quot;:&quot;model_doc/decision_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/decision_transformer&quot;},{&quot;title&quot;:&quot;Trajectory Transformer&quot;,&quot;id&quot;:&quot;model_doc/trajectory_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer&quot;}]},{&quot;title&quot;:&quot;Time series models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Autoformer&quot;,&quot;id&quot;:&quot;model_doc/autoformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/autoformer&quot;},{&quot;title&quot;:&quot;Informer&quot;,&quot;id&quot;:&quot;model_doc/informer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/informer&quot;},{&quot;title&quot;:&quot;Time Series Transformer&quot;,&quot;id&quot;:&quot;model_doc/time_series_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/time_series_transformer&quot;}]},{&quot;title&quot;:&quot;Graph models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Graphormer&quot;,&quot;id&quot;:&quot;model_doc/graphormer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/graphormer&quot;}]}]},{&quot;title&quot;:&quot;Internal Helpers&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Custom Layers and Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/modeling_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/modeling_utils&quot;},{&quot;title&quot;:&quot;Utilities for pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/pipelines_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/pipelines_utils&quot;},{&quot;title&quot;:&quot;Utilities for Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/tokenization_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/tokenization_utils&quot;},{&quot;title&quot;:&quot;Utilities for Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/trainer_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/trainer_utils&quot;},{&quot;title&quot;:&quot;Utilities for Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/generation_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/generation_utils&quot;},{&quot;title&quot;:&quot;Utilities for Image Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/image_processing_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/image_processing_utils&quot;},{&quot;title&quot;:&quot;Utilities for Audio 
processing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/audio_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/audio_utils&quot;},{&quot;title&quot;:&quot;General Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/file_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/file_utils&quot;},{&quot;title&quot;:&quot;Utilities for Time Series&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/time_series_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/time_series_utils&quot;}]}]}],&quot;chapterId&quot;:&quot;tasks/token_classification&quot;,&quot;docType&quot;:&quot;docs&quot;,&quot;isLoggedIn&quot;:false,&quot;lang&quot;:&quot;en&quot;,&quot;langs&quot;:[&quot;de&quot;,&quot;en&quot;,&quot;es&quot;,&quot;fr&quot;,&quot;it&quot;,&quot;ko&quot;,&quot;pt&quot;,&quot;zh&quot;],&quot;library&quot;:&quot;transformers&quot;,&quot;theme&quot;:&quot;light&quot;,&quot;version&quot;:&quot;v4.34.0&quot;,&quot;versions&quot;:[{&quot;version&quot;:&quot;main&quot;},{&quot;version&quot;:&quot;v4.34.0&quot;},{&quot;version&quot;:&quot;v4.33.3&quot;},{&quot;version&quot;:&quot;v4.33.2&quot;},{&quot;version&quot;:&quot;v4.33.0&quot;},{&quot;version&quot;:&quot;v4.32.1&quot;},{&quot;version&quot;:&quot;v4.32.0&quot;},{&quot;version&quot;:&quot;v4.31.0&quot;},{&quot;version&quot;:&quot;v4.30.0&quot;},{&quot;version&quot;:&quot;v4.29.1&quot;},{&quot;version&quot;:&quot;v4.29.0&quot;},{&quot;version&quot;:&quot;v4.28.1&quot;},{&quot;version&quot;:&quot;v4.28.0&quot;},{&quot;version&quot;:&quot;v4.27.2&quot;},{&quot;version&quot;:&quot;v4.27.1&quot;},{&quot;version&quot;:&quot;v4.27.0&quot;},{&quot;version&quot;:&quot;v4.26.1&quot;},{&quot;version&quot;:&quot;v4.26.0&quot;},{&quot;version&quot;:&quot;v4.25.1&quot;},{&quot;version&quot;:&quot;v4.24.0&quot;},{&quot;version&quot;:&quot;v4.23.1&quot;},{&quot;version&quot;:&quot;v4.23.0&quot;},{&quot;version&quot;:&quot;v4.22.2&quot;},{&quot;version&quot;:&quot;v4.22.1&quot;},{&quot;version&quot;:&quot;v4.22.0&quot;},{&quot;version&quot;:&quot;v4.21.3&quot;},{&quot;version&quot;:&quot;v4.21.2&quot;},{&quot;version&quot;:&quot;v4.21.1&quot;},{&quot;version&quot;:&quot;v4.21.0&quot;},{&quot;version&quot;:&quot;v4.20.1&quot;},{&quot;version&quot;:&quot;v4.20.0&quot;},{&quot;version&quot;:&quot;v4.19.4&quot;},{&quot;version&quot;:&quot;v4.19.3&quot;},{&quot;version&quot;:&quot;v4.19.2&quot;},{&quot;version&quot;:&quot;v4.19.0&quot;},{&quot;version&quot;:&quot;v4.18.0&quot;},{&quot;version&quot;:&quot;v4.17.0&quot;},{&quot;version&quot;:&quot;v4.16.2&quot;},{&quot;version&quot;:&quot;v4.16.1&quot;},{&quot;version&quot;:&quot;v4.16.0&quot;},{&quot;version&quot;:&quot;v4.15.0&quot;},{&quot;version&quot;:&quot;v4.14.1&quot;},{&quot;version&quot;:&quot;v4.13.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.5&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.4&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.10.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;
:&quot;v4.10.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.10.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.0.0&quot;},{&quot;version&quot;:
&quot;doc-builder-html&quot;}],&quot;title&quot;:&quot;Token classification&quot;}" data-target="SideMenu"> <div class="z-2 w-full flex-none lg:block lg:h-screen lg:w-[270px] 2xl:w-[300px] false"><div class="shadow-alternate flex h-16 w-full items-center rounded-b-xl border-b bg-white text-lg leading-tight lg:hidden"><div class="flex flex-1 cursor-pointer flex-col justify-center self-stretch pl-6"><p class="text-sm text-gray-400 first-letter:capitalize">Transformers documentation</p> <div class="flex items-center"><p class="font-semibold">Token classification</p> <svg class="text-xl false" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></div></div> <button class="hover:shadow-alternate group ml-auto mr-6 inline-flex flex-none cursor-pointer rounded-xl border p-2"><svg class="text-gray-500 group-hover:text-gray-700" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg></button></div> <div class="hidden h-32 flex-col justify-between border-r border-b bg-white bg-gradient-to-r p-4 lg:flex from-orange-50 to-white dark:from-gray-900 dark:to-gray-950"><div class="relative "><button class=" " type="button"><h1 class="flex items-center text-lg font-bold leading-tight first-letter:capitalize"><div class="mr-1.5 h-1.5 w-1.5 rounded-full bg-orange-500 flex-none"></div> Transformers <span><svg class="opacity-70 " xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></span></h1> </button> </div> <button class="shadow-alternate flex w-full items-center rounded-full border bg-white px-2 py-1 text-left text-sm text-gray-400 ring-indigo-200 hover:bg-indigo-50 hover:ring-2 dark:border-gray-700 dark:ring-yellow-600 dark:hover:bg-gray-900 dark:hover:text-yellow-500"><svg class="flex-none mr-1.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg> <div>Search documentation</div> <span class="ml-auto rounded border border-gray-200 bg-gray-100 px-0.5 text-xs dark:border-gray-800 dark:bg-gray-800"><kbd class="font-sans">⌘K</kbd></span></button> <div class="flex items-center"><select class="form-input mr-1 !mt-0 !w-20 rounded !border border-gray-200 p-1 text-xs uppercase dark:!text-gray-400"><option value="0">main</option><option value="1">v4.34.0</option><option value="2">v4.33.3</option><option value="3">v4.32.1</option><option value="4">v4.31.0</option><option value="5">v4.30.0</option><option value="6">v4.29.1</option><option value="7">v4.28.1</option><option value="8">v4.27.2</option><option value="9">v4.26.1</option><option 
value="10">v4.25.1</option><option value="11">v4.24.0</option><option value="12">v4.23.1</option><option value="13">v4.22.2</option><option value="14">v4.21.3</option><option value="15">v4.20.1</option><option value="16">v4.19.4</option><option value="17">v4.18.0</option><option value="18">v4.17.0</option><option value="19">v4.16.2</option><option value="20">v4.15.0</option><option value="21">v4.14.1</option><option value="22">v4.13.0</option><option value="23">v4.12.5</option><option value="24">v4.11.3</option><option value="25">v4.10.1</option><option value="26">v4.9.2</option><option value="27">v4.8.2</option><option value="28">v4.7.0</option><option value="29">v4.6.0</option><option value="30">v4.5.1</option><option value="31">v4.4.2</option><option value="32">v4.3.3</option><option value="33">v4.2.2</option><option value="34">v4.1.1</option><option value="35">v4.0.1</option><option value="36">v3.5.1</option><option value="37">v3.4.0</option><option value="38">v3.3.1</option><option value="39">v3.2.0</option><option value="40">v3.1.0</option><option value="41">v3.0.2</option><option value="42">v2.11.0</option><option value="43">v2.10.0</option><option value="44">v2.9.1</option><option value="45">v2.8.0</option><option value="46">v2.7.0</option><option value="47">v2.6.0</option><option value="48">v2.5.1</option><option value="49">v2.4.1</option><option value="50">v2.3.0</option><option value="51">v2.2.2</option><option value="52">v2.1.1</option><option value="53">v2.0.0</option><option value="54">v1.2.0</option><option value="55">v1.1.0</option><option value="56">v1.0.0</option><option value="57">doc-builder-html</option></select> <select class="form-input mr-1 rounded border-gray-200 p-1 text-xs dark:!text-gray-400 !w-12 !mt-0 !border"><option value="de">DE</option><option value="en">EN</option><option value="es">ES</option><option value="fr">FR</option><option value="it">IT</option><option value="ko">KO</option><option value="pt">PT</option><option value="zh">ZH</option></select> <div class="relative inline-block"><button class="rounded-full border border-gray-100 py-1 pl-2 pr-0.5 flex items-center text-sm text-gray-500 bg-white hover:bg-yellow-50 hover:border-yellow-200 dark:hover:bg-gray-800 dark:hover:border-gray-950 " type="button"><svg class="mr-1.5 text-yellow-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24" fill="currentColor"><path d="M6.05 4.14l-.39-.39a.993.993 0 0 0-1.4 0l-.01.01a.984.984 0 0 0 0 1.4l.39.39c.39.39 1.01.39 1.4 0l.01-.01a.984.984 0 0 0 0-1.4zM3.01 10.5H1.99c-.55 0-.99.44-.99.99v.01c0 .55.44.99.99.99H3c.56.01 1-.43 1-.98v-.01c0-.56-.44-1-.99-1zm9-9.95H12c-.56 0-1 .44-1 .99v.96c0 .55.44.99.99.99H12c.56.01 1-.43 1-.98v-.97c0-.55-.44-.99-.99-.99zm7.74 3.21c-.39-.39-1.02-.39-1.41-.01l-.39.39a.984.984 0 0 0 0 1.4l.01.01c.39.39 1.02.39 1.4 0l.39-.39a.984.984 0 0 0 0-1.4zm-1.81 15.1l.39.39a.996.996 0 1 0 1.41-1.41l-.39-.39a.993.993 0 0 0-1.4 0c-.4.4-.4 1.02-.01 1.41zM20 11.49v.01c0 .55.44.99.99.99H22c.55 0 .99-.44.99-.99v-.01c0-.55-.44-.99-.99-.99h-1.01c-.55 0-.99.44-.99.99zM12 5.5c-3.31 0-6 2.69-6 6s2.69 6 6 6s6-2.69 6-6s-2.69-6-6-6zm-.01 16.95H12c.55 0 .99-.44.99-.99v-.96c0-.55-.44-.99-.99-.99h-.01c-.55 0-.99.44-.99.99v.96c0 .55.44.99.99.99zm-7.74-3.21c.39.39 1.02.39 1.41 0l.39-.39a.993.993 0 0 0 0-1.4l-.01-.01a.996.996 0 0 0-1.41 0l-.39.39c-.38.4-.38 1.02.01 1.41z"></path></svg> </button> </div> <a 
href="https://github.com/huggingface/transformers" class="group ml-auto text-xs text-gray-500 hover:text-gray-700 hover:underline dark:hover:text-gray-300"><svg class="inline-block text-gray-500 group-hover:text-gray-700 dark:group-hover:text-gray-300 mr-1.5 -mt-1 w-4 h-4" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1.03em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 250"><path d="M128.001 0C57.317 0 0 57.307 0 128.001c0 56.554 36.676 104.535 87.535 121.46c6.397 1.185 8.746-2.777 8.746-6.158c0-3.052-.12-13.135-.174-23.83c-35.61 7.742-43.124-15.103-43.124-15.103c-5.823-14.795-14.213-18.73-14.213-18.73c-11.613-7.944.876-7.78.876-7.78c12.853.902 19.621 13.19 19.621 13.19c11.417 19.568 29.945 13.911 37.249 10.64c1.149-8.272 4.466-13.92 8.127-17.116c-28.431-3.236-58.318-14.212-58.318-63.258c0-13.975 5-25.394 13.188-34.358c-1.329-3.224-5.71-16.242 1.24-33.874c0 0 10.749-3.44 35.21 13.121c10.21-2.836 21.16-4.258 32.038-4.307c10.878.049 21.837 1.47 32.066 4.307c24.431-16.56 35.165-13.12 35.165-13.12c6.967 17.63 2.584 30.65 1.255 33.873c8.207 8.964 13.173 20.383 13.173 34.358c0 49.163-29.944 59.988-58.447 63.157c4.591 3.972 8.682 11.762 8.682 23.704c0 17.126-.148 30.91-.148 35.126c0 3.407 2.304 7.398 8.792 6.14C219.37 232.5 256 184.537 256 128.002C256 57.307 198.691 0 128.001 0zm-80.06 182.34c-.282.636-1.283.827-2.194.39c-.929-.417-1.45-1.284-1.15-1.922c.276-.655 1.279-.838 2.205-.399c.93.418 1.46 1.293 1.139 1.931zm6.296 5.618c-.61.566-1.804.303-2.614-.591c-.837-.892-.994-2.086-.375-2.66c.63-.566 1.787-.301 2.626.591c.838.903 1 2.088.363 2.66zm4.32 7.188c-.785.545-2.067.034-2.86-1.104c-.784-1.138-.784-2.503.017-3.05c.795-.547 2.058-.055 2.861 1.075c.782 1.157.782 2.522-.019 3.08zm7.304 8.325c-.701.774-2.196.566-3.29-.49c-1.119-1.032-1.43-2.496-.726-3.27c.71-.776 2.213-.558 3.315.49c1.11 1.03 1.45 2.505.701 3.27zm9.442 2.81c-.31 1.003-1.75 1.459-3.199 1.033c-1.448-.439-2.395-1.613-2.103-2.626c.301-1.01 1.747-1.484 3.207-1.028c1.446.436 2.396 1.602 2.095 2.622zm10.744 1.193c.036 1.055-1.193 1.93-2.715 1.95c-1.53.034-2.769-.82-2.786-1.86c0-1.065 1.202-1.932 2.733-1.958c1.522-.03 2.768.818 2.768 1.868zm10.555-.405c.182 1.03-.875 2.088-2.387 2.37c-1.485.271-2.861-.365-3.05-1.386c-.184-1.056.893-2.114 2.376-2.387c1.514-.263 2.868.356 3.061 1.403z" fill="currentColor"></path></svg> 112,792</a></div></div> <nav class="top-32 hidden lg:flex absolute bottom-0 left-0 w-full flex-col overflow-y-auto border-r px-4 pt-3 pb-16 text-[0.95rem] lg:w-[270px] 2xl:w-[300px]"> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Get started</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/index">🤗 Transformers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/quicktour">Quick tour </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 
first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/installation">Installation </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Tutorials</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pipeline_tutorial">Run inference with pipelines </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/autoclass_tutorial">Write portable code with AutoClass </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/preprocessing">Preprocess data </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/training">Fine-tune a pretrained model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/run_scripts">Train with a script </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/accelerate">Set up distributed training with 🤗 Accelerate </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/peft">Load and train adapters with 🤗 PEFT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/model_sharing">Share your model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/transformers_agents">Agents </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/llm_tutorial">Generation with LLMs </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Task Guides</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 
hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Natural Language Processing</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/tasks/sequence_classification">Text classification </a><a data-sveltekit-reload="" class="rounded-xl bg-gradient-to-br from-black to-gray-900 py-1 pr-2 pl-2 text-white first:mt-1 last:mb-4 dark:from-gray-800 dark:to-gray-900 ml-4" href="/docs/transformers/v4.34.0/en/tasks/token_classification">Token classification </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/tasks/question_answering">Question answering </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/tasks/language_modeling">Causal language modeling </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/tasks/masked_language_modeling">Masked language modeling </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/tasks/translation">Translation </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/tasks/summarization">Summarization </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/tasks/multiple_choice">Multiple choice </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Computer Vision</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 
dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Generation</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Prompting</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Developer guides</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/fast_tokenizers">Use fast tokenizers from 🤗 Tokenizers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/multilingual">Run inference with multilingual models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/create_a_model">Use model-specific APIs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/custom_models">Share a custom model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/chat_templating">Templates for chat models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/sagemaker">Run training on Amazon SageMaker </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/serialization">Export to ONNX </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tflite">Export to TFLite </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/torchscript">Export to TorchScript </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/benchmarks">Benchmarks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 
hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/notebooks">Notebooks with examples </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/community">Community resources </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/custom_tools">Custom Tools and Prompts </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/troubleshooting">Troubleshoot </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Performance and scalability</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/performance">Overview </a><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Efficient training techniques</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_gpu_one">Methods and tools for efficient training on a single GPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_gpu_many">Multiple GPUs and parallelism </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_cpu">Efficient training on CPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_cpu_many">Distributed CPU training </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_tpu">Training on TPUs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_tpu_tf">Training on TPU with TensorFlow </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 
text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_special">Training on Specialized Hardware </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_hardware">Custom hardware for training </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/hpo_train">Hyperparameter Search using Trainer API </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Optimizing inference</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_cpu">Inference on CPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_gpu_one">Inference on one GPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_gpu_many">Inference on many GPUs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_special">Inference on Specialized Hardware </a> </div><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/big_models">Instantiating a big model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/debugging">Troubleshooting </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tf_xla">XLA Integration for TensorFlow Models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/perf_torch_compile">Optimize inference using `torch.compile()` </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Contribute</span> </span></span></div></div> <div class="flex flex-col"><a 
data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/contributing">How to contribute to transformers? </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/add_new_model">How to add a model to 🤗 Transformers? </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/add_tensorflow_model">How to convert a 🤗 Transformers model to TensorFlow? </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/add_new_pipeline">How to add a pipeline to 🤗 Transformers? </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/testing">Testing </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pr_checks">Checks on a Pull Request </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Conceptual guides</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/philosophy">Philosophy </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/glossary">Glossary </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/task_summary">What 🤗 Transformers can do </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tasks_explained">How 🤗 Transformers solve tasks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/model_summary">The Transformer model family </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tokenizer_summary">Summary of the tokenizers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" 
href="/docs/transformers/v4.34.0/en/attention">Attention mechanisms </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pad_truncation">Padding and truncation </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/bertology">BERTology </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/perplexity">Perplexity of fixed-length models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pipeline_webserver">Pipelines for webserver inference </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/model_memory_anatomy">Model training anatomy </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>API</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Main Classes</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/agent">Agents and Tools </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/model_doc/auto">Auto Classes </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/callback">Callbacks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/configuration">Configuration </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/data_collator">Data Collator </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" 
href="/docs/transformers/v4.34.0/en/main_classes/keras_callbacks">Keras callbacks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/logging">Logging </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/model">Models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/text_generation">Text Generation </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/onnx">ONNX </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/optimizer_schedules">Optimization </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/output">Model outputs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/pipelines">Pipelines </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/processors">Processors </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/quantization">Quantization </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/tokenizer">Tokenizer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/trainer">Trainer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/deepspeed">DeepSpeed Integration </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/feature_extractor">Feature Extractor </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/image_processor">Image Processor </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] 
font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Models</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Text models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Vision models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Reinforcement learning models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Time series models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Graph models</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Internal Helpers</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" 
href="/docs/transformers/v4.34.0/en/internal/modeling_utils">Custom Layers and Utilities </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/pipelines_utils">Utilities for pipelines </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/tokenization_utils">Utilities for Tokenizers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/trainer_utils">Utilities for Trainer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/generation_utils">Utilities for Generation </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/image_processing_utils">Utilities for Image Processors </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/audio_utils">Utilities for Audio processing </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/file_utils">General Utilities </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/time_series_utils">Utilities for Time Series </a> </div> </div></nav></div></div></div> <div class="z-1 min-w-0 flex-1"> <div class="px-6 pt-6 md:px-12 md:pt-16 md:pb-16"><div class="max-w-4xl mx-auto mb-10"><div class="relative overflow-hidden rounded-xl bg-gradient-to-br from-orange-300/10 py-5 px-4 ring-1 ring-orange-100/70 md:px-6 md:py-8"><img alt="Hugging Face's logo" class="absolute -right-6 -bottom-6 w-28 -rotate-45 md:hidden" src="/front/assets/huggingface_logo-noborder.svg"> <div class="mb-2 text-2xl font-bold dark:text-gray-200 md:mb-0">Join the Hugging Face community</div> <p class="mb-4 text-lg text-gray-400 dark:text-gray-300 md:mb-8">and get access to the augmented documentation experience </p> <div class="mb-8 hidden space-y-4 md:block xl:flex xl:space-y-0 xl:space-x-6"><div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-indigo-100 to-indigo-100/20 dark:to-indigo-100"><svg class="text-indigo-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 
12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Collaborate on models, datasets and Spaces </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-orange-100 to-orange-100/20 dark:to-orange-50"><svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" class="text-xl text-yellow-400" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M11 15H6l7-14v8h5l-7 14v-8z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Faster examples with accelerated inference </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-gray-500/10 to-gray-500/5"><svg class="text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M14.9804 3C14.9217 3.0002 14.8631 3.00555 14.8054 3.016C11.622 3.58252 8.76073 5.30669 6.77248 7.85653C4.78422 10.4064 3.80955 13.6016 4.03612 16.8271C4.26268 20.0525 5.67447 23.0801 7.99967 25.327C10.3249 27.5738 13.3991 28.8811 16.6304 28.997C16.7944 29.003 16.9584 28.997 17.1204 28.997C19.2193 28.9984 21.2877 28.4943 23.1507 27.5274C25.0137 26.5605 26.6164 25.1592 27.8234 23.442C27.9212 23.294 27.9783 23.1229 27.9889 22.9458C27.9995 22.7687 27.9633 22.592 27.884 22.4333C27.8046 22.2747 27.6848 22.1397 27.5367 22.0421C27.3887 21.9444 27.2175 21.8875 27.0404 21.877C25.0426 21.7017 23.112 21.0693 21.3976 20.0288C19.6832 18.9884 18.231 17.5676 17.1533 15.8764C16.0756 14.1852 15.4011 12.2688 15.1822 10.2754C14.9632 8.28193 15.2055 6.26484 15.8904 4.38C15.9486 4.22913 15.97 4.06652 15.9527 3.90572C15.9354 3.74492 15.8799 3.59059 15.7909 3.45557C15.7019 3.32055 15.5819 3.20877 15.4409 3.12952C15.2999 3.05028 15.142 3.00587 14.9804 3Z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Switch between documentation themes </div></div></div> <div class="flex items-center space-x-2.5"><a href="/join"><button class="rounded-lg bg-white bg-gradient-to-br from-gray-100/20 to-gray-200/60 py-1.5 px-5 font-semibold text-gray-700 shadow-sm ring-1 ring-gray-300/60 hover:to-gray-100/70 hover:ring-gray-300/30 active:shadow-inner">Sign Up</button></a> <p class="text-gray-500 dark:text-gray-300">to get started</p></div></div></div> <div class="prose-doc prose relative mx-auto max-w-4xl break-words"> <p></p> <h1 class="relative group"><a id="token-classification" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#token-classification"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" 
height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-18ca1ds">Token classification</span></h1> <div class="flex space-x-1 absolute z-10 right-0 top-0"> <div class="relative colab-dropdown "><button class=" " type="button"><img alt="Open In Colab" class="!m-0" src="https://colab.research.google.com/assets/colab-badge.svg"></button> </div> <div class="relative colab-dropdown "><button class=" " type="button"><img alt="Open In Studio Lab" class="!m-0" src="https://studiolab.sagemaker.aws/studiolab.svg"></button> </div></div> <iframe class="w-full xl:w-4/6 h-80" src="https://www.youtube-nocookie.com/embed/wVHdVlPScxA" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe> <p data-svelte-h="svelte-o0annf">Token classification assigns a label to individual tokens in a sentence. One of the most common token classification tasks is Named Entity Recognition (NER). NER attempts to find a label for each entity in a sentence, such as a person, location, or organization.</p> <p data-svelte-h="svelte-1aff4p7">This guide will show you how to:</p> <ol data-svelte-h="svelte-hnqwcf"><li>Finetune <a href="https://huggingface.co/distilbert-base-uncased" rel="nofollow">DistilBERT</a> on the <a href="https://huggingface.co/datasets/wnut_17" rel="nofollow">WNUT 17</a> dataset to detect new entities.</li> <li>Use your finetuned model for inference.</li></ol> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400">The task illustrated in this tutorial is supported by the following model architectures: <p data-svelte-h="svelte-1kspzor"><a href="../model_doc/albert">ALBERT</a>, <a href="../model_doc/bert">BERT</a>, <a href="../model_doc/big_bird">BigBird</a>, <a href="../model_doc/biogpt">BioGpt</a>, <a href="../model_doc/bloom">BLOOM</a>, <a href="../model_doc/bros">BROS</a>, <a href="../model_doc/camembert">CamemBERT</a>, <a href="../model_doc/canine">CANINE</a>, <a href="../model_doc/convbert">ConvBERT</a>, <a href="../model_doc/data2vec-text">Data2VecText</a>, <a href="../model_doc/deberta">DeBERTa</a>, <a href="../model_doc/deberta-v2">DeBERTa-v2</a>, <a href="../model_doc/distilbert">DistilBERT</a>, <a href="../model_doc/electra">ELECTRA</a>, <a href="../model_doc/ernie">ERNIE</a>, <a href="../model_doc/ernie_m">ErnieM</a>, <a href="../model_doc/esm">ESM</a>, <a href="../model_doc/falcon">Falcon</a>, <a href="../model_doc/flaubert">FlauBERT</a>, <a href="../model_doc/fnet">FNet</a>, <a href="../model_doc/funnel">Funnel Transformer</a>, <a href="../model_doc/gpt-sw3">GPT-Sw3</a>, <a href="../model_doc/gpt2">OpenAI GPT-2</a>, <a href="../model_doc/gpt_bigcode">GPTBigCode</a>, <a href="../model_doc/gpt_neo">GPT Neo</a>, <a 
href="../model_doc/gpt_neox">GPT NeoX</a>, <a href="../model_doc/ibert">I-BERT</a>, <a href="../model_doc/layoutlm">LayoutLM</a>, <a href="../model_doc/layoutlmv2">LayoutLMv2</a>, <a href="../model_doc/layoutlmv3">LayoutLMv3</a>, <a href="../model_doc/lilt">LiLT</a>, <a href="../model_doc/longformer">Longformer</a>, <a href="../model_doc/luke">LUKE</a>, <a href="../model_doc/markuplm">MarkupLM</a>, <a href="../model_doc/mega">MEGA</a>, <a href="../model_doc/megatron-bert">Megatron-BERT</a>, <a href="../model_doc/mobilebert">MobileBERT</a>, <a href="../model_doc/mpnet">MPNet</a>, <a href="../model_doc/mpt">MPT</a>, <a href="../model_doc/mra">MRA</a>, <a href="../model_doc/nezha">Nezha</a>, <a href="../model_doc/nystromformer">Nyströmformer</a>, <a href="../model_doc/qdqbert">QDQBert</a>, <a href="../model_doc/rembert">RemBERT</a>, <a href="../model_doc/roberta">RoBERTa</a>, <a href="../model_doc/roberta-prelayernorm">RoBERTa-PreLayerNorm</a>, <a href="../model_doc/roc_bert">RoCBert</a>, <a href="../model_doc/roformer">RoFormer</a>, <a href="../model_doc/squeezebert">SqueezeBERT</a>, <a href="../model_doc/xlm">XLM</a>, <a href="../model_doc/xlm-roberta">XLM-RoBERTa</a>, <a href="../model_doc/xlm-roberta-xl">XLM-RoBERTa-XL</a>, <a href="../model_doc/xlnet">XLNet</a>, <a href="../model_doc/xmod">X-MOD</a>, <a href="../model_doc/yoso">YOSO</a></p></div> <p data-svelte-h="svelte-1c9nexd">Before you begin, make sure you have all the necessary libraries installed:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class="">pip install transformers datasets evaluate seqeval</pre></div> <p data-svelte-h="svelte-k76o1m">We encourage you to login to your Hugging Face account so you can upload and share your model with the community. 
When prompted, enter your token to login:

```py
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```

## Load WNUT 17 dataset

Start by loading the WNUT 17 dataset from the 🤗 Datasets library:

```py
>>> from datasets import load_dataset

>>> wnut = load_dataset("wnut_17")
```

Then take a look at an example:

```py
>>> wnut["train"][0]
{'id': '0',
 'ner_tags': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 7, 8, 8, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0],
 'tokens': ['@paulwalk', 'It', "'s", 'the', 'view', 'from', 'where', 'I', "'m", 'living', 'for', 'two', 'weeks', '.', 'Empire', 'State', 'Building', '=', 'ESB', '.', 'Pretty', 'bad', 'storm', 'here', 'last', 'evening', '.']
}
```

Each number in `ner_tags` represents an entity. Convert the numbers to their label names to find out what the entities are:

```py
>>> label_list = wnut["train"].features["ner_tags"].feature.names
>>> label_list
[
    "O",
    "B-corporation",
    "I-corporation",
    "B-creative-work",
    "I-creative-work",
    "B-group",
    "I-group",
    "B-location",
    "I-location",
    "B-person",
    "I-person",
    "B-product",
    "I-product",
]
```

The letter that prefixes each `ner_tag` indicates the token position of the entity:

- `B-` indicates the beginning of an entity.
- `I-` indicates a token is contained inside the same entity (for example, the `State` token is a part of an entity like `Empire State Building`).
- `O` indicates the token doesn't correspond to any entity.
## Preprocess

The next step is to load a DistilBERT tokenizer to preprocess the `tokens` field:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
```

As you saw in the example `tokens` field above, it looks like the input has already been tokenized. But the input actually hasn't been tokenized yet and you'll need to set `is_split_into_words=True` to tokenize the words into subwords. For example:

```py
>>> example = wnut["train"][0]
>>> tokenized_input = tokenizer(example["tokens"], is_split_into_words=True)
>>> tokens = tokenizer.convert_ids_to_tokens(tokenized_input["input_ids"])
>>> tokens
['[CLS]', '@', 'paul', '##walk', 'it', "'", 's', 'the', 'view', 'from', 'where', 'i', "'", 'm', 'living', 'for', 'two', 'weeks', '.', 'empire', 'state', 'building', '=', 'es', '##b', '.', 'pretty', 'bad', 'storm', 'here', 'last', 'evening', '.', '[SEP]']
```

However, this adds some special tokens `[CLS]` and `[SEP]` and the subword tokenization creates a mismatch between the input and labels. A single word corresponding to a single label may now be split into two subwords.
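One way to see the mismatch concretely (a quick check, not required): the example has 27 words and therefore 27 labels, while the tokenized input above has 34 tokens. The `word_ids()` method of the tokenizer output maps every token back to the word it came from (`None` for the special tokens):

```py
>>> len(example["ner_tags"]), len(tokens)
(27, 34)
>>> tokenized_input.word_ids()
[None, 0, 0, 0, 1, 2, 2, 3, 4, 5, 6, 7, 8, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 18, 19, 20, 21, 22, 23, 24, 25, 26, None]
```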
You'll need to realign the tokens and labels by:

1. Mapping all tokens to their corresponding word with the [`word_ids`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.BatchEncoding.word_ids) method.
2. Assigning the label `-100` to the special tokens `[CLS]` and `[SEP]` so they're ignored by the PyTorch loss function (see [CrossEntropyLoss](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html)).
3. Only labeling the first token of a given word. Assign `-100` to other subtokens from the same word.

Here is how you can create a function to realign the tokens and labels, and truncate sequences to be no longer than DistilBERT's maximum input length:

```py
>>> def tokenize_and_align_labels(examples):
...     tokenized_inputs = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True)
...
...     labels = []
...     for i, label in enumerate(examples["ner_tags"]):
...         word_ids = tokenized_inputs.word_ids(batch_index=i)  # Map tokens to their respective word.
...         previous_word_idx = None
...         label_ids = []
...         for word_idx in word_ids:  # Set the special tokens to -100.
...             if word_idx is None:
...                 label_ids.append(-100)
...             elif word_idx != previous_word_idx:  # Only label the first token of a given word.
...                 label_ids.append(label[word_idx])
...             else:
...                 label_ids.append(-100)
...             previous_word_idx = word_idx
...         labels.append(label_ids)
...
...     tokenized_inputs["labels"] = labels
...     return tokenized_inputs
```

To apply the preprocessing function over the entire dataset, use 🤗 Datasets [map](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.map) function. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once:

```py
>>> tokenized_wnut = wnut.map(tokenize_and_align_labels, batched=True)
```
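As an optional sanity check, you can inspect the new `labels` column for the first example; given the tokens shown earlier, the special tokens and the non-first subwords should be `-100`, so it should look like this:

```py
>>> tokenized_wnut["train"][0]["labels"]
[-100, 0, -100, -100, 0, 0, -100, 0, 0, 0, 0, 0, 0, -100, 0, 0, 0, 0, 0, 7, 8, 8, 0, 7, -100, 0, 0, 0, 0, 0, 0, 0, 0, -100]
```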
Now create a batch of examples using [DataCollatorForTokenClassification](/docs/transformers/v4.34.0/en/main_classes/data_collator#transformers.DataCollatorForTokenClassification). It's more efficient to *dynamically pad* the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length.

**Pytorch**

```py
>>> from transformers import DataCollatorForTokenClassification

>>> data_collator = DataCollatorForTokenClassification(tokenizer=tokenizer)
```
d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> DataCollatorForTokenClassification <span class="hljs-meta">&gt;&gt;&gt; </span>data_collator = DataCollatorForTokenClassification(tokenizer=tokenizer, return_tensors=<span class="hljs-string">"tf"</span>)</pre></div></div></div> </div> <h2 class="relative group"><a id="evaluate" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#evaluate"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-sh8s6s">Evaluate</span></h2> <p data-svelte-h="svelte-434hvn">Including a metric during training is often helpful for evaluating your model’s performance. You can quickly load a evaluation method with the 🤗 <a href="https://huggingface.co/docs/evaluate/index" rel="nofollow">Evaluate</a> library. For this task, load the <a href="https://huggingface.co/spaces/evaluate-metric/seqeval" rel="nofollow">seqeval</a> framework (see the 🤗 Evaluate <a href="https://huggingface.co/docs/evaluate/a_quick_tour" rel="nofollow">quick tour</a> to learn more about how to load and compute a metric). 
## Evaluate

Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load the [seqeval](https://huggingface.co/spaces/evaluate-metric/seqeval) framework (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric). Seqeval actually produces several scores: precision, recall, F1, and accuracy.

```py
>>> import evaluate

>>> seqeval = evaluate.load("seqeval")
```

Get the NER labels first, and then create a function that passes your true predictions and true labels to [compute](https://huggingface.co/docs/evaluate/v0.4.0/en/package_reference/main_classes#evaluate.EvaluationModule.compute) to calculate the scores:

```py
>>> import numpy as np

>>> labels = [label_list[i] for i in example["ner_tags"]]


>>> def compute_metrics(p):
...     predictions, labels = p
...     predictions = np.argmax(predictions, axis=2)
...
...     true_predictions = [
...         [label_list[p] for (p, l) in zip(prediction, label) if l != -100]
...         for prediction, label in zip(predictions, labels)
...     ]
...     true_labels = [
...         [label_list[l] for (p, l) in zip(prediction, label) if l != -100]
...         for prediction, label in zip(predictions, labels)
...     ]
...
...     results = seqeval.compute(predictions=true_predictions, references=true_labels)
...     return {
...         "precision": results["overall_precision"],
...         "recall": results["overall_recall"],
...         "f1": results["overall_f1"],
...         "accuracy": results["overall_accuracy"],
...     }
```
</span> }</pre></div> <p data-svelte-h="svelte-183aynn">Your <code>compute_metrics</code> function is ready to go now, and you’ll return to it when you setup your training.</p> <h2 class="relative group"><a id="train" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#train"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-5arm0l">Train</span></h2> <p data-svelte-h="svelte-18c6io4">Before you start training your model, create a map of the expected ids to their labels with <code>id2label</code> and <code>label2id</code>:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>id2label = { <span class="hljs-meta">... </span> <span class="hljs-number">0</span>: <span class="hljs-string">"O"</span>, <span class="hljs-meta">... </span> <span class="hljs-number">1</span>: <span class="hljs-string">"B-corporation"</span>, <span class="hljs-meta">... </span> <span class="hljs-number">2</span>: <span class="hljs-string">"I-corporation"</span>, <span class="hljs-meta">... </span> <span class="hljs-number">3</span>: <span class="hljs-string">"B-creative-work"</span>, <span class="hljs-meta">... </span> <span class="hljs-number">4</span>: <span class="hljs-string">"I-creative-work"</span>, <span class="hljs-meta">... </span> <span class="hljs-number">5</span>: <span class="hljs-string">"B-group"</span>, <span class="hljs-meta">... </span> <span class="hljs-number">6</span>: <span class="hljs-string">"I-group"</span>, <span class="hljs-meta">... 
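If you want to sanity-check the function before training, you can call it on a tiny batch of made-up logits and labels. The example below is purely illustrative and not part of the original recipe; it assumes the label ids follow the same ordering as the `id2label` mapping defined in the next section, and uses `-100` to mark a position that should be ignored:

```python
>>> import numpy as np

>>> # One dummy sequence of length 3 over the 13 WNUT labels.
>>> dummy_logits = np.zeros((1, 3, 13))
>>> dummy_logits[0, 0, 0] = 1.0  # predict "O" for the first token
>>> dummy_logits[0, 1, 7] = 1.0  # predict "B-location" for the second token
>>> dummy_labels = np.array([[0, 7, -100]])  # the last position is ignored

>>> compute_metrics((dummy_logits, dummy_labels))
{'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'accuracy': 1.0}
```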
</span> <span class="hljs-number">7</span>: <span class="hljs-string">"B-location"</span>, <span class="hljs-meta">... </span> <span class="hljs-number">8</span>: <span class="hljs-string">"I-location"</span>, <span class="hljs-meta">... </span> <span class="hljs-number">9</span>: <span class="hljs-string">"B-person"</span>, <span class="hljs-meta">... </span> <span class="hljs-number">10</span>: <span class="hljs-string">"I-person"</span>, <span class="hljs-meta">... </span> <span class="hljs-number">11</span>: <span class="hljs-string">"B-product"</span>, <span class="hljs-meta">... </span> <span class="hljs-number">12</span>: <span class="hljs-string">"I-product"</span>, <span class="hljs-meta">... </span>} <span class="hljs-meta">&gt;&gt;&gt; </span>label2id = { <span class="hljs-meta">... </span> <span class="hljs-string">"O"</span>: <span class="hljs-number">0</span>, <span class="hljs-meta">... </span> <span class="hljs-string">"B-corporation"</span>: <span class="hljs-number">1</span>, <span class="hljs-meta">... </span> <span class="hljs-string">"I-corporation"</span>: <span class="hljs-number">2</span>, <span class="hljs-meta">... </span> <span class="hljs-string">"B-creative-work"</span>: <span class="hljs-number">3</span>, <span class="hljs-meta">... </span> <span class="hljs-string">"I-creative-work"</span>: <span class="hljs-number">4</span>, <span class="hljs-meta">... </span> <span class="hljs-string">"B-group"</span>: <span class="hljs-number">5</span>, <span class="hljs-meta">... </span> <span class="hljs-string">"I-group"</span>: <span class="hljs-number">6</span>, <span class="hljs-meta">... </span> <span class="hljs-string">"B-location"</span>: <span class="hljs-number">7</span>, <span class="hljs-meta">... </span> <span class="hljs-string">"I-location"</span>: <span class="hljs-number">8</span>, <span class="hljs-meta">... </span> <span class="hljs-string">"B-person"</span>: <span class="hljs-number">9</span>, <span class="hljs-meta">... </span> <span class="hljs-string">"I-person"</span>: <span class="hljs-number">10</span>, <span class="hljs-meta">... </span> <span class="hljs-string">"B-product"</span>: <span class="hljs-number">11</span>, <span class="hljs-meta">... </span> <span class="hljs-string">"I-product"</span>: <span class="hljs-number">12</span>, <span class="hljs-meta">... 
</span>}</pre></div> <div class="space-y-10 py-6 2xl:py-8 2xl:-mx-4"><div class="border border-gray-200 rounded-xl px-4 relative"><div class="flex h-[22px] mt-[-12.5px] justify-between leading-none"><div class="flex px-1 items-center space-x-1 bg-white dark:bg-gray-950"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><defs><clipPath id="a"><rect x="3.05" y="0.5" width="25.73" height="31" fill="none"></rect></clipPath></defs><g clip-path="url(#a)"><path d="M24.94,9.51a12.81,12.81,0,0,1,0,18.16,12.68,12.68,0,0,1-18,0,12.81,12.81,0,0,1,0-18.16l9-9V5l-.84.83-6,6a9.58,9.58,0,1,0,13.55,0ZM20.44,9a1.68,1.68,0,1,1,1.67-1.67A1.68,1.68,0,0,1,20.44,9Z" fill="#ee4c2c"></path></g></svg> <span>Pytorch</span></div> <div class="cursor-pointer flex items-center justify-center space-x-1 text-sm px-2 bg-white dark:bg-gray-950 hover:underline leading-none"><svg class="" width="0.9em" height="0.9em" viewBox="0 0 10 9" fill="currentColor" xmlns="http://www.w3.org/2000/svg"><path d="M1.39125 1.9725L0.0883333 0.669997L0.677917 0.0804138L8.9275 8.33041L8.33792 8.91958L6.95875 7.54041C6.22592 8.00523 5.37572 8.25138 4.50792 8.25C2.26125 8.25 0.392083 6.63333 0 4.5C0.179179 3.52946 0.667345 2.64287 1.39167 1.9725H1.39125ZM5.65667 6.23833L5.04667 5.62833C4.81335 5.73996 4.55116 5.77647 4.29622 5.73282C4.04129 5.68918 3.80617 5.56752 3.62328 5.38463C3.44039 5.20175 3.31874 4.96663 3.27509 4.71169C3.23144 4.45676 3.26795 4.19456 3.37958 3.96125L2.76958 3.35125C2.50447 3.75187 2.38595 4.2318 2.4341 4.70978C2.48225 5.18777 2.6941 5.63442 3.0338 5.97411C3.37349 6.31381 3.82015 6.52567 4.29813 6.57382C4.77611 6.62197 5.25605 6.50345 5.65667 6.23833ZM2.83042 1.06666C3.35 0.862497 3.91625 0.749997 4.50792 0.749997C6.75458 0.749997 8.62375 2.36666 9.01583 4.5C8.88816 5.19404 8.60119 5.84899 8.1775 6.41333L6.56917 4.805C6.61694 4.48317 6.58868 4.15463 6.48664 3.84569C6.3846 3.53675 6.21162 3.256 5.98156 3.02594C5.7515 2.79588 5.47075 2.6229 5.16181 2.52086C4.85287 2.41882 4.52433 2.39056 4.2025 2.43833L2.83042 1.06708V1.06666Z" fill="currentColor"></path></svg> <span>Hide Pytorch content</span></div></div> <div class="framework-content"><div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-1ufp0ay">If you aren’t familiar with finetuning a model with the <a href="/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer">Trainer</a>, take a look at the basic tutorial <a href="../training#train-with-pytorch-trainer">here</a>!</p></div> <p data-svelte-h="svelte-x5mfue">You’re ready to start training your model now! 
You're ready to start training your model now! Load DistilBERT with [AutoModelForTokenClassification](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoModelForTokenClassification) along with the number of expected labels, and the label mappings:

```python
>>> from transformers import AutoModelForTokenClassification, TrainingArguments, Trainer

>>> model = AutoModelForTokenClassification.from_pretrained(
...     "distilbert-base-uncased", num_labels=13, id2label=id2label, label2id=label2id
... )
```

At this point, only three steps remain:

1. Define your training hyperparameters in [TrainingArguments](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.TrainingArguments). The only required parameter is `output_dir`, which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) will evaluate the seqeval scores and save the training checkpoint.
2. Pass the training arguments to [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) along with the model, dataset, tokenizer, data collator, and `compute_metrics` function.
3. Call [train()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.train) to finetune your model.

```python
>>> training_args = TrainingArguments(
...     output_dir="my_awesome_wnut_model",
...     learning_rate=2e-5,
...     per_device_train_batch_size=16,
...     per_device_eval_batch_size=16,
...     num_train_epochs=2,
...     weight_decay=0.01,
...     evaluation_strategy="epoch",
...     save_strategy="epoch",
...     load_best_model_at_end=True,
...     push_to_hub=True,
... )

>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=tokenized_wnut["train"],
...     eval_dataset=tokenized_wnut["test"],
...     tokenizer=tokenizer,
...     data_collator=data_collator,
...     compute_metrics=compute_metrics,
... )

>>> trainer.train()
```

Once training is completed, share your model on the Hub with the [push_to_hub()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.push_to_hub) method so everyone can use your model:

```python
>>> trainer.push_to_hub()
```
**TensorFlow**

If you aren't familiar with finetuning a model with Keras, take a look at the basic tutorial [here](../training#train-a-tensorflow-model-with-keras)!

To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:

```python
>>> from transformers import create_optimizer

>>> batch_size = 16
>>> num_train_epochs = 3
>>> num_train_steps = (len(tokenized_wnut["train"]) // batch_size) * num_train_epochs
>>> optimizer, lr_schedule = create_optimizer(
...     init_lr=2e-5,
...     num_train_steps=num_train_steps,
...     weight_decay_rate=0.01,
...     num_warmup_steps=0,
... )
```

Then you can load DistilBERT with [TFAutoModelForTokenClassification](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.TFAutoModelForTokenClassification) along with the number of expected labels, and the label mappings:

```python
>>> from transformers import TFAutoModelForTokenClassification

>>> model = TFAutoModelForTokenClassification.from_pretrained(
...     "distilbert-base-uncased", num_labels=13, id2label=id2label, label2id=label2id
... )
```

Convert your datasets to the `tf.data.Dataset` format with [prepare_tf_dataset()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel.prepare_tf_dataset):

```python
>>> tf_train_set = model.prepare_tf_dataset(
...     tokenized_wnut["train"],
...     shuffle=True,
...     batch_size=16,
...     collate_fn=data_collator,
... )

>>> tf_validation_set = model.prepare_tf_dataset(
...     tokenized_wnut["validation"],
...     shuffle=False,
...     batch_size=16,
...     collate_fn=data_collator,
... )
```

Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method). Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to:

```python
>>> import tensorflow as tf

>>> model.compile(optimizer=optimizer)  # No loss argument!
```

The last two things to set up before you start training are to compute the seqeval scores from the predictions, and to provide a way to push your model to the Hub. Both are done by using [Keras callbacks](../main_classes/keras_callbacks).
Pass your `compute_metrics` function to [KerasMetricCallback](/docs/transformers/v4.34.0/en/main_classes/keras_callbacks#transformers.KerasMetricCallback):

```python
>>> from transformers.keras_callbacks import KerasMetricCallback

>>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set)
```

Specify where to push your model and tokenizer in the [PushToHubCallback](/docs/transformers/v4.34.0/en/main_classes/keras_callbacks#transformers.PushToHubCallback):

```python
>>> from transformers.keras_callbacks import PushToHubCallback

>>> push_to_hub_callback = PushToHubCallback(
...     output_dir="my_awesome_wnut_model",
...     tokenizer=tokenizer,
... )
```

Then bundle your callbacks together:

```python
>>> callbacks = [metric_callback, push_to_hub_callback]
```

Finally, you're ready to start training your model! Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training and validation datasets, the number of epochs, and your callbacks to finetune the model:

```python
>>> model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=3, callbacks=callbacks)
```

Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!
For a more in-depth example of how to finetune a model for token classification, take a look at the corresponding [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification.ipynb) or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb).

## Inference

Great, now that you've finetuned a model, you can use it for inference!

Grab some text you'd like to run inference on:

```python
>>> text = "The Golden State Warriors are an American professional basketball team based in San Francisco."
```

The simplest way to try out your finetuned model for inference is to use it in a [pipeline()](/docs/transformers/v4.34.0/en/main_classes/pipelines#transformers.pipeline). Instantiate a `pipeline` for NER with your model, and pass your text to it:

```python
>>> from transformers import pipeline

>>> classifier = pipeline("ner", model="stevhliu/my_awesome_wnut_model")
>>> classifier(text)
[{'entity': 'B-location', 'score': 0.42658573, 'index': 2, 'word': 'golden', 'start': 4, 'end': 10},
 {'entity': 'I-location', 'score': 0.35856336, 'index': 3, 'word': 'state', 'start': 11, 'end': 16},
 {'entity': 'B-group', 'score': 0.3064001, 'index': 4, 'word': 'warriors', 'start': 17, 'end': 25},
 {'entity': 'B-location', 'score': 0.65523505, 'index': 13, 'word': 'san', 'start': 80, 'end': 83},
 {'entity': 'B-location', 'score': 0.4668663, 'index': 14, 'word': 'francisco', 'start': 84, 'end': 93}]
```
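The output above is per subword token, which is why "golden" and "state" appear as separate entries. If you would rather get grouped entity spans, the token classification pipeline also accepts an `aggregation_strategy` argument (for example `"simple"`). The snippet below is an optional variation on the call above, not part of the original recipe, and its exact output depends on your trained model:

```python
>>> # Group adjacent tokens into entity spans; results carry an `entity_group` key instead of per-token entries.
>>> classifier = pipeline("ner", model="stevhliu/my_awesome_wnut_model", aggregation_strategy="simple")
>>> classifier(text)
```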
class="hljs-number">80</span>, <span class="hljs-string">'end'</span>: <span class="hljs-number">83</span>}, {<span class="hljs-string">'entity'</span>: <span class="hljs-string">'B-location'</span>, <span class="hljs-string">'score'</span>: <span class="hljs-number">0.4668663</span>, <span class="hljs-string">'index'</span>: <span class="hljs-number">14</span>, <span class="hljs-string">'word'</span>: <span class="hljs-string">'francisco'</span>, <span class="hljs-string">'start'</span>: <span class="hljs-number">84</span>, <span class="hljs-string">'end'</span>: <span class="hljs-number">93</span>}]</pre></div> <p data-svelte-h="svelte-1njl8vm">You can also manually replicate the results of the <code>pipeline</code> if you’d like:</p> <div class="space-y-10 py-6 2xl:py-8 2xl:-mx-4"><div class="border border-gray-200 rounded-xl px-4 relative"><div class="flex h-[22px] mt-[-12.5px] justify-between leading-none"><div class="flex px-1 items-center space-x-1 bg-white dark:bg-gray-950"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><defs><clipPath id="a"><rect x="3.05" y="0.5" width="25.73" height="31" fill="none"></rect></clipPath></defs><g clip-path="url(#a)"><path d="M24.94,9.51a12.81,12.81,0,0,1,0,18.16,12.68,12.68,0,0,1-18,0,12.81,12.81,0,0,1,0-18.16l9-9V5l-.84.83-6,6a9.58,9.58,0,1,0,13.55,0ZM20.44,9a1.68,1.68,0,1,1,1.67-1.67A1.68,1.68,0,0,1,20.44,9Z" fill="#ee4c2c"></path></g></svg> <span>Pytorch</span></div> <div class="cursor-pointer flex items-center justify-center space-x-1 text-sm px-2 bg-white dark:bg-gray-950 hover:underline leading-none"><svg class="" width="0.9em" height="0.9em" viewBox="0 0 10 9" fill="currentColor" xmlns="http://www.w3.org/2000/svg"><path d="M1.39125 1.9725L0.0883333 0.669997L0.677917 0.0804138L8.9275 8.33041L8.33792 8.91958L6.95875 7.54041C6.22592 8.00523 5.37572 8.25138 4.50792 8.25C2.26125 8.25 0.392083 6.63333 0 4.5C0.179179 3.52946 0.667345 2.64287 1.39167 1.9725H1.39125ZM5.65667 6.23833L5.04667 5.62833C4.81335 5.73996 4.55116 5.77647 4.29622 5.73282C4.04129 5.68918 3.80617 5.56752 3.62328 5.38463C3.44039 5.20175 3.31874 4.96663 3.27509 4.71169C3.23144 4.45676 3.26795 4.19456 3.37958 3.96125L2.76958 3.35125C2.50447 3.75187 2.38595 4.2318 2.4341 4.70978C2.48225 5.18777 2.6941 5.63442 3.0338 5.97411C3.37349 6.31381 3.82015 6.52567 4.29813 6.57382C4.77611 6.62197 5.25605 6.50345 5.65667 6.23833ZM2.83042 1.06666C3.35 0.862497 3.91625 0.749997 4.50792 0.749997C6.75458 0.749997 8.62375 2.36666 9.01583 4.5C8.88816 5.19404 8.60119 5.84899 8.1775 6.41333L6.56917 4.805C6.61694 4.48317 6.58868 4.15463 6.48664 3.84569C6.3846 3.53675 6.21162 3.256 5.98156 3.02594C5.7515 2.79588 5.47075 2.6229 5.16181 2.52086C4.85287 2.41882 4.52433 2.39056 4.2025 2.43833L2.83042 1.06708V1.06666Z" fill="currentColor"></path></svg> <span>Hide Pytorch content</span></div></div> <div class="framework-content"><p data-svelte-h="svelte-1qcz1wr">Tokenize the text and return PyTorch tensors:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" 
preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"stevhliu/my_awesome_wnut_model"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(text, return_tensors=<span class="hljs-string">"pt"</span>)</pre></div> <p data-svelte-h="svelte-f3g043">Pass your inputs to the model and return the <code>logits</code>:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoModelForTokenClassification <span class="hljs-meta">&gt;&gt;&gt; </span>model = AutoModelForTokenClassification.from_pretrained(<span class="hljs-string">"stevhliu/my_awesome_wnut_model"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">with</span> torch.no_grad(): <span class="hljs-meta">... 
</span> logits = model(**inputs).logits</pre></div> <p data-svelte-h="svelte-6mgrol">Get the class with the highest probability, and use the model’s <code>id2label</code> mapping to convert it to a text label:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>predictions = torch.argmax(logits, dim=<span class="hljs-number">2</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>predicted_token_class = [model.config.id2label[t.item()] <span class="hljs-keyword">for</span> t <span class="hljs-keyword">in</span> predictions[<span class="hljs-number">0</span>]] <span class="hljs-meta">&gt;&gt;&gt; </span>predicted_token_class [<span class="hljs-string">'O'</span>, <span class="hljs-string">'O'</span>, <span class="hljs-string">'B-location'</span>, <span class="hljs-string">'I-location'</span>, <span class="hljs-string">'B-group'</span>, <span class="hljs-string">'O'</span>, <span class="hljs-string">'O'</span>, <span class="hljs-string">'O'</span>, <span class="hljs-string">'O'</span>, <span class="hljs-string">'O'</span>, <span class="hljs-string">'O'</span>, <span class="hljs-string">'O'</span>, <span class="hljs-string">'O'</span>, <span class="hljs-string">'B-location'</span>, <span class="hljs-string">'B-location'</span>, <span class="hljs-string">'O'</span>, <span class="hljs-string">'O'</span>]</pre></div></div></div> <div class="border border-gray-200 rounded-xl px-4 relative"><div class="flex h-[22px] mt-[-12.5px] justify-between leading-none"><div class="flex px-1 items-center space-x-1 bg-white dark:bg-gray-950"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="0.94em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 274"><path d="M145.726 42.065v42.07l72.861 42.07v-42.07l-72.86-42.07zM0 84.135v42.07l36.43 21.03V105.17L0 84.135zm109.291 21.035l-36.43 21.034v126.2l36.43 21.035v-84.135l36.435 21.035v-42.07l-36.435-21.034V105.17z" fill="#E55B2D"></path><path d="M145.726 42.065L36.43 105.17v42.065l72.861-42.065v42.065l36.435-21.03v-84.14zM255.022 63.1l-36.435 21.035v42.07l36.435-21.035V63.1zm-72.865 84.135l-36.43 21.035v42.07l36.43-21.036v-42.07zm-36.43 63.104l-36.436-21.035v84.135l36.435-21.035V210.34z" fill="#ED8E24"></path><path d="M145.726 0L0 84.135l36.43 21.035l109.296-63.105l72.861 42.07L255.022 63.1L145.726 
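To see which label went to which token, you can line the predictions up with the tokenizer's output. This is just an optional inspection step, not part of the original guide; the exact pairs depend on how the tokenizer splits the text, but with this checkpoint the first few look like:

```python
>>> # Pair each (sub)word token with its predicted label.
>>> tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
>>> list(zip(tokens, predicted_token_class))[:5]
[('[CLS]', 'O'), ('the', 'O'), ('golden', 'B-location'), ('state', 'I-location'), ('warriors', 'B-group')]
```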
**TensorFlow**

Tokenize the text and return TensorFlow tensors:

```python
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_wnut_model")
>>> inputs = tokenizer(text, return_tensors="tf")
```

Pass your inputs to the model and return the `logits`:

```python
>>> from transformers import TFAutoModelForTokenClassification

>>> model = TFAutoModelForTokenClassification.from_pretrained("stevhliu/my_awesome_wnut_model")
>>> logits = model(**inputs).logits
```

Get the class with the highest probability, and use the model's `id2label` mapping to convert it to a text label:

```python
>>> import tensorflow as tf

>>> predicted_token_class_ids = tf.math.argmax(logits, axis=-1)
>>> predicted_token_class = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()]
>>> predicted_token_class
['O', 'O', 'B-location', 'I-location', 'B-group', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-location', 'B-location', 'O', 'O']
```
text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-inference"><wbr>Inference</a> </nav></div></div></div> <div id="doc-footer"></div></main> </div> <script> import("/front/build/kube-5e23f38/index.js"); window.moonSha = "kube-5e23f38/"; window.hubConfig = JSON.parse(`{"features":{"signupDisabled":false},"sshGitUrl":"git@hf.co","moonHttpUrl":"https://huggingface.co","captchaApiKey":"bd5f2066-93dc-4bdd-a64b-a24646ca3859","stripePublicKey":"pk_live_x2tdjFXBCvXo2FFmMybezpeM00J6gPCAAc","environment":"production","userAgent":"HuggingFace (production)"}`); </script> <!-- Stripe --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://js.stripe.com/v3/"; script.async = true; document.head.appendChild(script); } </script> <!-- Google analytics v4 --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL"; script.async = true; document.head.appendChild(script); window.dataLayer = window.dataLayer || []; function gtag() { if (window.dataLayer !== undefined) { window.dataLayer.push(arguments); } } gtag("js", new Date()); gtag("config", "G-8Q63TH4CSL", { page_path: "/docs/transformers/v4.34.0/en/tasks/token_classification" }); /// ^ See https://developers.google.com/analytics/devguides/collection/gtagjs/pages gtag("consent", "default", { ad_storage: "denied", analytics_storage: "denied" }); /// ^ See https://developers.google.com/tag-platform/gtagjs/reference#consent /// TODO: ask the user for their consent and update this with gtag('consent', 'update') } </script> <!-- Google Analytics v3 (deprecated) --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { (function (i, s, o, g, r, a, m) { i["GoogleAnalyticsObject"] = r; (i[r] = i[r] || function () { (i[r].q = i[r].q || []).push(arguments); }), (i[r].l = 1 * new Date()); (a = s.createElement(o)), (m = s.getElementsByTagName(o)[0]); a.async = 1; a.src = g; m.parentNode.insertBefore(a, m); })(window, document, "script", "https://www.google-analytics.com/analytics.js", "ganalytics"); ganalytics("create", "UA-83738774-2", "auto"); ganalytics("send", "pageview", "/docs/transformers/v4.34.0/en/tasks/token_classification"); } </script> <iframe name="__privateStripeMetricsController2040" frameborder="0" allowtransparency="true" scrolling="no" role="presentation" allow="payment *" src="https://js.stripe.com/v3/m-outer-27c67c0d52761104439bb051c7856ab1.html#url=https%3A%2F%2Fhuggingface.co%2Fdocs%2Ftransformers%2Fv4.34.0%2Fen%2Ftasks%2Ftoken_classification&amp;title=Token%20classification&amp;referrer=&amp;muid=NA&amp;sid=NA&amp;version=6&amp;preview=false" aria-hidden="true" tabindex="-1" style="border: none !important; margin: 0px !important; padding: 0px !important; width: 1px !important; min-width: 100% !important; overflow: hidden !important; display: block !important; visibility: hidden !important; position: fixed !important; height: 1px !important; pointer-events: none !important; user-select: none !important;"></iframe></body></html>
2023-10-05T13:33:47.070Z
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/gpt
The documentation page MODEL\_DOC/GPT doesn’t exist in v4.34.0, but exists on the main version. Click [here](/docs/transformers/main/en/model_doc/gpt) to redirect to the main version of the documentation.
2023-10-05T13:33:47.105Z
Summarization
https://huggingface.co/docs/transformers/v4.34.0/en/tasks/summarization
# Summarization

Summarization creates a shorter version of a document or an article that captures all the important information. Along with translation, it is another example of a task that can be formulated as a sequence-to-sequence task. Summarization can be:

- Extractive: extract the most relevant information from a document.
- Abstractive: generate new text that captures the most relevant information.

This guide will show you how to:

1. Finetune [T5](https://huggingface.co/t5-small) on the California state bill subset of the [BillSum](https://huggingface.co/datasets/billsum) dataset for abstractive summarization.
2. Use your finetuned model for inference.

The task illustrated in this tutorial is supported by the following model architectures:

[BART](../model_doc/bart), [BigBird-Pegasus](../model_doc/bigbird_pegasus), [Blenderbot](../model_doc/blenderbot), [BlenderbotSmall](../model_doc/blenderbot-small), [Encoder decoder](../model_doc/encoder-decoder), [FairSeq Machine-Translation](../model_doc/fsmt), [GPTSAN-japanese](../model_doc/gptsan-japanese), [LED](../model_doc/led), [LongT5](../model_doc/longt5), [M2M100](../model_doc/m2m_100), [Marian](../model_doc/marian), [mBART](../model_doc/mbart), [MT5](../model_doc/mt5), [MVP](../model_doc/mvp), [NLLB](../model_doc/nllb), [NLLB-MOE](../model_doc/nllb-moe), [Pegasus](../model_doc/pegasus), [PEGASUS-X](../model_doc/pegasus_x), [PLBart](../model_doc/plbart), [ProphetNet](../model_doc/prophetnet), [SwitchTransformers](../model_doc/switch_transformers), [T5](../model_doc/t5), [UMT5](../model_doc/umt5), [XLM-ProphetNet](../model_doc/xlm-prophetnet)

Before you begin, make sure you have all the necessary libraries installed:

```
pip install transformers datasets evaluate rouge_score
```

We encourage you to login to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to login:

```
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```

## Load BillSum dataset

Start by loading the smaller California state bill subset of the BillSum dataset from the 🤗 Datasets library:

```
>>> from datasets import load_dataset

>>> billsum = load_dataset("billsum", split="ca_test")
```

Split the dataset into a train and test set with the [train\_test\_split](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.train_test_split) method:

```
>>> billsum = billsum.train_test_split(test_size=0.2)
```

Then take a look at an example:

```
>>> billsum["train"][0]
{'summary': 'Existing law authorizes state agencies to enter into contracts for the acquisition of goods or services upon approval by the Department of General Services. Existing law sets forth various requirements and prohibitions for those contracts, including, but not limited to, a prohibition on entering into contracts for the acquisition of goods or services of $100,000 or more with a contractor that discriminates between spouses and domestic partners or same-sex and different-sex couples in the provision of benefits. Existing law provides that a contract entered into in violation of those requirements and prohibitions is void and authorizes the state or any person acting on behalf of the state to bring a civil action seeking a determination that a contract is in violation and therefore void.
Under existing law, a willful violation of those requirements and prohibitions is a misdemeanor.\nThis bill would also prohibit a state agency from entering into contracts for the acquisition of goods or services of $100,000 or more with a contractor that discriminates between employees on the basis of gender identity in the provision of benefits, as specified. By expanding the scope of a crime, this bill would impose a state-mandated local program.\nThe California Constitution requires the state to reimburse local agencies and school districts for certain costs mandated by the state. Statutory provisions establish procedures for making that reimbursement.\nThis bill would provide that no reimbursement is required by this act for a specified reason.', 'text': 'The people of the State of California do enact as follows:\n\n\nSECTION 1.\nSection 10295.35 is added to the Public Contract Code, to read:\n10295.35.\n(a) (1) Notwithstanding any other law, a state agency shall not enter into any contract for the acquisition of goods or services in the amount of one hundred thousand dollars ($100,000) or more with a contractor that, in the provision of benefits, discriminates between employees on the basis of an employee’s or dependent’s actual or perceived gender identity, including, but not limited to, the employee’s or dependent’s identification as transgender.\n(2) For purposes of this section, “contract” includes contracts with a cumulative amount of one hundred thousand dollars ($100,000) or more per contractor in each fiscal year.\n(3) For purposes of this section, an employee health plan is discriminatory if the plan is not consistent with Section 1365.5 of the Health and Safety Code and Section 10140 of the Insurance Code.\n(4) The requirements of this section shall apply only to those portions of a contractor’s operations that occur under any of the following conditions:\n(A) Within the state.\n(B) On real property outside the state if the property is owned by the state or if the state has a right to occupy the property, and if the contractor’s presence at that location is connected to a contract with the state.\n(C) Elsewhere in the United States where work related to a state contract is being performed.\n(b) Contractors shall treat as confidential, to the maximum extent allowed by law or by the requirement of the contractor’s insurance provider, any request by an employee or applicant for employment benefits or any documentation of eligibility for benefits submitted by an employee or applicant for employment.\n(c) After taking all reasonable measures to find a contractor that complies with this section, as determined by the state agency, the requirements of this section may be waived under any of the following circumstances:\n(1) There is only one prospective contractor willing to enter into a specific contract with the state agency.\n(2) The contract is necessary to respond to an emergency, as determined by the state agency, that endangers the public health, welfare, or safety, or the contract is necessary for the provision of essential services, and no entity that complies with the requirements of this section capable of responding to the emergency is immediately available.\n(3) The requirements of this section violate, or are inconsistent with, the terms or conditions of a grant, subvention, or agreement, if the agency has made a good faith attempt to change the terms or conditions of any grant, subvention, or agreement to authorize application of this section.\n(4) The contractor is 
providing wholesale or bulk water, power, or natural gas, the conveyance or transmission of the same, or ancillary services, as required for ensuring reliable services in accordance with good utility practice, if the purchase of the same cannot practically be accomplished through the standard competitive bidding procedures and the contractor is not providing direct retail services to end users.\n(d) (1) A contractor shall not be deemed to discriminate in the provision of benefits if the contractor, in providing the benefits, pays the actual costs incurred in obtaining the benefit.\n(2) If a contractor is unable to provide a certain benefit, despite taking reasonable measures to do so, the contractor shall not be deemed to discriminate in the provision of benefits.\n(e) (1) Every contract subject to this chapter shall contain a statement by which the contractor certifies that the contractor is in compliance with this section.\n(2) The department or other contracting agency shall enforce this section pursuant to its existing enforcement powers.\n(3) (A) If a contractor falsely certifies that it is in compliance with this section, the contract with that contractor shall be subject to Article 9 (commencing with Section 10420), unless, within a time period specified by the department or other contracting agency, the contractor provides to the department or agency proof that it has complied, or is in the process of complying, with this section.\n(B) The application of the remedies or penalties contained in Article 9 (commencing with Section 10420) to a contract subject to this chapter shall not preclude the application of any existing remedies otherwise available to the department or other contracting agency under its existing enforcement powers.\n(f) Nothing in this section is intended to regulate the contracting practices of any local jurisdiction.\n(g) This section shall be construed so as not to conflict with applicable federal laws, rules, or regulations. In the event that a court or agency of competent jurisdiction holds that federal law, rule, or regulation invalidates any clause, sentence, paragraph, or section of this code or the application thereof to any person or circumstances, it is the intent of the state that the court or agency sever that clause, sentence, paragraph, or section so that the remainder of this section shall remain in effect.\nSEC. 2.\nSection 10295.35 of the Public Contract Code shall not be construed to create any new enforcement authority or responsibility in the Department of General Services or any other contracting agency.\nSEC. 3.\nNo reimbursement is required by this act pursuant to Section 6 of Article XIII\u2009B of the California Constitution because the only costs that may be incurred by a local agency or school district will be incurred because this act creates a new crime or infraction, eliminates a crime or infraction, or changes the penalty for a crime or infraction, within the meaning of Section 17556 of the Government Code, or changes the definition of a crime within the meaning of Section 6 of Article XIII\u2009B of the California Constitution.', 'title': 'An act to add Section 10295.35 to the Public Contract Code, relating to public contracts.'} ``` There are two fields that you’ll want to use: - `text`: the text of the bill which’ll be the input to the model. - `summary`: a condensed version of `text` which’ll be the model target. 
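Before preprocessing, it can help to check how much compression the task actually requires. The short snippet below is an optional sanity check, not part of the original guide; it simply compares the character lengths of a few bills and their reference summaries:

```
>>> # Optional check (illustrative only): how much shorter are the summaries than the bills?
>>> for example in billsum["train"].select(range(3)):
...     ratio = len(example["summary"]) / len(example["text"])
...     print(len(example["text"]), len(example["summary"]), f"{ratio:.2%}")
```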
## Preprocess

The next step is to load a T5 tokenizer to process `text` and `summary`:

```
>>> from transformers import AutoTokenizer

>>> checkpoint = "t5-small"
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
```

The preprocessing function you want to create needs to:

1. Prefix the input with a prompt so T5 knows this is a summarization task. Some models capable of multiple NLP tasks require prompting for specific tasks.
2. Use the keyword `text_target` argument when tokenizing labels.
3. Truncate sequences to be no longer than the maximum length set by the `max_length` parameter.

```
>>> prefix = "summarize: "


>>> def preprocess_function(examples):
...     inputs = [prefix + doc for doc in examples["text"]]
...     model_inputs = tokenizer(inputs, max_length=1024, truncation=True)

...     labels = tokenizer(text_target=examples["summary"], max_length=128, truncation=True)

...     model_inputs["labels"] = labels["input_ids"]
...     return model_inputs
```

To apply the preprocessing function over the entire dataset, use the 🤗 Datasets [map](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.map) method. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once:

```
>>> tokenized_billsum = billsum.map(preprocess_function, batched=True)
```

Now create a batch of examples using [DataCollatorForSeq2Seq](/docs/transformers/v4.34.0/en/main_classes/data_collator#transformers.DataCollatorForSeq2Seq). It’s more efficient to _dynamically pad_ the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length.

Pytorch:

```
>>> from transformers import DataCollatorForSeq2Seq

>>> data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint)
```

TensorFlow:

```
>>> from transformers import DataCollatorForSeq2Seq

>>> data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint, return_tensors="tf")
```

## Evaluate

Including a metric during training is often helpful for evaluating your model’s performance. You can quickly load an evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load the [ROUGE](https://huggingface.co/spaces/evaluate-metric/rouge) metric (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric):

```
>>> import evaluate

>>> rouge = evaluate.load("rouge")
```

Then create a function that passes your predictions and labels to [compute](https://huggingface.co/docs/evaluate/v0.4.0/en/package_reference/main_classes#evaluate.EvaluationModule.compute) to calculate the ROUGE metric:

```
>>> import numpy as np


>>> def compute_metrics(eval_pred):
...     predictions, labels = eval_pred
...     decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True)
...     labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
...     decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)

...     result = rouge.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True)

...     prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in predictions]
...     result["gen_len"] = np.mean(prediction_lens)

...     return {k: round(v, 4) for k, v in result.items()}
```

Your `compute_metrics` function is ready to go now, and you’ll return to it when you set up your training.
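If you are curious what the metric output looks like, you can call the loaded `rouge` metric directly on a made-up prediction/reference pair. This is only an illustrative sketch (the sentences below are invented for demonstration); `compute` typically returns a dictionary with `rouge1`, `rouge2`, `rougeL`, and `rougeLsum` scores:

```
>>> # Illustrative only: ROUGE on a single made-up prediction/reference pair.
>>> rouge.compute(
...     predictions=["the bill bans discrimination in the provision of benefits"],
...     references=["this bill would prohibit discrimination in the provision of benefits"],
...     use_stemmer=True,
... )
```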
## Train

Pytorch:

If you aren’t familiar with finetuning a model with the [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer), take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!

You’re ready to start training your model now! Load T5 with [AutoModelForSeq2SeqLM](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoModelForSeq2SeqLM):

```
>>> from transformers import AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments, Seq2SeqTrainer

>>> model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
```

At this point, only three steps remain:

1. Define your training hyperparameters in [Seq2SeqTrainingArguments](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Seq2SeqTrainingArguments). The only required parameter is `output_dir` which specifies where to save your model. You’ll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) will evaluate the ROUGE metric and save the training checkpoint.
2. Pass the training arguments to [Seq2SeqTrainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Seq2SeqTrainer) along with the model, dataset, tokenizer, data collator, and `compute_metrics` function.
3. Call [train()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.train) to finetune your model.

```
>>> training_args = Seq2SeqTrainingArguments(
...     output_dir="my_awesome_billsum_model",
...     evaluation_strategy="epoch",
...     learning_rate=2e-5,
...     per_device_train_batch_size=16,
...     per_device_eval_batch_size=16,
...     weight_decay=0.01,
...     save_total_limit=3,
...     num_train_epochs=4,
...     predict_with_generate=True,
...     fp16=True,
...     push_to_hub=True,
... )

>>> trainer = Seq2SeqTrainer(
...     model=model,
...     args=training_args,
...     train_dataset=tokenized_billsum["train"],
...     eval_dataset=tokenized_billsum["test"],
...     tokenizer=tokenizer,
...     data_collator=data_collator,
...     compute_metrics=compute_metrics,
... )

>>> trainer.train()
```

Once training is completed, share your model to the Hub with the [push\_to\_hub()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.push_to_hub) method so everyone can use your model:

```
>>> trainer.push_to_hub()
```

TensorFlow:

If you aren’t familiar with finetuning a model with Keras, take a look at the basic tutorial [here](../training#train-a-tensorflow-model-with-keras)!

To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:

```
>>> from transformers import create_optimizer, AdamWeightDecay

>>> optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01)
```

Then you can load T5 with [TFAutoModelForSeq2SeqLM](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.TFAutoModelForSeq2SeqLM):

```
>>> from transformers import TFAutoModelForSeq2SeqLM

>>> model = TFAutoModelForSeq2SeqLM.from_pretrained(checkpoint)
```

Convert your datasets to the `tf.data.Dataset` format with [prepare\_tf\_dataset()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel.prepare_tf_dataset):

```
>>> tf_train_set = model.prepare_tf_dataset(
...     tokenized_billsum["train"],
...     shuffle=True,
...     batch_size=16,
...     collate_fn=data_collator,
... )

>>> tf_test_set = model.prepare_tf_dataset(
...     tokenized_billsum["test"],
...     shuffle=False,
...     batch_size=16,
...     collate_fn=data_collator,
... )
```

Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method). Note that Transformers models all have a default task-relevant loss function, so you don’t need to specify one unless you want to:

```
>>> import tensorflow as tf

>>> model.compile(optimizer=optimizer)
```

The last two things to set up before you start training are to compute the ROUGE score from the predictions, and to provide a way to push your model to the Hub. Both are done by using [Keras callbacks](../main_classes/keras_callbacks).

Pass your `compute_metrics` function to [KerasMetricCallback](/docs/transformers/v4.34.0/en/main_classes/keras_callbacks#transformers.KerasMetricCallback):

```
>>> from transformers.keras_callbacks import KerasMetricCallback

>>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_test_set)
```

Specify where to push your model and tokenizer in the [PushToHubCallback](/docs/transformers/v4.34.0/en/main_classes/keras_callbacks#transformers.PushToHubCallback):

```
>>> from transformers.keras_callbacks import PushToHubCallback

>>> push_to_hub_callback = PushToHubCallback(
...     output_dir="my_awesome_billsum_model",
...     tokenizer=tokenizer,
... )
```

Then bundle your callbacks together:

```
>>> callbacks = [metric_callback, push_to_hub_callback]
```

Finally, you’re ready to start training your model! Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training and validation datasets, the number of epochs, and your callbacks to finetune the model:

```
>>> model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=callbacks)
```

Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!

For a more in-depth example of how to finetune a model for summarization, take a look at the corresponding [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/summarization.ipynb) or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/summarization-tf.ipynb).

## Inference

Great, now that you’ve finetuned a model, you can use it for inference!

Come up with some text you’d like to summarize. For T5, you need to prefix your input depending on the task you’re working on. For summarization you should prefix your input as shown below:

```
>>> text = "summarize: The Inflation Reduction Act lowers prescription drug costs, health care costs, and energy costs. It's the most aggressive action on tackling the climate crisis in American history, which will lift up American workers and create good-paying, union jobs across the country. It'll lower the deficit and ask the ultra-wealthy and corporations to pay their fair share. And no one making under $400,000 per year will pay a penny more in taxes."
```

The simplest way to try out your finetuned model for inference is to use it in a [pipeline()](/docs/transformers/v4.34.0/en/main_classes/pipelines#transformers.pipeline). Instantiate a `pipeline` for summarization with your model, and pass your text to it:

```
>>> from transformers import pipeline

>>> summarizer = pipeline("summarization", model="stevhliu/my_awesome_billsum_model")
>>> summarizer(text)
[{"summary_text": "The Inflation Reduction Act lowers prescription drug costs, health care costs, and energy costs. It's the most aggressive action on tackling the climate crisis in American history, which will lift up American workers and create good-paying, union jobs across the country."}]
```

You can also manually replicate the results of the `pipeline` if you’d like:

Tokenize the text and return the `input_ids` as PyTorch tensors:

```
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_billsum_model")
>>> inputs = tokenizer(text, return_tensors="pt").input_ids
```

Use the [generate()](/docs/transformers/v4.34.0/en/main_classes/text_generation#transformers.GenerationMixin.generate) method to create the summarization. For more details about the different text generation strategies and parameters for controlling generation, check out the [Text Generation](../main_classes/text_generation) API.

```
>>> from transformers import AutoModelForSeq2SeqLM

>>> model = AutoModelForSeq2SeqLM.from_pretrained("stevhliu/my_awesome_billsum_model")
>>> outputs = model.generate(inputs, max_new_tokens=100, do_sample=False)
```

Decode the generated token ids back into text:

```
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
'the inflation reduction act lowers prescription drug costs, health care costs, and energy costs. it's the most aggressive action on tackling the climate crisis in american history. it will ask the ultra-wealthy and corporations to pay their fair share.'
```

Tokenize the text and return the `input_ids` as TensorFlow tensors:

```
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_billsum_model")
>>> inputs = tokenizer(text, return_tensors="tf").input_ids
```

Use the [generate()](/docs/transformers/v4.34.0/en/main_classes/text_generation#transformers.TFGenerationMixin.generate) method to create the summarization. For more details about the different text generation strategies and parameters for controlling generation, check out the [Text Generation](../main_classes/text_generation) API.

```
>>> from transformers import TFAutoModelForSeq2SeqLM

>>> model = TFAutoModelForSeq2SeqLM.from_pretrained("stevhliu/my_awesome_billsum_model")
>>> outputs = model.generate(inputs, max_new_tokens=100, do_sample=False)
```

Decode the generated token ids back into text:

```
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
'the inflation reduction act lowers prescription drug costs, health care costs, and energy costs. it's the most aggressive action on tackling the climate crisis in american history. it will ask the ultra-wealthy and corporations to pay their fair share.'
```
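Beyond a single example, you may want to score your finetuned model on held-out bills rather than hand-picked text. The snippet below is a minimal sketch, not part of the original guide; it assumes the PyTorch model, tokenizer, `billsum` splits, and the `rouge` metric loaded in the sections above, generates summaries for a few test bills, and compares them to the reference summaries with ROUGE:

```
>>> # Illustrative sketch: evaluate the finetuned model on a few held-out bills.
>>> sample = billsum["test"].select(range(4))
>>> predictions = []
>>> for bill in sample["text"]:
...     input_ids = tokenizer("summarize: " + bill, return_tensors="pt", truncation=True, max_length=1024).input_ids
...     output = model.generate(input_ids, max_new_tokens=128, do_sample=False)
...     predictions.append(tokenizer.decode(output[0], skip_special_tokens=True))

>>> rouge.compute(predictions=predictions, references=sample["summary"], use_stemmer=True)
```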
anatomy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_memory_anatomy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_memory_anatomy&quot;}]},{&quot;title&quot;:&quot;API&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Main Classes&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Agents and Tools&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/agent&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/agent&quot;},{&quot;title&quot;:&quot;Auto Classes&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/auto&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/auto&quot;},{&quot;title&quot;:&quot;Callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/callback&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/callback&quot;},{&quot;title&quot;:&quot;Configuration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/configuration&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/configuration&quot;},{&quot;title&quot;:&quot;Data Collator&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/data_collator&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/data_collator&quot;},{&quot;title&quot;:&quot;Keras callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/keras_callbacks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/keras_callbacks&quot;},{&quot;title&quot;:&quot;Logging&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/logging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/logging&quot;},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/model&quot;},{&quot;title&quot;:&quot;Text Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/text_generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/text_generation&quot;},{&quot;title&quot;:&quot;ONNX&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/onnx&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/onnx&quot;},{&quot;title&quot;:&quot;Optimization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/optimizer_schedules&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/optimizer_schedules&quot;},{&quot;title&quot;:&quot;Model 
outputs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/output&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/output&quot;},{&quot;title&quot;:&quot;Pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/pipelines&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/pipelines&quot;},{&quot;title&quot;:&quot;Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/processors&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/processors&quot;},{&quot;title&quot;:&quot;Quantization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/quantization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/quantization&quot;},{&quot;title&quot;:&quot;Tokenizer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/tokenizer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/tokenizer&quot;},{&quot;title&quot;:&quot;Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/trainer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/trainer&quot;},{&quot;title&quot;:&quot;DeepSpeed Integration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/deepspeed&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/deepspeed&quot;},{&quot;title&quot;:&quot;Feature Extractor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/feature_extractor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/feature_extractor&quot;},{&quot;title&quot;:&quot;Image Processor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/image_processor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/image_processor&quot;}]},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Text 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALBERT&quot;,&quot;id&quot;:&quot;model_doc/albert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/albert&quot;},{&quot;title&quot;:&quot;BART&quot;,&quot;id&quot;:&quot;model_doc/bart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bart&quot;},{&quot;title&quot;:&quot;BARThez&quot;,&quot;id&quot;:&quot;model_doc/barthez&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/barthez&quot;},{&quot;title&quot;:&quot;BARTpho&quot;,&quot;id&quot;:&quot;model_doc/bartpho&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bartpho&quot;},{&quot;title&quot;:&quot;BERT&quot;,&quot;id&quot;:&quot;model_doc/bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert&quot;},{&quot;title&quot;:&quot;BertGeneration&quot;,&quot;id&quot;:&quot;model_doc/bert-generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-generation&quot;},{&quot;title&quot;:&quot;BertJapanese&quot;,&quot;id&quot;:&quot;model_doc/bert-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-japanese&quot;},{&quot;title&quot;:&quot;Bertweet&quot;,&quot;id&quot;:&quot;model_doc/bertweet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bertweet&quot;},{&quot;title&quot;:&quot;BigBird&quot;,&quot;id&quot;:&quot;model_doc/big_bird&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/big_bird&quot;},{&quot;title&quot;:&quot;BigBirdPegasus&quot;,&quot;id&quot;:&quot;model_doc/bigbird_pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bigbird_pegasus&quot;},{&quot;title&quot;:&quot;BioGpt&quot;,&quot;id&quot;:&quot;model_doc/biogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/biogpt&quot;},{&quot;title&quot;:&quot;Blenderbot&quot;,&quot;id&quot;:&quot;model_doc/blenderbot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot&quot;},{&quot;title&quot;:&quot;Blenderbot 
Small&quot;,&quot;id&quot;:&quot;model_doc/blenderbot-small&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot-small&quot;},{&quot;title&quot;:&quot;BLOOM&quot;,&quot;id&quot;:&quot;model_doc/bloom&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bloom&quot;},{&quot;title&quot;:&quot;BORT&quot;,&quot;id&quot;:&quot;model_doc/bort&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bort&quot;},{&quot;title&quot;:&quot;ByT5&quot;,&quot;id&quot;:&quot;model_doc/byt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/byt5&quot;},{&quot;title&quot;:&quot;CamemBERT&quot;,&quot;id&quot;:&quot;model_doc/camembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/camembert&quot;},{&quot;title&quot;:&quot;CANINE&quot;,&quot;id&quot;:&quot;model_doc/canine&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/canine&quot;},{&quot;title&quot;:&quot;CodeGen&quot;,&quot;id&quot;:&quot;model_doc/codegen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/codegen&quot;},{&quot;title&quot;:&quot;CodeLlama&quot;,&quot;id&quot;:&quot;model_doc/code_llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/code_llama&quot;},{&quot;title&quot;:&quot;ConvBERT&quot;,&quot;id&quot;:&quot;model_doc/convbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convbert&quot;},{&quot;title&quot;:&quot;CPM&quot;,&quot;id&quot;:&quot;model_doc/cpm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpm&quot;},{&quot;title&quot;:&quot;CPMANT&quot;,&quot;id&quot;:&quot;model_doc/cpmant&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpmant&quot;},{&quot;title&quot;:&quot;CTRL&quot;,&quot;id&quot;:&quot;model_doc/ctrl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ctrl&quot;},{&quot;title&quot;:&quot;DeBERTa&quot;,&quot;id&quot;:&quot;model_doc/deberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta&quot;},{&quot;title&quot;:&quot;DeBERTa-v2&quot;,&quot;id&quot;:&quot;model_doc/deberta-v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta-v2&quot;},{&quot;title&quot;:&quot;DialoGPT&quot;,&quot;id&quot;:&quot;model_doc/dialogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dialogpt&quot;},{&quot;title&quot;:&quot;DistilBERT&quot;,&quot;id&quot;:&quot;model_doc/distilbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/distilbert&quot;},{&quot;title&quot;:&quot;DPR&quot;,&quot;id&quot;:&quot;model_doc/dpr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpr&quot;},{&quot;title&quot;:&quot;ELECTRA&quot;,&quot;id&quot;:&quot;model_doc/electra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/electra&quot;},{&quot;title&quot;:&quot;Encoder Decoder 
Models&quot;,&quot;id&quot;:&quot;model_doc/encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encoder-decoder&quot;},{&quot;title&quot;:&quot;ERNIE&quot;,&quot;id&quot;:&quot;model_doc/ernie&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie&quot;},{&quot;title&quot;:&quot;ErnieM&quot;,&quot;id&quot;:&quot;model_doc/ernie_m&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie_m&quot;},{&quot;title&quot;:&quot;ESM&quot;,&quot;id&quot;:&quot;model_doc/esm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/esm&quot;},{&quot;title&quot;:&quot;Falcon&quot;,&quot;id&quot;:&quot;model_doc/falcon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/falcon&quot;},{&quot;title&quot;:&quot;FLAN-T5&quot;,&quot;id&quot;:&quot;model_doc/flan-t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-t5&quot;},{&quot;title&quot;:&quot;FLAN-UL2&quot;,&quot;id&quot;:&quot;model_doc/flan-ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-ul2&quot;},{&quot;title&quot;:&quot;FlauBERT&quot;,&quot;id&quot;:&quot;model_doc/flaubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flaubert&quot;},{&quot;title&quot;:&quot;FNet&quot;,&quot;id&quot;:&quot;model_doc/fnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fnet&quot;},{&quot;title&quot;:&quot;FSMT&quot;,&quot;id&quot;:&quot;model_doc/fsmt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fsmt&quot;},{&quot;title&quot;:&quot;Funnel Transformer&quot;,&quot;id&quot;:&quot;model_doc/funnel&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/funnel&quot;},{&quot;title&quot;:&quot;GPT&quot;,&quot;id&quot;:&quot;model_doc/openai-gpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/openai-gpt&quot;},{&quot;title&quot;:&quot;GPT Neo&quot;,&quot;id&quot;:&quot;model_doc/gpt_neo&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neo&quot;},{&quot;title&quot;:&quot;GPT NeoX&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox&quot;},{&quot;title&quot;:&quot;GPT NeoX Japanese&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox_japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese&quot;},{&quot;title&quot;:&quot;GPT-J&quot;,&quot;id&quot;:&quot;model_doc/gptj&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptj&quot;},{&quot;title&quot;:&quot;GPT2&quot;,&quot;id&quot;:&quot;model_doc/gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt2&quot;},{&quot;title&quot;:&quot;GPTBigCode&quot;,&quot;id&quot;:&quot;model_doc/gpt_bigcode&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode&quot;},{&quot;title&quot;:&quot;GPTSAN 
Japanese&quot;,&quot;id&quot;:&quot;model_doc/gptsan-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese&quot;},{&quot;title&quot;:&quot;GPTSw3&quot;,&quot;id&quot;:&quot;model_doc/gpt-sw3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt-sw3&quot;},{&quot;title&quot;:&quot;HerBERT&quot;,&quot;id&quot;:&quot;model_doc/herbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/herbert&quot;},{&quot;title&quot;:&quot;I-BERT&quot;,&quot;id&quot;:&quot;model_doc/ibert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ibert&quot;},{&quot;title&quot;:&quot;Jukebox&quot;,&quot;id&quot;:&quot;model_doc/jukebox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/jukebox&quot;},{&quot;title&quot;:&quot;LED&quot;,&quot;id&quot;:&quot;model_doc/led&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/led&quot;},{&quot;title&quot;:&quot;LLaMA&quot;,&quot;id&quot;:&quot;model_doc/llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama&quot;},{&quot;title&quot;:&quot;Llama2&quot;,&quot;id&quot;:&quot;model_doc/llama2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama2&quot;},{&quot;title&quot;:&quot;Longformer&quot;,&quot;id&quot;:&quot;model_doc/longformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longformer&quot;},{&quot;title&quot;:&quot;LongT5&quot;,&quot;id&quot;:&quot;model_doc/longt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longt5&quot;},{&quot;title&quot;:&quot;LUKE&quot;,&quot;id&quot;:&quot;model_doc/luke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/luke&quot;},{&quot;title&quot;:&quot;M2M100&quot;,&quot;id&quot;:&quot;model_doc/m2m_100&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/m2m_100&quot;},{&quot;title&quot;:&quot;MarianMT&quot;,&quot;id&quot;:&quot;model_doc/marian&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/marian&quot;},{&quot;title&quot;:&quot;MarkupLM&quot;,&quot;id&quot;:&quot;model_doc/markuplm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/markuplm&quot;},{&quot;title&quot;:&quot;MBart and 
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot;PLBart&quot;,&quot;id&quot;
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/
model_doc/wavlm&quot;},{&quot;title&quot;:&quot;Whisper&quot;,&quot;id&quot;:&quot;model_doc/whisper&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/whisper&quot;},{&quot;title&quot;:&quot;XLS-R&quot;,&quot;id&quot;:&quot;model_doc/xls_r&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xls_r&quot;},{&quot;title&quot;:&quot;XLSR-Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/xlsr_wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2&quot;}]},{&quot;title&quot;:&quot;Multimodal models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALIGN&quot;,&quot;id&quot;:&quot;model_doc/align&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/align&quot;},{&quot;title&quot;:&quot;AltCLIP&quot;,&quot;id&quot;:&quot;model_doc/altclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/altclip&quot;},{&quot;title&quot;:&quot;BLIP&quot;,&quot;id&quot;:&quot;model_doc/blip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip&quot;},{&quot;title&quot;:&quot;BLIP-2&quot;,&quot;id&quot;:&quot;model_doc/blip-2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip-2&quot;},{&quot;title&quot;:&quot;BridgeTower&quot;,&quot;id&quot;:&quot;model_doc/bridgetower&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bridgetower&quot;},{&quot;title&quot;:&quot;BROS&quot;,&quot;id&quot;:&quot;model_doc/bros&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bros&quot;},{&quot;title&quot;:&quot;Chinese-CLIP&quot;,&quot;id&quot;:&quot;model_doc/chinese_clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/chinese_clip&quot;},{&quot;title&quot;:&quot;CLIP&quot;,&quot;id&quot;:&quot;model_doc/clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clip&quot;},{&quot;title&quot;:&quot;CLIPSeg&quot;,&quot;id&quot;:&quot;model_doc/clipseg&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clipseg&quot;},{&quot;title&quot;:&quot;Data2Vec&quot;,&quot;id&quot;:&quot;model_doc/data2vec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/data2vec&quot;},{&quot;title&quot;:&quot;DePlot&quot;,&quot;id&quot;:&quot;model_doc/deplot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deplot&quot;},{&quot;title&quot;:&quot;Donut&quot;,&quot;id&quot;:&quot;model_doc/donut&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/donut&quot;},{&quot;title&quot;:&quot;FLAVA&quot;,&quot;id&quot;:&quot;model_doc/flava&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flava&quot;},{&quot;title&quot;:&quot;GIT&quot;,&quot;id&quot;:&quot;model_doc/git&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/git&quot;},{&quot;title&quot;:&quot;GroupViT&quot;,&quot;id&quot;:&quot;model_doc/groupvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/groupvit&quot;},{&quot;title&quot;:&quot;IDEFICS&quot;,&quot;id&quot;:&quot;model_doc/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/idefics&quot;},{&quot;title&quot;:&quot;InstructBLIP&quot;,&quot;id&quot;:&quot;model_doc/instructblip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/instructblip&quot;},{&quot;title&quot;:&quot;LayoutLM&quot;,&quot;id&quot;:&quot;model_doc/layoutlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlm&quot;},{&quot;title&quot;:&quot;LayoutLMV2&quot;,&quot;i
d&quot;:&quot;model_doc/layoutlmv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv2&quot;},{&quot;title&quot;:&quot;LayoutLMV3&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv3&quot;},{&quot;title&quot;:&quot;LayoutXLM&quot;,&quot;id&quot;:&quot;model_doc/layoutxlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutxlm&quot;},{&quot;title&quot;:&quot;LiLT&quot;,&quot;id&quot;:&quot;model_doc/lilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lilt&quot;},{&quot;title&quot;:&quot;LXMERT&quot;,&quot;id&quot;:&quot;model_doc/lxmert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lxmert&quot;},{&quot;title&quot;:&quot;MatCha&quot;,&quot;id&quot;:&quot;model_doc/matcha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/matcha&quot;},{&quot;title&quot;:&quot;MGP-STR&quot;,&quot;id&quot;:&quot;model_doc/mgp-str&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mgp-str&quot;},{&quot;title&quot;:&quot;Nougat&quot;,&quot;id&quot;:&quot;model_doc/nougat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nougat&quot;},{&quot;title&quot;:&quot;OneFormer&quot;,&quot;id&quot;:&quot;model_doc/oneformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/oneformer&quot;},{&quot;title&quot;:&quot;OWL-ViT&quot;,&quot;id&quot;:&quot;model_doc/owlvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/owlvit&quot;},{&quot;title&quot;:&quot;Perceiver&quot;,&quot;id&quot;:&quot;model_doc/perceiver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/perceiver&quot;},{&quot;title&quot;:&quot;Pix2Struct&quot;,&quot;id&quot;:&quot;model_doc/pix2struct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pix2struct&quot;},{&quot;title&quot;:&quot;Segment Anything&quot;,&quot;id&quot;:&quot;model_doc/sam&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sam&quot;},{&quot;title&quot;:&quot;Speech Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/speech-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder&quot;},{&quot;title&quot;:&quot;TAPAS&quot;,&quot;id&quot;:&quot;model_doc/tapas&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapas&quot;},{&quot;title&quot;:&quot;TrOCR&quot;,&quot;id&quot;:&quot;model_doc/trocr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trocr&quot;},{&quot;title&quot;:&quot;TVLT&quot;,&quot;id&quot;:&quot;model_doc/tvlt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tvlt&quot;},{&quot;title&quot;:&quot;ViLT&quot;,&quot;id&quot;:&quot;model_doc/vilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vilt&quot;},{&quot;title&quot;:&quot;Vision Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/vision-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder&quot;},{&quot;title&quot;:&quot;Vision Text Dual 
Encoder&quot;,&quot;id&quot;:&quot;model_doc/vision-text-dual-encoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder&quot;},{&quot;title&quot;:&quot;VisualBERT&quot;,&quot;id&quot;:&quot;model_doc/visual_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/visual_bert&quot;},{&quot;title&quot;:&quot;X-CLIP&quot;,&quot;id&quot;:&quot;model_doc/xclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xclip&quot;}]},{&quot;title&quot;:&quot;Reinforcement learning models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Decision Transformer&quot;,&quot;id&quot;:&quot;model_doc/decision_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/decision_transformer&quot;},{&quot;title&quot;:&quot;Trajectory Transformer&quot;,&quot;id&quot;:&quot;model_doc/trajectory_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer&quot;}]},{&quot;title&quot;:&quot;Time series models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Autoformer&quot;,&quot;id&quot;:&quot;model_doc/autoformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/autoformer&quot;},{&quot;title&quot;:&quot;Informer&quot;,&quot;id&quot;:&quot;model_doc/informer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/informer&quot;},{&quot;title&quot;:&quot;Time Series Transformer&quot;,&quot;id&quot;:&quot;model_doc/time_series_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/time_series_transformer&quot;}]},{&quot;title&quot;:&quot;Graph models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Graphormer&quot;,&quot;id&quot;:&quot;model_doc/graphormer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/graphormer&quot;}]}]},{&quot;title&quot;:&quot;Internal Helpers&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Custom Layers and Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/modeling_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/modeling_utils&quot;},{&quot;title&quot;:&quot;Utilities for pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/pipelines_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/pipelines_utils&quot;},{&quot;title&quot;:&quot;Utilities for Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/tokenization_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/tokenization_utils&quot;},{&quot;title&quot;:&quot;Utilities for Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/trainer_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/trainer_utils&quot;},{&quot;title&quot;:&quot;Utilities for Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/generation_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/generation_utils&quot;},{&quot;title&quot;:&quot;Utilities for Image Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/image_processing_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/image_processing_utils&quot;},{&quot;title&quot;:&quot;Utilities for Audio 
processing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/audio_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/audio_utils&quot;},{&quot;title&quot;:&quot;General Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/file_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/file_utils&quot;},{&quot;title&quot;:&quot;Utilities for Time Series&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/time_series_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/time_series_utils&quot;}]}]}],&quot;chapterId&quot;:&quot;tasks/summarization&quot;,&quot;docType&quot;:&quot;docs&quot;,&quot;isLoggedIn&quot;:false,&quot;lang&quot;:&quot;en&quot;,&quot;langs&quot;:[&quot;de&quot;,&quot;en&quot;,&quot;es&quot;,&quot;fr&quot;,&quot;it&quot;,&quot;ko&quot;,&quot;pt&quot;,&quot;zh&quot;],&quot;library&quot;:&quot;transformers&quot;,&quot;theme&quot;:&quot;light&quot;,&quot;version&quot;:&quot;v4.34.0&quot;,&quot;versions&quot;:[{&quot;version&quot;:&quot;main&quot;},{&quot;version&quot;:&quot;v4.34.0&quot;},{&quot;version&quot;:&quot;v4.33.3&quot;},{&quot;version&quot;:&quot;v4.33.2&quot;},{&quot;version&quot;:&quot;v4.33.0&quot;},{&quot;version&quot;:&quot;v4.32.1&quot;},{&quot;version&quot;:&quot;v4.32.0&quot;},{&quot;version&quot;:&quot;v4.31.0&quot;},{&quot;version&quot;:&quot;v4.30.0&quot;},{&quot;version&quot;:&quot;v4.29.1&quot;},{&quot;version&quot;:&quot;v4.29.0&quot;},{&quot;version&quot;:&quot;v4.28.1&quot;},{&quot;version&quot;:&quot;v4.28.0&quot;},{&quot;version&quot;:&quot;v4.27.2&quot;},{&quot;version&quot;:&quot;v4.27.1&quot;},{&quot;version&quot;:&quot;v4.27.0&quot;},{&quot;version&quot;:&quot;v4.26.1&quot;},{&quot;version&quot;:&quot;v4.26.0&quot;},{&quot;version&quot;:&quot;v4.25.1&quot;},{&quot;version&quot;:&quot;v4.24.0&quot;},{&quot;version&quot;:&quot;v4.23.1&quot;},{&quot;version&quot;:&quot;v4.23.0&quot;},{&quot;version&quot;:&quot;v4.22.2&quot;},{&quot;version&quot;:&quot;v4.22.1&quot;},{&quot;version&quot;:&quot;v4.22.0&quot;},{&quot;version&quot;:&quot;v4.21.3&quot;},{&quot;version&quot;:&quot;v4.21.2&quot;},{&quot;version&quot;:&quot;v4.21.1&quot;},{&quot;version&quot;:&quot;v4.21.0&quot;},{&quot;version&quot;:&quot;v4.20.1&quot;},{&quot;version&quot;:&quot;v4.20.0&quot;},{&quot;version&quot;:&quot;v4.19.4&quot;},{&quot;version&quot;:&quot;v4.19.3&quot;},{&quot;version&quot;:&quot;v4.19.2&quot;},{&quot;version&quot;:&quot;v4.19.0&quot;},{&quot;version&quot;:&quot;v4.18.0&quot;},{&quot;version&quot;:&quot;v4.17.0&quot;},{&quot;version&quot;:&quot;v4.16.2&quot;},{&quot;version&quot;:&quot;v4.16.1&quot;},{&quot;version&quot;:&quot;v4.16.0&quot;},{&quot;version&quot;:&quot;v4.15.0&quot;},{&quot;version&quot;:&quot;v4.14.1&quot;},{&quot;version&quot;:&quot;v4.13.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.5&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.4&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.10.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;
preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-3xc59g">Summarization</span></h1> <div class="flex space-x-1 absolute z-10 right-0 top-0"> <div class="relative colab-dropdown "><button class=" " type="button"><img alt="Open In Colab" class="!m-0" src="https://colab.research.google.com/assets/colab-badge.svg"></button> </div> <div class="relative colab-dropdown "><button class=" " type="button"><img alt="Open In Studio Lab" class="!m-0" src="https://studiolab.sagemaker.aws/studiolab.svg"></button> </div></div> <iframe class="w-full xl:w-4/6 h-80" src="https://www.youtube-nocookie.com/embed/yHnr5Dk2zCI" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe> <p data-svelte-h="svelte-1m8hm66">Summarization creates a shorter version of a document or an article that captures all the important information. Along with translation, it is another example of a task that can be formulated as a sequence-to-sequence task. Summarization can be:</p> <ul data-svelte-h="svelte-1rofy1u"><li>Extractive: extract the most relevant information from a document.</li> <li>Abstractive: generate new text that captures the most relevant information.</li></ul> <p data-svelte-h="svelte-1aff4p7">This guide will show you how to:</p> <ol data-svelte-h="svelte-8msm5v"><li>Finetune <a href="https://huggingface.co/t5-small" rel="nofollow">T5</a> on the California state bill subset of the <a href="https://huggingface.co/datasets/billsum" rel="nofollow">BillSum</a> dataset for abstractive summarization.</li> <li>Use your finetuned model for inference.</li></ol> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400">The task illustrated in this tutorial is supported by the following model architectures: <p data-svelte-h="svelte-2s8ce4"><a href="../model_doc/bart">BART</a>, <a href="../model_doc/bigbird_pegasus">BigBird-Pegasus</a>, <a href="../model_doc/blenderbot">Blenderbot</a>, <a href="../model_doc/blenderbot-small">BlenderbotSmall</a>, <a href="../model_doc/encoder-decoder">Encoder decoder</a>, <a href="../model_doc/fsmt">FairSeq Machine-Translation</a>, <a href="../model_doc/gptsan-japanese">GPTSAN-japanese</a>, <a href="../model_doc/led">LED</a>, <a href="../model_doc/longt5">LongT5</a>, <a href="../model_doc/m2m_100">M2M100</a>, <a href="../model_doc/marian">Marian</a>, <a href="../model_doc/mbart">mBART</a>, <a href="../model_doc/mt5">MT5</a>, <a href="../model_doc/mvp">MVP</a>, <a href="../model_doc/nllb">NLLB</a>, <a href="../model_doc/nllb-moe">NLLB-MOE</a>, <a href="../model_doc/pegasus">Pegasus</a>, <a href="../model_doc/pegasus_x">PEGASUS-X</a>, <a href="../model_doc/plbart">PLBart</a>, <a href="../model_doc/prophetnet">ProphetNet</a>, <a 
href="../model_doc/switch_transformers">SwitchTransformers</a>, <a href="../model_doc/t5">T5</a>, <a href="../model_doc/umt5">UMT5</a>, <a href="../model_doc/xlm-prophetnet">XLM-ProphetNet</a></p></div> <p data-svelte-h="svelte-1c9nexd">Before you begin, make sure you have all the necessary libraries installed:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class="">pip install transformers datasets evaluate rouge_score</pre></div> <p data-svelte-h="svelte-k76o1m">We encourage you to login to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to login:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> huggingface_hub <span class="hljs-keyword">import</span> notebook_login <span class="hljs-meta">&gt;&gt;&gt; </span>notebook_login()</pre></div> <h2 class="relative group"><a id="load-billsum-dataset" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#load-billsum-dataset"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" 
role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1i3ekat">Load BillSum dataset</span></h2> <p data-svelte-h="svelte-l4wmf4">Start by loading the smaller California state bill subset of the BillSum dataset from the 🤗 Datasets library:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> datasets <span class="hljs-keyword">import</span> load_dataset <span class="hljs-meta">&gt;&gt;&gt; </span>billsum = load_dataset(<span class="hljs-string">"billsum"</span>, split=<span class="hljs-string">"ca_test"</span>)</pre></div> <p data-svelte-h="svelte-gqiacy">Split the dataset into a train and test set with the <a href="https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.train_test_split" rel="nofollow">train_test_split</a> method:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 
translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>billsum = billsum.train_test_split(test_size=<span class="hljs-number">0.2</span>)</pre></div> <p data-svelte-h="svelte-1m91ua0">Then take a look at an example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>billsum[<span class="hljs-string">"train"</span>][<span class="hljs-number">0</span>] {<span class="hljs-string">'summary'</span>: <span class="hljs-string">'Existing law authorizes state agencies to enter into contracts for the acquisition of goods or services upon approval by the Department of General Services. Existing law sets forth various requirements and prohibitions for those contracts, including, but not limited to, a prohibition on entering into contracts for the acquisition of goods or services of $100,000 or more with a contractor that discriminates between spouses and domestic partners or same-sex and different-sex couples in the provision of benefits. Existing law provides that a contract entered into in violation of those requirements and prohibitions is void and authorizes the state or any person acting on behalf of the state to bring a civil action seeking a determination that a contract is in violation and therefore void. Under existing law, a willful violation of those requirements and prohibitions is a misdemeanor.\nThis bill would also prohibit a state agency from entering into contracts for the acquisition of goods or services of $100,000 or more with a contractor that discriminates between employees on the basis of gender identity in the provision of benefits, as specified. By expanding the scope of a crime, this bill would impose a state-mandated local program.\nThe California Constitution requires the state to reimburse local agencies and school districts for certain costs mandated by the state. 
Statutory provisions establish procedures for making that reimbursement.\nThis bill would provide that no reimbursement is required by this act for a specified reason.'</span>, <span class="hljs-string">'text'</span>: <span class="hljs-string">'The people of the State of California do enact as follows:\n\n\nSECTION 1.\nSection 10295.35 is added to the Public Contract Code, to read:\n10295.35.\n(a) (1) Notwithstanding any other law, a state agency shall not enter into any contract for the acquisition of goods or services in the amount of one hundred thousand dollars ($100,000) or more with a contractor that, in the provision of benefits, discriminates between employees on the basis of an employee’s or dependent’s actual or perceived gender identity, including, but not limited to, the employee’s or dependent’s identification as transgender.\n(2) For purposes of this section, “contract” includes contracts with a cumulative amount of one hundred thousand dollars ($100,000) or more per contractor in each fiscal year.\n(3) For purposes of this section, an employee health plan is discriminatory if the plan is not consistent with Section 1365.5 of the Health and Safety Code and Section 10140 of the Insurance Code.\n(4) The requirements of this section shall apply only to those portions of a contractor’s operations that occur under any of the following conditions:\n(A) Within the state.\n(B) On real property outside the state if the property is owned by the state or if the state has a right to occupy the property, and if the contractor’s presence at that location is connected to a contract with the state.\n(C) Elsewhere in the United States where work related to a state contract is being performed.\n(b) Contractors shall treat as confidential, to the maximum extent allowed by law or by the requirement of the contractor’s insurance provider, any request by an employee or applicant for employment benefits or any documentation of eligibility for benefits submitted by an employee or applicant for employment.\n(c) After taking all reasonable measures to find a contractor that complies with this section, as determined by the state agency, the requirements of this section may be waived under any of the following circumstances:\n(1) There is only one prospective contractor willing to enter into a specific contract with the state agency.\n(2) The contract is necessary to respond to an emergency, as determined by the state agency, that endangers the public health, welfare, or safety, or the contract is necessary for the provision of essential services, and no entity that complies with the requirements of this section capable of responding to the emergency is immediately available.\n(3) The requirements of this section violate, or are inconsistent with, the terms or conditions of a grant, subvention, or agreement, if the agency has made a good faith attempt to change the terms or conditions of any grant, subvention, or agreement to authorize application of this section.\n(4) The contractor is providing wholesale or bulk water, power, or natural gas, the conveyance or transmission of the same, or ancillary services, as required for ensuring reliable services in accordance with good utility practice, if the purchase of the same cannot practically be accomplished through the standard competitive bidding procedures and the contractor is not providing direct retail services to end users.\n(d) (1) A contractor shall not be deemed to discriminate in the provision of benefits if the contractor, in providing the 
benefits, pays the actual costs incurred in obtaining the benefit.\n(2) If a contractor is unable to provide a certain benefit, despite taking reasonable measures to do so, the contractor shall not be deemed to discriminate in the provision of benefits.\n(e) (1) Every contract subject to this chapter shall contain a statement by which the contractor certifies that the contractor is in compliance with this section.\n(2) The department or other contracting agency shall enforce this section pursuant to its existing enforcement powers.\n(3) (A) If a contractor falsely certifies that it is in compliance with this section, the contract with that contractor shall be subject to Article 9 (commencing with Section 10420), unless, within a time period specified by the department or other contracting agency, the contractor provides to the department or agency proof that it has complied, or is in the process of complying, with this section.\n(B) The application of the remedies or penalties contained in Article 9 (commencing with Section 10420) to a contract subject to this chapter shall not preclude the application of any existing remedies otherwise available to the department or other contracting agency under its existing enforcement powers.\n(f) Nothing in this section is intended to regulate the contracting practices of any local jurisdiction.\n(g) This section shall be construed so as not to conflict with applicable federal laws, rules, or regulations. In the event that a court or agency of competent jurisdiction holds that federal law, rule, or regulation invalidates any clause, sentence, paragraph, or section of this code or the application thereof to any person or circumstances, it is the intent of the state that the court or agency sever that clause, sentence, paragraph, or section so that the remainder of this section shall remain in effect.\nSEC. 2.\nSection 10295.35 of the Public Contract Code shall not be construed to create any new enforcement authority or responsibility in the Department of General Services or any other contracting agency.\nSEC. 
3.\nNo reimbursement is required by this act pursuant to Section 6 of Article XIII\u2009B of the California Constitution because the only costs that may be incurred by a local agency or school district will be incurred because this act creates a new crime or infraction, eliminates a crime or infraction, or changes the penalty for a crime or infraction, within the meaning of Section 17556 of the Government Code, or changes the definition of a crime within the meaning of Section 6 of Article XIII\u2009B of the California Constitution.'</span>, <span class="hljs-string">'title'</span>: <span class="hljs-string">'An act to add Section 10295.35 to the Public Contract Code, relating to public contracts.'</span>}</pre></div> <p data-svelte-h="svelte-1a8v1m8">There are two fields that you’ll want to use:</p> <ul data-svelte-h="svelte-1f0gqje"><li><code>text</code>: the text of the bill which’ll be the input to the model.</li> <li><code>summary</code>: a condensed version of <code>text</code> which’ll be the model target.</li></ul> <h2 class="relative group"><a id="preprocess" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#preprocess"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1cg9qj">Preprocess</span></h2> <p data-svelte-h="svelte-lfk9rm">The next step is to load a T5 tokenizer to process <code>text</code> and <code>summary</code>:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> 
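As a quick, purely illustrative check (not part of the original guide), you can compare the raw lengths of the two fields to get a feel for how much the model has to compress:

```py
>>> # Rough character-length comparison between the input bill and its target summary
>>> for example in billsum["train"].select(range(3)):
...     print(len(example["text"]), "->", len(example["summary"]))
```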
## Preprocess

The next step is to load a T5 tokenizer to process `text` and `summary`:

```py
>>> from transformers import AutoTokenizer

>>> checkpoint = "t5-small"
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
```

The preprocessing function you want to create needs to:

1. Prefix the input with a prompt so T5 knows this is a summarization task. Some models capable of multiple NLP tasks require prompting for specific tasks.
2. Use the `text_target` keyword argument when tokenizing labels.
3. Truncate sequences to be no longer than the maximum length set by the `max_length` parameter.

```py
>>> prefix = "summarize: "

>>> def preprocess_function(examples):
...     inputs = [prefix + doc for doc in examples["text"]]
...     model_inputs = tokenizer(inputs, max_length=1024, truncation=True)
...     labels = tokenizer(text_target=examples["summary"], max_length=128, truncation=True)
...     model_inputs["labels"] = labels["input_ids"]
...     return model_inputs
```
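Before mapping the function over the whole dataset, it can help to sanity-check it on a single example. This is an illustrative sketch added here (not part of the original guide); the field names and limits simply mirror the preprocessing step above:

```py
>>> # Run the preprocessing function on one example and confirm the truncation limits
>>> sample = {"text": [billsum["train"][0]["text"]], "summary": [billsum["train"][0]["summary"]]}
>>> features = preprocess_function(sample)
>>> list(features.keys())
['input_ids', 'attention_mask', 'labels']
>>> len(features["input_ids"][0]) <= 1024, len(features["labels"][0]) <= 128
(True, True)
```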
To apply the preprocessing function over the entire dataset, use the 🤗 Datasets [map](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.map) method. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once:

```py
>>> tokenized_billsum = billsum.map(preprocess_function, batched=True)
```

Now create a batch of examples using [DataCollatorForSeq2Seq](/docs/transformers/v4.34.0/en/main_classes/data_collator#transformers.DataCollatorForSeq2Seq). It's more efficient to *dynamically pad* the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length.

**Pytorch**

```py
>>> from transformers import DataCollatorForSeq2Seq

>>> data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint)
```

**TensorFlow**

```py
>>> from transformers import DataCollatorForSeq2Seq

>>> data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint, return_tensors="tf")
```
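To see what dynamic padding does in practice, here is a minimal sketch (an illustrative addition, not from the original guide), assuming the PyTorch collator defined above; only the model input columns are passed, since the collator does not expect the raw text columns:

```py
>>> # Collate two tokenized examples of different lengths and inspect the padded batch
>>> keep = ["input_ids", "attention_mask", "labels"]
>>> features = [{k: tokenized_billsum["train"][i][k] for k in keep} for i in range(2)]
>>> batch = data_collator(features)
>>> batch["input_ids"].shape, batch["labels"].shape  # padded to the longest example in this batch
```

Label positions added as padding are filled with `-100` so they are ignored by the loss.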
## Evaluate

Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load the [ROUGE](https://huggingface.co/spaces/evaluate-metric/rouge) metric (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric):

```py
>>> import evaluate

>>> rouge = evaluate.load("rouge")
```

Then create a function that passes your predictions and labels to [compute](https://huggingface.co/docs/evaluate/v0.4.0/en/package_reference/main_classes#evaluate.EvaluationModule.compute) to calculate the ROUGE metric:

```py
>>> import numpy as np

>>> def compute_metrics(eval_pred):
...     predictions, labels = eval_pred
...     decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True)
...     labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
...     decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
...     result = rouge.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True)
...     prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in predictions]
...     result["gen_len"] = np.mean(prediction_lens)
...     return {k: round(v, 4) for k, v in result.items()}
```

Your `compute_metrics` function is ready to go now, and you'll return to it when you set up your training.
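Before wiring `compute_metrics` into the trainer, you can sanity-check the metric itself on a toy prediction/reference pair. This is an illustrative addition (not part of the original guide); the sentences are made up:

```py
>>> scores = rouge.compute(
...     predictions=["the bill expands existing contract requirements"],
...     references=["the bill expands existing requirements for state contracts"],
...     use_stemmer=True,
... )
>>> sorted(scores.keys())
['rouge1', 'rouge2', 'rougeL', 'rougeLsum']
```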
d="M24.94,9.51a12.81,12.81,0,0,1,0,18.16,12.68,12.68,0,0,1-18,0,12.81,12.81,0,0,1,0-18.16l9-9V5l-.84.83-6,6a9.58,9.58,0,1,0,13.55,0ZM20.44,9a1.68,1.68,0,1,1,1.67-1.67A1.68,1.68,0,0,1,20.44,9Z" fill="#ee4c2c"></path></g></svg> <span>Pytorch</span></div> <div class="cursor-pointer flex items-center justify-center space-x-1 text-sm px-2 bg-white dark:bg-gray-950 hover:underline leading-none"><svg class="" width="0.9em" height="0.9em" viewBox="0 0 10 9" fill="currentColor" xmlns="http://www.w3.org/2000/svg"><path d="M1.39125 1.9725L0.0883333 0.669997L0.677917 0.0804138L8.9275 8.33041L8.33792 8.91958L6.95875 7.54041C6.22592 8.00523 5.37572 8.25138 4.50792 8.25C2.26125 8.25 0.392083 6.63333 0 4.5C0.179179 3.52946 0.667345 2.64287 1.39167 1.9725H1.39125ZM5.65667 6.23833L5.04667 5.62833C4.81335 5.73996 4.55116 5.77647 4.29622 5.73282C4.04129 5.68918 3.80617 5.56752 3.62328 5.38463C3.44039 5.20175 3.31874 4.96663 3.27509 4.71169C3.23144 4.45676 3.26795 4.19456 3.37958 3.96125L2.76958 3.35125C2.50447 3.75187 2.38595 4.2318 2.4341 4.70978C2.48225 5.18777 2.6941 5.63442 3.0338 5.97411C3.37349 6.31381 3.82015 6.52567 4.29813 6.57382C4.77611 6.62197 5.25605 6.50345 5.65667 6.23833ZM2.83042 1.06666C3.35 0.862497 3.91625 0.749997 4.50792 0.749997C6.75458 0.749997 8.62375 2.36666 9.01583 4.5C8.88816 5.19404 8.60119 5.84899 8.1775 6.41333L6.56917 4.805C6.61694 4.48317 6.58868 4.15463 6.48664 3.84569C6.3846 3.53675 6.21162 3.256 5.98156 3.02594C5.7515 2.79588 5.47075 2.6229 5.16181 2.52086C4.85287 2.41882 4.52433 2.39056 4.2025 2.43833L2.83042 1.06708V1.06666Z" fill="currentColor"></path></svg> <span>Hide Pytorch content</span></div></div> <div class="framework-content"><div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-1ufp0ay">If you aren’t familiar with finetuning a model with the <a href="/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer">Trainer</a>, take a look at the basic tutorial <a href="../training#train-with-pytorch-trainer">here</a>!</p></div> <p data-svelte-h="svelte-1h2b6hn">You’re ready to start training your model now! 
Load T5 with <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoModelForSeq2SeqLM">AutoModelForSeq2SeqLM</a>:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments, Seq2SeqTrainer <span class="hljs-meta">&gt;&gt;&gt; </span>model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)</pre></div> <p data-svelte-h="svelte-l42k0i">At this point, only three steps remain:</p> <ol data-svelte-h="svelte-sbjy7i"><li>Define your training hyperparameters in <a href="/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Seq2SeqTrainingArguments">Seq2SeqTrainingArguments</a>. The only required parameter is <code>output_dir</code> which specifies where to save your model. You’ll push this model to the Hub by setting <code>push_to_hub=True</code> (you need to be signed in to Hugging Face to upload your model). 
At the end of each epoch, the <a href="/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer">Trainer</a> will evaluate the ROUGE metric and save the training checkpoint.</li> <li>Pass the training arguments to <a href="/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Seq2SeqTrainer">Seq2SeqTrainer</a> along with the model, dataset, tokenizer, data collator, and <code>compute_metrics</code> function.</li> <li>Call <a href="/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.train">train()</a> to finetune your model.</li></ol> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>training_args = Seq2SeqTrainingArguments( <span class="hljs-meta">... </span> output_dir=<span class="hljs-string">"my_awesome_billsum_model"</span>, <span class="hljs-meta">... </span> evaluation_strategy=<span class="hljs-string">"epoch"</span>, <span class="hljs-meta">... </span> learning_rate=<span class="hljs-number">2e-5</span>, <span class="hljs-meta">... </span> per_device_train_batch_size=<span class="hljs-number">16</span>, <span class="hljs-meta">... </span> per_device_eval_batch_size=<span class="hljs-number">16</span>, <span class="hljs-meta">... </span> weight_decay=<span class="hljs-number">0.01</span>, <span class="hljs-meta">... </span> save_total_limit=<span class="hljs-number">3</span>, <span class="hljs-meta">... </span> num_train_epochs=<span class="hljs-number">4</span>, <span class="hljs-meta">... </span> predict_with_generate=<span class="hljs-literal">True</span>, <span class="hljs-meta">... </span> fp16=<span class="hljs-literal">True</span>, <span class="hljs-meta">... </span> push_to_hub=<span class="hljs-literal">True</span>, <span class="hljs-meta">... </span>) <span class="hljs-meta">&gt;&gt;&gt; </span>trainer = Seq2SeqTrainer( <span class="hljs-meta">... </span> model=model, <span class="hljs-meta">... </span> args=training_args, <span class="hljs-meta">... </span> train_dataset=tokenized_billsum[<span class="hljs-string">"train"</span>], <span class="hljs-meta">... </span> eval_dataset=tokenized_billsum[<span class="hljs-string">"test"</span>], <span class="hljs-meta">... </span> tokenizer=tokenizer, <span class="hljs-meta">... </span> data_collator=data_collator, <span class="hljs-meta">... </span> compute_metrics=compute_metrics, <span class="hljs-meta">... 
</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>trainer.train()</pre></div> <p data-svelte-h="svelte-cv8z08">Once training is completed, share your model to the Hub with the <a href="/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.push_to_hub">push_to_hub()</a> method so everyone can use your model:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>trainer.push_to_hub()</pre></div></div></div> <div class="border border-gray-200 rounded-xl px-4 relative"><div class="flex h-[22px] mt-[-12.5px] justify-between leading-none"><div class="flex px-1 items-center space-x-1 bg-white dark:bg-gray-950"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="0.94em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 274"><path d="M145.726 42.065v42.07l72.861 42.07v-42.07l-72.86-42.07zM0 84.135v42.07l36.43 21.03V105.17L0 84.135zm109.291 21.035l-36.43 21.034v126.2l36.43 21.035v-84.135l36.435 21.035v-42.07l-36.435-21.034V105.17z" fill="#E55B2D"></path><path d="M145.726 42.065L36.43 105.17v42.065l72.861-42.065v42.065l36.435-21.03v-84.14zM255.022 63.1l-36.435 21.035v42.07l36.435-21.035V63.1zm-72.865 84.135l-36.43 21.035v42.07l36.43-21.036v-42.07zm-36.43 63.104l-36.436-21.035v84.135l36.435-21.035V210.34z" fill="#ED8E24"></path><path d="M145.726 0L0 84.135l36.43 21.035l109.296-63.105l72.861 42.07L255.022 63.1L145.726 0zm0 126.204l-36.435 21.03l36.435 21.036l36.43-21.035l-36.43-21.03z" fill="#F8BF3C"></path></svg> <span>TensorFlow</span></div> <div class="cursor-pointer flex items-center justify-center space-x-1 text-sm px-2 bg-white dark:bg-gray-950 hover:underline leading-none"><svg class="" width="0.9em" height="0.9em" viewBox="0 0 10 9" fill="currentColor" xmlns="http://www.w3.org/2000/svg"><path d="M1.39125 1.9725L0.0883333 0.669997L0.677917 0.0804138L8.9275 8.33041L8.33792 8.91958L6.95875 7.54041C6.22592 8.00523 5.37572 8.25138 4.50792 8.25C2.26125 8.25 0.392083 6.63333 0 4.5C0.179179 3.52946 0.667345 2.64287 1.39167 1.9725H1.39125ZM5.65667 6.23833L5.04667 5.62833C4.81335 5.73996 4.55116 5.77647 4.29622 5.73282C4.04129 5.68918 3.80617 5.56752 3.62328 5.38463C3.44039 5.20175 3.31874 4.96663 3.27509 4.71169C3.23144 4.45676 3.26795 4.19456 3.37958 3.96125L2.76958 3.35125C2.50447 3.75187 2.38595 4.2318 2.4341 
4.70978C2.48225 5.18777 2.6941 5.63442 3.0338 5.97411C3.37349 6.31381 3.82015 6.52567 4.29813 6.57382C4.77611 6.62197 5.25605 6.50345 5.65667 6.23833ZM2.83042 1.06666C3.35 0.862497 3.91625 0.749997 4.50792 0.749997C6.75458 0.749997 8.62375 2.36666 9.01583 4.5C8.88816 5.19404 8.60119 5.84899 8.1775 6.41333L6.56917 4.805C6.61694 4.48317 6.58868 4.15463 6.48664 3.84569C6.3846 3.53675 6.21162 3.256 5.98156 3.02594C5.7515 2.79588 5.47075 2.6229 5.16181 2.52086C4.85287 2.41882 4.52433 2.39056 4.2025 2.43833L2.83042 1.06708V1.06666Z" fill="currentColor"></path></svg> <span>Hide TensorFlow content</span></div></div> <div class="framework-content"><div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-1rd4nl8">If you aren’t familiar with finetuning a model with Keras, take a look at the basic tutorial <a href="../training#train-a-tensorflow-model-with-keras">here</a>!</p></div> To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters: <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> create_optimizer, AdamWeightDecay <span class="hljs-meta">&gt;&gt;&gt; </span>optimizer = AdamWeightDecay(learning_rate=<span class="hljs-number">2e-5</span>, weight_decay_rate=<span class="hljs-number">0.01</span>)</pre></div> <p data-svelte-h="svelte-d8bckx">Then you can load T5 with <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.TFAutoModelForSeq2SeqLM">TFAutoModelForSeq2SeqLM</a>:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" 
transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> TFAutoModelForSeq2SeqLM <span class="hljs-meta">&gt;&gt;&gt; </span>model = TFAutoModelForSeq2SeqLM.from_pretrained(checkpoint)</pre></div> <p data-svelte-h="svelte-qmwuyd">Convert your datasets to the <code>tf.data.Dataset</code> format with <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel.prepare_tf_dataset">prepare_tf_dataset()</a>:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>tf_train_set = model.prepare_tf_dataset( <span class="hljs-meta">... </span> tokenized_billsum[<span class="hljs-string">"train"</span>], <span class="hljs-meta">... </span> shuffle=<span class="hljs-literal">True</span>, <span class="hljs-meta">... </span> batch_size=<span class="hljs-number">16</span>, <span class="hljs-meta">... </span> collate_fn=data_collator, <span class="hljs-meta">... </span>) <span class="hljs-meta">&gt;&gt;&gt; </span>tf_test_set = model.prepare_tf_dataset( <span class="hljs-meta">... </span> tokenized_billsum[<span class="hljs-string">"test"</span>], <span class="hljs-meta">... </span> shuffle=<span class="hljs-literal">False</span>, <span class="hljs-meta">... </span> batch_size=<span class="hljs-number">16</span>, <span class="hljs-meta">... </span> collate_fn=data_collator, <span class="hljs-meta">... </span>)</pre></div> <p data-svelte-h="svelte-17cxx5e">Configure the model for training with <a href="https://keras.io/api/models/model_training_apis/#compile-method" rel="nofollow"><code>compile</code></a>. 
Note that Transformers models all have a default task-relevant loss function, so you don’t need to specify one unless you want to:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> tensorflow <span class="hljs-keyword">as</span> tf <span class="hljs-meta">&gt;&gt;&gt; </span>model.<span class="hljs-built_in">compile</span>(optimizer=optimizer) <span class="hljs-comment"># No loss argument!</span></pre></div> <p data-svelte-h="svelte-ugq3ja">The last two things to setup before you start training is to compute the ROUGE score from the predictions, and provide a way to push your model to the Hub. 
Both are done by using <a href="../main_classes/keras_callbacks">Keras callbacks</a>.</p> <p data-svelte-h="svelte-6vs5z9">Pass your <code>compute_metrics</code> function to <a href="/docs/transformers/v4.34.0/en/main_classes/keras_callbacks#transformers.KerasMetricCallback">KerasMetricCallback</a>:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers.keras_callbacks <span class="hljs-keyword">import</span> KerasMetricCallback <span class="hljs-meta">&gt;&gt;&gt; </span>metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set)</pre></div> <p data-svelte-h="svelte-b2vwd">Specify where to push your model and tokenizer in the <a href="/docs/transformers/v4.34.0/en/main_classes/keras_callbacks#transformers.PushToHubCallback">PushToHubCallback</a>:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers.keras_callbacks <span class="hljs-keyword">import</span> PushToHubCallback <span class="hljs-meta">&gt;&gt;&gt; </span>push_to_hub_callback = PushToHubCallback( <span class="hljs-meta">... 
Specify where to push your model and tokenizer in the [PushToHubCallback](/docs/transformers/v4.34.0/en/main_classes/keras_callbacks#transformers.PushToHubCallback):

```
>>> from transformers.keras_callbacks import PushToHubCallback

>>> push_to_hub_callback = PushToHubCallback(
...     output_dir="my_awesome_billsum_model",
...     tokenizer=tokenizer,
... )
```

Then bundle your callbacks together:

```
>>> callbacks = [metric_callback, push_to_hub_callback]
```

Finally, you’re ready to start training your model! Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training and validation datasets, the number of epochs, and your callbacks to finetune the model:

```
>>> model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=callbacks)
```

Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!

For a more in-depth example of how to finetune a model for summarization, take a look at the corresponding [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/summarization.ipynb) or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/summarization-tf.ipynb).

## Inference

Great, now that you’ve finetuned a model, you can use it for inference!

Come up with some text you’d like to summarize. For T5, you need to prefix your input depending on the task you’re working on. For summarization you should prefix your input as shown below:

```
>>> text = "summarize: The Inflation Reduction Act lowers prescription drug costs, health care costs, and energy costs. It's the most aggressive action on tackling the climate crisis in American history, which will lift up American workers and create good-paying, union jobs across the country. It'll lower the deficit and ask the ultra-wealthy and corporations to pay their fair share. And no one making under $400,000 per year will pay a penny more in taxes."
```

The simplest way to try out your finetuned model for inference is to use it in a [pipeline()](/docs/transformers/v4.34.0/en/main_classes/pipelines#transformers.pipeline). Instantiate a `pipeline` for summarization with your model, and pass your text to it:

```
>>> from transformers import pipeline

>>> summarizer = pipeline("summarization", model="stevhliu/my_awesome_billsum_model")
>>> summarizer(text)
[{"summary_text": "The Inflation Reduction Act lowers prescription drug costs, health care costs, and energy costs. It's the most aggressive action on tackling the climate crisis in American history, which will lift up American workers and create good-paying, union jobs across the country."}]
```

You can also manually replicate the results of the `pipeline` if you’d like:

Tokenize the text and return the `input_ids` as PyTorch tensors:

```
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_billsum_model")
>>> inputs = tokenizer(text, return_tensors="pt").input_ids
```

Use the [generate()](/docs/transformers/v4.34.0/en/main_classes/text_generation#transformers.GenerationMixin.generate) method to create the summarization. For more details about the different text generation strategies and parameters for controlling generation, check out the [Text Generation](../main_classes/text_generation) API.

```
>>> from transformers import AutoModelForSeq2SeqLM

>>> model = AutoModelForSeq2SeqLM.from_pretrained("stevhliu/my_awesome_billsum_model")
>>> outputs = model.generate(inputs, max_new_tokens=100, do_sample=False)
```

Decode the generated token ids back into text:

```
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
"the inflation reduction act lowers prescription drug costs, health care costs, and energy costs. it's the most aggressive action on tackling the climate crisis in american history. it will ask the ultra-wealthy and corporations to pay their fair share."
```

Tokenize the text and return the `input_ids` as TensorFlow tensors:

```
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_billsum_model")
>>> inputs = tokenizer(text, return_tensors="tf").input_ids
```

Use the [generate()](/docs/transformers/v4.34.0/en/main_classes/text_generation#transformers.TFGenerationMixin.generate) method to create the summarization. For more details about the different text generation strategies and parameters for controlling generation, check out the [Text Generation](../main_classes/text_generation) API.

```
>>> from transformers import TFAutoModelForSeq2SeqLM

>>> model = TFAutoModelForSeq2SeqLM.from_pretrained("stevhliu/my_awesome_billsum_model")
>>> outputs = model.generate(inputs, max_new_tokens=100, do_sample=False)
```

Decode the generated token ids back into text:

```
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
"the inflation reduction act lowers prescription drug costs, health care costs, and energy costs. it's the most aggressive action on tackling the climate crisis in american history. it will ask the ultra-wealthy and corporations to pay their fair share."
```
2023-10-05T13:33:47.294Z
Question answering
https://huggingface.co/docs/transformers/v4.34.0/en/tasks/question_answering
# Question answering

Question answering tasks return an answer given a question. If you’ve ever asked a virtual assistant like Alexa, Siri or Google what the weather is, then you’ve used a question answering model before. There are two common types of question answering tasks:

- Extractive: extract the answer from the given context.
- Abstractive: generate an answer from the context that correctly answers the question.

This guide will show you how to:

1. Finetune [DistilBERT](https://huggingface.co/distilbert-base-uncased) on the [SQuAD](https://huggingface.co/datasets/squad) dataset for extractive question answering.
2. Use your finetuned model for inference.

The task illustrated in this tutorial is supported by the following model architectures:

[ALBERT](../model_doc/albert), [BART](../model_doc/bart), [BERT](../model_doc/bert), [BigBird](../model_doc/big_bird), [BigBird-Pegasus](../model_doc/bigbird_pegasus), [BLOOM](../model_doc/bloom), [CamemBERT](../model_doc/camembert), [CANINE](../model_doc/canine), [ConvBERT](../model_doc/convbert), [Data2VecText](../model_doc/data2vec-text), [DeBERTa](../model_doc/deberta), [DeBERTa-v2](../model_doc/deberta-v2), [DistilBERT](../model_doc/distilbert), [ELECTRA](../model_doc/electra), [ERNIE](../model_doc/ernie), [ErnieM](../model_doc/ernie_m), [Falcon](../model_doc/falcon), [FlauBERT](../model_doc/flaubert), [FNet](../model_doc/fnet), [Funnel Transformer](../model_doc/funnel), [OpenAI GPT-2](../model_doc/gpt2), [GPT Neo](../model_doc/gpt_neo), [GPT NeoX](../model_doc/gpt_neox), [GPT-J](../model_doc/gptj), [I-BERT](../model_doc/ibert), [LayoutLMv2](../model_doc/layoutlmv2), [LayoutLMv3](../model_doc/layoutlmv3), [LED](../model_doc/led), [LiLT](../model_doc/lilt), [Longformer](../model_doc/longformer), [LUKE](../model_doc/luke), [LXMERT](../model_doc/lxmert), [MarkupLM](../model_doc/markuplm), [mBART](../model_doc/mbart), [MEGA](../model_doc/mega), [Megatron-BERT](../model_doc/megatron-bert), [MobileBERT](../model_doc/mobilebert), [MPNet](../model_doc/mpnet), [MPT](../model_doc/mpt), [MRA](../model_doc/mra), [MT5](../model_doc/mt5), [MVP](../model_doc/mvp), [Nezha](../model_doc/nezha), [Nyströmformer](../model_doc/nystromformer), [OPT](../model_doc/opt), [QDQBert](../model_doc/qdqbert), [Reformer](../model_doc/reformer), [RemBERT](../model_doc/rembert), [RoBERTa](../model_doc/roberta), [RoBERTa-PreLayerNorm](../model_doc/roberta-prelayernorm), [RoCBert](../model_doc/roc_bert), [RoFormer](../model_doc/roformer), [Splinter](../model_doc/splinter), [SqueezeBERT](../model_doc/squeezebert), [T5](../model_doc/t5), [UMT5](../model_doc/umt5), [XLM](../model_doc/xlm), [XLM-RoBERTa](../model_doc/xlm-roberta), [XLM-RoBERTa-XL](../model_doc/xlm-roberta-xl), [XLNet](../model_doc/xlnet), [X-MOD](../model_doc/xmod), [YOSO](../model_doc/yoso)

Before you begin, make sure you have all the necessary libraries installed:

```
pip install transformers datasets evaluate
```

We encourage you to login to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to login:

```
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```

## Load SQuAD dataset

Start by loading a smaller subset of the SQuAD dataset from the 🤗 Datasets library. This’ll give you a chance to experiment and make sure everything works before spending more time training on the full dataset.
```
>>> from datasets import load_dataset

>>> squad = load_dataset("squad", split="train[:5000]")
```

Split the dataset’s `train` split into a train and test set with the [train_test_split](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.train_test_split) method:

```
>>> squad = squad.train_test_split(test_size=0.2)
```

Then take a look at an example:

```
>>> squad["train"][0]
{'answers': {'answer_start': [515], 'text': ['Saint Bernadette Soubirous']},
 'context': 'Architecturally, the school has a Catholic character. Atop the Main Building\'s gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend "Venite Ad Me Omnes". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary.',
 'id': '5733be284776f41900661182',
 'question': 'To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?',
 'title': 'University_of_Notre_Dame'
}
```

There are several important fields here:

- `answers`: the starting location of the answer token and the answer text.
- `context`: background information from which the model needs to extract the answer.
- `question`: the question a model should answer.

## Preprocess

The next step is to load a DistilBERT tokenizer to process the `question` and `context` fields:

```
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
```

There are a few preprocessing steps particular to question answering tasks you should be aware of:

1. Some examples in a dataset may have a very long `context` that exceeds the maximum input length of the model. To deal with longer sequences, truncate only the `context` by setting `truncation="only_second"`.
2. Next, map the start and end positions of the answer to the original `context` by setting `return_offsets_mapping=True`.
3. With the mapping in hand, now you can find the start and end tokens of the answer. Use the `sequence_ids` method to find which part of the offset corresponds to the `question` and which corresponds to the `context`.

Here is how you can create a function to truncate and map the start and end tokens of the `answer` to the `context`:

```
>>> def preprocess_function(examples):
...     questions = [q.strip() for q in examples["question"]]
...     inputs = tokenizer(
...         questions,
...         examples["context"],
...         max_length=384,
...         truncation="only_second",
...         return_offsets_mapping=True,
...         padding="max_length",
...     )
...
...     offset_mapping = inputs.pop("offset_mapping")
...     answers = examples["answers"]
...     start_positions = []
...     end_positions = []
...
...     for i, offset in enumerate(offset_mapping):
...         answer = answers[i]
...         start_char = answer["answer_start"][0]
...         end_char = answer["answer_start"][0] + len(answer["text"][0])
...         sequence_ids = inputs.sequence_ids(i)
...
...         idx = 0
...         while sequence_ids[idx] != 1:
...             idx += 1
...         context_start = idx
...         while sequence_ids[idx] == 1:
...             idx += 1
...         context_end = idx - 1
...
...         if offset[context_start][0] > end_char or offset[context_end][1] < start_char:
...             start_positions.append(0)
...             end_positions.append(0)
...         else:
...             idx = context_start
...             while idx <= context_end and offset[idx][0] <= start_char:
...                 idx += 1
...             start_positions.append(idx - 1)
...
...             idx = context_end
...             while idx >= context_start and offset[idx][1] >= end_char:
...                 idx -= 1
...             end_positions.append(idx + 1)
...
...     inputs["start_positions"] = start_positions
...     inputs["end_positions"] = end_positions
...     return inputs
```

To apply the preprocessing function over the entire dataset, use the 🤗 Datasets [map](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.map) function. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once. Remove any columns you don’t need:

```
>>> tokenized_squad = squad.map(preprocess_function, batched=True, remove_columns=squad["train"].column_names)
```

Now create a batch of examples using [DefaultDataCollator](/docs/transformers/v4.34.0/en/main_classes/data_collator#transformers.DefaultDataCollator). Unlike other data collators in 🤗 Transformers, the [DefaultDataCollator](/docs/transformers/v4.34.0/en/main_classes/data_collator#transformers.DefaultDataCollator) does not apply any additional preprocessing such as padding.

```
>>> from transformers import DefaultDataCollator

>>> data_collator = DefaultDataCollator()
```

```
>>> from transformers import DefaultDataCollator

>>> data_collator = DefaultDataCollator(return_tensors="tf")
```

## Train

If you aren’t familiar with finetuning a model with the [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer), take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!

You’re ready to start training your model now! Load DistilBERT with [AutoModelForQuestionAnswering](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoModelForQuestionAnswering):

```
>>> from transformers import AutoModelForQuestionAnswering, TrainingArguments, Trainer

>>> model = AutoModelForQuestionAnswering.from_pretrained("distilbert-base-uncased")
```

At this point, only three steps remain:

1. Define your training hyperparameters in [TrainingArguments](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.TrainingArguments). The only required parameter is `output_dir` which specifies where to save your model. You’ll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model).
2. Pass the training arguments to [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) along with the model, dataset, tokenizer, and data collator.
3. Call [train()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.train) to finetune your model.

```
>>> training_args = TrainingArguments(
...     output_dir="my_awesome_qa_model",
...     evaluation_strategy="epoch",
...     learning_rate=2e-5,
...     per_device_train_batch_size=16,
...     per_device_eval_batch_size=16,
...     num_train_epochs=3,
...     weight_decay=0.01,
...     push_to_hub=True,
... )

>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=tokenized_squad["train"],
...     eval_dataset=tokenized_squad["test"],
...     tokenizer=tokenizer,
...     data_collator=data_collator,
... )

>>> trainer.train()
```
Once training is completed, share your model to the Hub with the [push_to_hub()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.push_to_hub) method so everyone can use your model:

```
>>> trainer.push_to_hub()
```

If you aren’t familiar with finetuning a model with Keras, take a look at the basic tutorial [here](../training#train-a-tensorflow-model-with-keras)!

To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:

```
>>> from transformers import create_optimizer

>>> batch_size = 16
>>> num_epochs = 2
>>> total_train_steps = (len(tokenized_squad["train"]) // batch_size) * num_epochs
>>> optimizer, schedule = create_optimizer(
...     init_lr=2e-5,
...     num_warmup_steps=0,
...     num_train_steps=total_train_steps,
... )
```

Then you can load DistilBERT with [TFAutoModelForQuestionAnswering](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.TFAutoModelForQuestionAnswering):

```
>>> from transformers import TFAutoModelForQuestionAnswering

>>> model = TFAutoModelForQuestionAnswering.from_pretrained("distilbert-base-uncased")
```

Convert your datasets to the `tf.data.Dataset` format with [prepare_tf_dataset()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel.prepare_tf_dataset):

```
>>> tf_train_set = model.prepare_tf_dataset(
...     tokenized_squad["train"],
...     shuffle=True,
...     batch_size=16,
...     collate_fn=data_collator,
... )

>>> tf_validation_set = model.prepare_tf_dataset(
...     tokenized_squad["test"],
...     shuffle=False,
...     batch_size=16,
...     collate_fn=data_collator,
... )
```

Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method):

```
>>> import tensorflow as tf

>>> model.compile(optimizer=optimizer)
```

The last thing to set up before you start training is to provide a way to push your model to the Hub. This can be done by specifying where to push your model and tokenizer in the [PushToHubCallback](/docs/transformers/v4.34.0/en/main_classes/keras_callbacks#transformers.PushToHubCallback):

```
>>> from transformers.keras_callbacks import PushToHubCallback

>>> callback = PushToHubCallback(
...     output_dir="my_awesome_qa_model",
...     tokenizer=tokenizer,
... )
```

Finally, you’re ready to start training your model! Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training and validation datasets, the number of epochs, and your callback to finetune the model:

```
>>> model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=3, callbacks=[callback])
```

Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!

For a more in-depth example of how to finetune a model for question answering, take a look at the corresponding [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb) or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb).

## Evaluate

Evaluation for question answering requires a significant amount of postprocessing. To avoid taking up too much of your time, this guide skips the evaluation step. The [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) still calculates the evaluation loss during training so you’re not completely in the dark about your model’s performance.
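If you want a quick, rough measure of quality without implementing the full postprocessing, one option is to generate answers on the test split with the `question-answering` pipeline and score them with the SQuAD metric from 🤗 Evaluate. The following is only a sketch under those assumptions; it is slower than a proper batched evaluation loop and is not part of the original guide:

```
>>> # Sketch only: score pipeline predictions on a small sample of test examples with the SQuAD metric.
>>> import evaluate
>>> from transformers import pipeline

>>> squad_metric = evaluate.load("squad")
>>> qa = pipeline("question-answering", model="my_awesome_qa_model")

>>> examples = squad["test"].select(range(100))  # small sample to keep it fast
>>> predictions = [
...     {"id": ex["id"], "prediction_text": qa(question=ex["question"], context=ex["context"])["answer"]}
...     for ex in examples
... ]
>>> references = [{"id": ex["id"], "answers": ex["answers"]} for ex in examples]
>>> squad_metric.compute(predictions=predictions, references=references)
```

For the full postprocessing (handling long contexts, n-best candidate spans, and so on), the course chapter linked below walks through it step by step.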
If you have more time and you’re interested in how to evaluate your model for question answering, take a look at the [Question answering](https://huggingface.co/course/chapter7/7?fw=pt#postprocessing) chapter from the 🤗 Hugging Face Course!

## Inference

Great, now that you’ve finetuned a model, you can use it for inference!

Come up with a question and some context you’d like the model to predict:

```
>>> question = "How many programming languages does BLOOM support?"
>>> context = "BLOOM has 176 billion parameters and can generate text in 46 languages natural languages and 13 programming languages."
```

The simplest way to try out your finetuned model for inference is to use it in a [pipeline()](/docs/transformers/v4.34.0/en/main_classes/pipelines#transformers.pipeline). Instantiate a `pipeline` for question answering with your model, and pass your text to it:

```
>>> from transformers import pipeline

>>> question_answerer = pipeline("question-answering", model="my_awesome_qa_model")
>>> question_answerer(question=question, context=context)
{'score': 0.2058267742395401,
 'start': 10,
 'end': 95,
 'answer': '176 billion parameters and can generate text in 46 languages natural languages and 13'}
```

You can also manually replicate the results of the `pipeline` if you’d like:

Tokenize the text and return PyTorch tensors:

```
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_qa_model")
>>> inputs = tokenizer(question, context, return_tensors="pt")
```

Pass your inputs to the model and return the `logits`:

```
>>> import torch
>>> from transformers import AutoModelForQuestionAnswering

>>> model = AutoModelForQuestionAnswering.from_pretrained("my_awesome_qa_model")
>>> with torch.no_grad():
...     outputs = model(**inputs)
```

Get the highest probability from the model output for the start and end positions:

```
>>> answer_start_index = outputs.start_logits.argmax()
>>> answer_end_index = outputs.end_logits.argmax()
```

Decode the predicted tokens to get the answer:

```
>>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
>>> tokenizer.decode(predict_answer_tokens)
'176 billion parameters and can generate text in 46 languages natural languages and 13'
```

Tokenize the text and return TensorFlow tensors:

```
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_qa_model")
>>> inputs = tokenizer(question, context, return_tensors="tf")
```

Pass your inputs to the model and return the `logits`:

```
>>> from transformers import TFAutoModelForQuestionAnswering

>>> model = TFAutoModelForQuestionAnswering.from_pretrained("my_awesome_qa_model")
>>> outputs = model(**inputs)
```

Get the highest probability from the model output for the start and end positions:

```
>>> import tensorflow as tf

>>> answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
>>> answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
```

Decode the predicted tokens to get the answer:

```
>>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
>>> tokenizer.decode(predict_answer_tokens)
'176 billion parameters and can generate text in 46 languages natural languages and 13'
```
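The `pipeline` also reports a confidence `score` for the answer, while the manual replication above only takes the argmax of the logits. If you want a rough score for the selected span, one option is to multiply the softmax probabilities of the chosen start and end positions; this is a simplification of the pipeline’s actual scoring, which considers multiple candidate spans. A minimal sketch using the TensorFlow outputs from above:

```
>>> # Rough span score: product of start and end probabilities for the selected indices.
>>> # This is a simplification of the pipeline's scoring, shown for illustration only.
>>> start_probs = tf.nn.softmax(outputs.start_logits, axis=-1)
>>> end_probs = tf.nn.softmax(outputs.end_logits, axis=-1)
>>> float(start_probs[0, answer_start_index] * end_probs[0, answer_end_index])
```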
modeling&quot;,&quot;id&quot;:&quot;tasks/masked_language_modeling&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/masked_language_modeling&quot;},{&quot;title&quot;:&quot;Translation&quot;,&quot;id&quot;:&quot;tasks/translation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/translation&quot;},{&quot;title&quot;:&quot;Summarization&quot;,&quot;id&quot;:&quot;tasks/summarization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/summarization&quot;},{&quot;title&quot;:&quot;Multiple choice&quot;,&quot;id&quot;:&quot;tasks/multiple_choice&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/multiple_choice&quot;}]},{&quot;title&quot;:&quot;Audio&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio classification&quot;,&quot;id&quot;:&quot;tasks/audio_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/audio_classification&quot;},{&quot;title&quot;:&quot;Automatic speech recognition&quot;,&quot;id&quot;:&quot;tasks/asr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/asr&quot;}]},{&quot;title&quot;:&quot;Computer Vision&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Image classification&quot;,&quot;id&quot;:&quot;tasks/image_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/image_classification&quot;},{&quot;title&quot;:&quot;Semantic segmentation&quot;,&quot;id&quot;:&quot;tasks/semantic_segmentation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/semantic_segmentation&quot;},{&quot;title&quot;:&quot;Video classification&quot;,&quot;id&quot;:&quot;tasks/video_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/video_classification&quot;},{&quot;title&quot;:&quot;Object detection&quot;,&quot;id&quot;:&quot;tasks/object_detection&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/object_detection&quot;},{&quot;title&quot;:&quot;Zero-shot object detection&quot;,&quot;id&quot;:&quot;tasks/zero_shot_object_detection&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/zero_shot_object_detection&quot;},{&quot;title&quot;:&quot;Zero-shot image classification&quot;,&quot;id&quot;:&quot;tasks/zero_shot_image_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/zero_shot_image_classification&quot;},{&quot;title&quot;:&quot;Depth estimation&quot;,&quot;id&quot;:&quot;tasks/monocular_depth_estimation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/monocular_depth_estimation&quot;}]},{&quot;title&quot;:&quot;Multimodal&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Image captioning&quot;,&quot;id&quot;:&quot;tasks/image_captioning&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/image_captioning&quot;},{&quot;title&quot;:&quot;Document Question Answering&quot;,&quot;id&quot;:&quot;tasks/document_question_answering&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/document_question_answering&quot;},{&quot;title&quot;:&quot;Visual Question Answering&quot;,&quot;id&quot;:&quot;tasks/visual_question_answering&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/visual_question_answering&quot;},{&quot;title&quot;:&quot;Text to 
speech&quot;,&quot;id&quot;:&quot;tasks/text-to-speech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/text-to-speech&quot;}]},{&quot;title&quot;:&quot;Generation&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Customize the generation strategy&quot;,&quot;id&quot;:&quot;generation_strategies&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/generation_strategies&quot;}]},{&quot;title&quot;:&quot;Prompting&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Image tasks with IDEFICS&quot;,&quot;id&quot;:&quot;tasks/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/idefics&quot;}]}]},{&quot;title&quot;:&quot;Developer guides&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Use fast tokenizers from 🤗 Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;fast_tokenizers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/fast_tokenizers&quot;},{&quot;title&quot;:&quot;Run inference with multilingual models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;multilingual&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/multilingual&quot;},{&quot;title&quot;:&quot;Use model-specific APIs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;create_a_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/create_a_model&quot;},{&quot;title&quot;:&quot;Share a custom model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;custom_models&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/custom_models&quot;},{&quot;title&quot;:&quot;Templates for chat models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;chat_templating&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/chat_templating&quot;},{&quot;title&quot;:&quot;Run training on Amazon SageMaker&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;sagemaker&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/sagemaker&quot;},{&quot;title&quot;:&quot;Export to ONNX&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;serialization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/serialization&quot;},{&quot;title&quot;:&quot;Export to TFLite&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tflite&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tflite&quot;},{&quot;title&quot;:&quot;Export to TorchScript&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;torchscript&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/torchscript&quot;},{&quot;title&quot;:&quot;Benchmarks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;benchmarks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/benchmarks&quot;},{&quot;title&quot;:&quot;Notebooks with examples&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;notebooks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/notebooks&quot;},{&quot;title&quot;:&quot;Community resources&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;community&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/community&quot;},{&quot;title&quot;:&quot;Custom Tools and Prompts&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;custom_tools&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/custom_tools&quot;},{&quot;title&quot;:&quot;Troubleshoot&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;troubleshooting&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/troubleshooting&quot;}]},{&quot;title&quot;:&quot;Performance and 
scalability&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Overview&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;performance&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/performance&quot;},{&quot;title&quot;:&quot;Efficient training techniques&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Methods and tools for efficient training on a single GPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_gpu_one&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_gpu_one&quot;},{&quot;title&quot;:&quot;Multiple GPUs and parallelism&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_gpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_gpu_many&quot;},{&quot;title&quot;:&quot;Efficient training on CPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_cpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_cpu&quot;},{&quot;title&quot;:&quot;Distributed CPU training&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_cpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_cpu_many&quot;},{&quot;title&quot;:&quot;Training on TPUs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_tpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_tpu&quot;},{&quot;title&quot;:&quot;Training on TPU with TensorFlow&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_tpu_tf&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_tpu_tf&quot;},{&quot;title&quot;:&quot;Training on Specialized Hardware&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_special&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_special&quot;},{&quot;title&quot;:&quot;Custom hardware for training&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_hardware&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_hardware&quot;},{&quot;title&quot;:&quot;Hyperparameter Search using Trainer API&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;hpo_train&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/hpo_train&quot;}]},{&quot;title&quot;:&quot;Optimizing inference&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Inference on CPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_cpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_cpu&quot;},{&quot;title&quot;:&quot;Inference on one GPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_gpu_one&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_gpu_one&quot;},{&quot;title&quot;:&quot;Inference on many GPUs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_gpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_gpu_many&quot;},{&quot;title&quot;:&quot;Inference on Specialized Hardware&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_special&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_special&quot;}]},{&quot;title&quot;:&quot;Instantiating a big 
model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;big_models&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/big_models&quot;},{&quot;title&quot;:&quot;Troubleshooting&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;debugging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/debugging&quot;},{&quot;title&quot;:&quot;XLA Integration for TensorFlow Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tf_xla&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tf_xla&quot;},{&quot;title&quot;:&quot;Optimize inference using `torch.compile()`&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_torch_compile&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_torch_compile&quot;}]},{&quot;title&quot;:&quot;Contribute&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;How to contribute to transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;contributing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/contributing&quot;},{&quot;title&quot;:&quot;How to add a model to 🤗 Transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_new_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_new_model&quot;},{&quot;title&quot;:&quot;How to convert a 🤗 Transformers model to TensorFlow?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_tensorflow_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_tensorflow_model&quot;},{&quot;title&quot;:&quot;How to add a pipeline to 🤗 Transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_new_pipeline&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_new_pipeline&quot;},{&quot;title&quot;:&quot;Testing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;testing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/testing&quot;},{&quot;title&quot;:&quot;Checks on a Pull Request&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pr_checks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pr_checks&quot;}]},{&quot;title&quot;:&quot;Conceptual guides&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Philosophy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;philosophy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/philosophy&quot;},{&quot;title&quot;:&quot;Glossary&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;glossary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/glossary&quot;},{&quot;title&quot;:&quot;What 🤗 Transformers can do&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;task_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/task_summary&quot;},{&quot;title&quot;:&quot;How 🤗 Transformers solve tasks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tasks_explained&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks_explained&quot;},{&quot;title&quot;:&quot;The Transformer model family&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_summary&quot;},{&quot;title&quot;:&quot;Summary of the tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tokenizer_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tokenizer_summary&quot;},{&quot;title&quot;:&quot;Attention 
mechanisms&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;attention&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/attention&quot;},{&quot;title&quot;:&quot;Padding and truncation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pad_truncation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pad_truncation&quot;},{&quot;title&quot;:&quot;BERTology&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;bertology&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/bertology&quot;},{&quot;title&quot;:&quot;Perplexity of fixed-length models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perplexity&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perplexity&quot;},{&quot;title&quot;:&quot;Pipelines for webserver inference&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pipeline_webserver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pipeline_webserver&quot;},{&quot;title&quot;:&quot;Model training anatomy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_memory_anatomy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_memory_anatomy&quot;}]},{&quot;title&quot;:&quot;API&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Main Classes&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Agents and Tools&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/agent&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/agent&quot;},{&quot;title&quot;:&quot;Auto Classes&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/auto&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/auto&quot;},{&quot;title&quot;:&quot;Callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/callback&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/callback&quot;},{&quot;title&quot;:&quot;Configuration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/configuration&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/configuration&quot;},{&quot;title&quot;:&quot;Data Collator&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/data_collator&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/data_collator&quot;},{&quot;title&quot;:&quot;Keras callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/keras_callbacks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/keras_callbacks&quot;},{&quot;title&quot;:&quot;Logging&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/logging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/logging&quot;},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/model&quot;},{&quot;title&quot;:&quot;Text 
Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/text_generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/text_generation&quot;},{&quot;title&quot;:&quot;ONNX&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/onnx&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/onnx&quot;},{&quot;title&quot;:&quot;Optimization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/optimizer_schedules&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/optimizer_schedules&quot;},{&quot;title&quot;:&quot;Model outputs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/output&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/output&quot;},{&quot;title&quot;:&quot;Pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/pipelines&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/pipelines&quot;},{&quot;title&quot;:&quot;Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/processors&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/processors&quot;},{&quot;title&quot;:&quot;Quantization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/quantization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/quantization&quot;},{&quot;title&quot;:&quot;Tokenizer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/tokenizer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/tokenizer&quot;},{&quot;title&quot;:&quot;Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/trainer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/trainer&quot;},{&quot;title&quot;:&quot;DeepSpeed Integration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/deepspeed&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/deepspeed&quot;},{&quot;title&quot;:&quot;Feature Extractor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/feature_extractor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/feature_extractor&quot;},{&quot;title&quot;:&quot;Image Processor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/image_processor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/image_processor&quot;}]},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Text 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALBERT&quot;,&quot;id&quot;:&quot;model_doc/albert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/albert&quot;},{&quot;title&quot;:&quot;BART&quot;,&quot;id&quot;:&quot;model_doc/bart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bart&quot;},{&quot;title&quot;:&quot;BARThez&quot;,&quot;id&quot;:&quot;model_doc/barthez&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/barthez&quot;},{&quot;title&quot;:&quot;BARTpho&quot;,&quot;id&quot;:&quot;model_doc/bartpho&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bartpho&quot;},{&quot;title&quot;:&quot;BERT&quot;,&quot;id&quot;:&quot;model_doc/bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert&quot;},{&quot;title&quot;:&quot;BertGeneration&quot;,&quot;id&quot;:&quot;model_doc/bert-generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-generation&quot;},{&quot;title&quot;:&quot;BertJapanese&quot;,&quot;id&quot;:&quot;model_doc/bert-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-japanese&quot;},{&quot;title&quot;:&quot;Bertweet&quot;,&quot;id&quot;:&quot;model_doc/bertweet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bertweet&quot;},{&quot;title&quot;:&quot;BigBird&quot;,&quot;id&quot;:&quot;model_doc/big_bird&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/big_bird&quot;},{&quot;title&quot;:&quot;BigBirdPegasus&quot;,&quot;id&quot;:&quot;model_doc/bigbird_pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bigbird_pegasus&quot;},{&quot;title&quot;:&quot;BioGpt&quot;,&quot;id&quot;:&quot;model_doc/biogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/biogpt&quot;},{&quot;title&quot;:&quot;Blenderbot&quot;,&quot;id&quot;:&quot;model_doc/blenderbot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot&quot;},{&quot;title&quot;:&quot;Blenderbot 
Small&quot;,&quot;id&quot;:&quot;model_doc/blenderbot-small&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot-small&quot;},{&quot;title&quot;:&quot;BLOOM&quot;,&quot;id&quot;:&quot;model_doc/bloom&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bloom&quot;},{&quot;title&quot;:&quot;BORT&quot;,&quot;id&quot;:&quot;model_doc/bort&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bort&quot;},{&quot;title&quot;:&quot;ByT5&quot;,&quot;id&quot;:&quot;model_doc/byt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/byt5&quot;},{&quot;title&quot;:&quot;CamemBERT&quot;,&quot;id&quot;:&quot;model_doc/camembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/camembert&quot;},{&quot;title&quot;:&quot;CANINE&quot;,&quot;id&quot;:&quot;model_doc/canine&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/canine&quot;},{&quot;title&quot;:&quot;CodeGen&quot;,&quot;id&quot;:&quot;model_doc/codegen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/codegen&quot;},{&quot;title&quot;:&quot;CodeLlama&quot;,&quot;id&quot;:&quot;model_doc/code_llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/code_llama&quot;},{&quot;title&quot;:&quot;ConvBERT&quot;,&quot;id&quot;:&quot;model_doc/convbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convbert&quot;},{&quot;title&quot;:&quot;CPM&quot;,&quot;id&quot;:&quot;model_doc/cpm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpm&quot;},{&quot;title&quot;:&quot;CPMANT&quot;,&quot;id&quot;:&quot;model_doc/cpmant&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpmant&quot;},{&quot;title&quot;:&quot;CTRL&quot;,&quot;id&quot;:&quot;model_doc/ctrl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ctrl&quot;},{&quot;title&quot;:&quot;DeBERTa&quot;,&quot;id&quot;:&quot;model_doc/deberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta&quot;},{&quot;title&quot;:&quot;DeBERTa-v2&quot;,&quot;id&quot;:&quot;model_doc/deberta-v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta-v2&quot;},{&quot;title&quot;:&quot;DialoGPT&quot;,&quot;id&quot;:&quot;model_doc/dialogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dialogpt&quot;},{&quot;title&quot;:&quot;DistilBERT&quot;,&quot;id&quot;:&quot;model_doc/distilbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/distilbert&quot;},{&quot;title&quot;:&quot;DPR&quot;,&quot;id&quot;:&quot;model_doc/dpr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpr&quot;},{&quot;title&quot;:&quot;ELECTRA&quot;,&quot;id&quot;:&quot;model_doc/electra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/electra&quot;},{&quot;title&quot;:&quot;Encoder Decoder 
Models&quot;,&quot;id&quot;:&quot;model_doc/encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encoder-decoder&quot;},{&quot;title&quot;:&quot;ERNIE&quot;,&quot;id&quot;:&quot;model_doc/ernie&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie&quot;},{&quot;title&quot;:&quot;ErnieM&quot;,&quot;id&quot;:&quot;model_doc/ernie_m&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie_m&quot;},{&quot;title&quot;:&quot;ESM&quot;,&quot;id&quot;:&quot;model_doc/esm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/esm&quot;},{&quot;title&quot;:&quot;Falcon&quot;,&quot;id&quot;:&quot;model_doc/falcon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/falcon&quot;},{&quot;title&quot;:&quot;FLAN-T5&quot;,&quot;id&quot;:&quot;model_doc/flan-t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-t5&quot;},{&quot;title&quot;:&quot;FLAN-UL2&quot;,&quot;id&quot;:&quot;model_doc/flan-ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-ul2&quot;},{&quot;title&quot;:&quot;FlauBERT&quot;,&quot;id&quot;:&quot;model_doc/flaubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flaubert&quot;},{&quot;title&quot;:&quot;FNet&quot;,&quot;id&quot;:&quot;model_doc/fnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fnet&quot;},{&quot;title&quot;:&quot;FSMT&quot;,&quot;id&quot;:&quot;model_doc/fsmt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fsmt&quot;},{&quot;title&quot;:&quot;Funnel Transformer&quot;,&quot;id&quot;:&quot;model_doc/funnel&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/funnel&quot;},{&quot;title&quot;:&quot;GPT&quot;,&quot;id&quot;:&quot;model_doc/openai-gpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/openai-gpt&quot;},{&quot;title&quot;:&quot;GPT Neo&quot;,&quot;id&quot;:&quot;model_doc/gpt_neo&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neo&quot;},{&quot;title&quot;:&quot;GPT NeoX&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox&quot;},{&quot;title&quot;:&quot;GPT NeoX Japanese&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox_japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese&quot;},{&quot;title&quot;:&quot;GPT-J&quot;,&quot;id&quot;:&quot;model_doc/gptj&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptj&quot;},{&quot;title&quot;:&quot;GPT2&quot;,&quot;id&quot;:&quot;model_doc/gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt2&quot;},{&quot;title&quot;:&quot;GPTBigCode&quot;,&quot;id&quot;:&quot;model_doc/gpt_bigcode&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode&quot;},{&quot;title&quot;:&quot;GPTSAN 
Japanese&quot;,&quot;id&quot;:&quot;model_doc/gptsan-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese&quot;},{&quot;title&quot;:&quot;GPTSw3&quot;,&quot;id&quot;:&quot;model_doc/gpt-sw3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt-sw3&quot;},{&quot;title&quot;:&quot;HerBERT&quot;,&quot;id&quot;:&quot;model_doc/herbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/herbert&quot;},{&quot;title&quot;:&quot;I-BERT&quot;,&quot;id&quot;:&quot;model_doc/ibert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ibert&quot;},{&quot;title&quot;:&quot;Jukebox&quot;,&quot;id&quot;:&quot;model_doc/jukebox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/jukebox&quot;},{&quot;title&quot;:&quot;LED&quot;,&quot;id&quot;:&quot;model_doc/led&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/led&quot;},{&quot;title&quot;:&quot;LLaMA&quot;,&quot;id&quot;:&quot;model_doc/llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama&quot;},{&quot;title&quot;:&quot;Llama2&quot;,&quot;id&quot;:&quot;model_doc/llama2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama2&quot;},{&quot;title&quot;:&quot;Longformer&quot;,&quot;id&quot;:&quot;model_doc/longformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longformer&quot;},{&quot;title&quot;:&quot;LongT5&quot;,&quot;id&quot;:&quot;model_doc/longt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longt5&quot;},{&quot;title&quot;:&quot;LUKE&quot;,&quot;id&quot;:&quot;model_doc/luke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/luke&quot;},{&quot;title&quot;:&quot;M2M100&quot;,&quot;id&quot;:&quot;model_doc/m2m_100&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/m2m_100&quot;},{&quot;title&quot;:&quot;MarianMT&quot;,&quot;id&quot;:&quot;model_doc/marian&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/marian&quot;},{&quot;title&quot;:&quot;MarkupLM&quot;,&quot;id&quot;:&quot;model_doc/markuplm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/markuplm&quot;},{&quot;title&quot;:&quot;MBart and 
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot;PLBart&quot;,&quot;id&quot;
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/
# Question answering

Question answering tasks return an answer given a question. If you’ve ever asked a virtual assistant like Alexa, Siri or Google what the weather is, then you’ve used a question answering model before. There are two common types of question answering tasks:

- Extractive: extract the answer from the given context.
- Abstractive: generate an answer from the context that correctly answers the question.

This guide will show you how to:

1. Finetune [DistilBERT](https://huggingface.co/distilbert-base-uncased) on the [SQuAD](https://huggingface.co/datasets/squad) dataset for extractive question answering.
2. Use your finetuned model for inference.

The task illustrated in this tutorial is supported by the following model architectures:

[ALBERT](../model_doc/albert), [BART](../model_doc/bart), [BERT](../model_doc/bert), [BigBird](../model_doc/big_bird), [BigBird-Pegasus](../model_doc/bigbird_pegasus), [BLOOM](../model_doc/bloom), [CamemBERT](../model_doc/camembert), [CANINE](../model_doc/canine), [ConvBERT](../model_doc/convbert), [Data2VecText](../model_doc/data2vec-text), [DeBERTa](../model_doc/deberta), [DeBERTa-v2](../model_doc/deberta-v2), [DistilBERT](../model_doc/distilbert), [ELECTRA](../model_doc/electra), [ERNIE](../model_doc/ernie), [ErnieM](../model_doc/ernie_m), [Falcon](../model_doc/falcon), [FlauBERT](../model_doc/flaubert), [FNet](../model_doc/fnet), [Funnel Transformer](../model_doc/funnel), [OpenAI GPT-2](../model_doc/gpt2), [GPT Neo](../model_doc/gpt_neo), [GPT NeoX](../model_doc/gpt_neox), [GPT-J](../model_doc/gptj), [I-BERT](../model_doc/ibert), [LayoutLMv2](../model_doc/layoutlmv2), [LayoutLMv3](../model_doc/layoutlmv3), [LED](../model_doc/led), [LiLT](../model_doc/lilt), [Longformer](../model_doc/longformer), [LUKE](../model_doc/luke), [LXMERT](../model_doc/lxmert), [MarkupLM](../model_doc/markuplm), [mBART](../model_doc/mbart), [MEGA](../model_doc/mega), [Megatron-BERT](../model_doc/megatron-bert), [MobileBERT](../model_doc/mobilebert), [MPNet](../model_doc/mpnet), [MPT](../model_doc/mpt), [MRA](../model_doc/mra), [MT5](../model_doc/mt5), [MVP](../model_doc/mvp), [Nezha](../model_doc/nezha), [Nyströmformer](../model_doc/nystromformer), [OPT](../model_doc/opt), [QDQBert](../model_doc/qdqbert), [Reformer](../model_doc/reformer), [RemBERT](../model_doc/rembert), [RoBERTa](../model_doc/roberta), [RoBERTa-PreLayerNorm](../model_doc/roberta-prelayernorm), [RoCBert](../model_doc/roc_bert), [RoFormer](../model_doc/roformer), [Splinter](../model_doc/splinter), [SqueezeBERT](../model_doc/squeezebert), [T5](../model_doc/t5), [UMT5](../model_doc/umt5), [XLM](../model_doc/xlm), [XLM-RoBERTa](../model_doc/xlm-roberta), [XLM-RoBERTa-XL](../model_doc/xlm-roberta-xl), [XLNet](../model_doc/xlnet), [X-MOD](../model_doc/xmod), [YOSO](../model_doc/yoso)
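Once you have 🤗 Transformers installed (see below), a quick way to see extractive question answering in action is the `question-answering` pipeline. The sketch below uses a publicly available SQuAD-finetuned checkpoint purely as an illustration; any extractive QA model from the Hub works, and the answer and score you get back depend on the model:

```py
>>> from transformers import pipeline

>>> # Illustrative only: the checkpoint below is one example of a SQuAD-finetuned model
>>> question_answerer = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")
>>> question_answerer(
...     question="What does an extractive model return?",
...     context="Extractive question answering models return a span of text taken directly from the context.",
... )
```

The pipeline returns a dictionary with the predicted answer span, its score, and the character positions of the span in the context.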
href="../model_doc/gpt2">OpenAI GPT-2</a>, <a href="../model_doc/gpt_neo">GPT Neo</a>, <a href="../model_doc/gpt_neox">GPT NeoX</a>, <a href="../model_doc/gptj">GPT-J</a>, <a href="../model_doc/ibert">I-BERT</a>, <a href="../model_doc/layoutlmv2">LayoutLMv2</a>, <a href="../model_doc/layoutlmv3">LayoutLMv3</a>, <a href="../model_doc/led">LED</a>, <a href="../model_doc/lilt">LiLT</a>, <a href="../model_doc/longformer">Longformer</a>, <a href="../model_doc/luke">LUKE</a>, <a href="../model_doc/lxmert">LXMERT</a>, <a href="../model_doc/markuplm">MarkupLM</a>, <a href="../model_doc/mbart">mBART</a>, <a href="../model_doc/mega">MEGA</a>, <a href="../model_doc/megatron-bert">Megatron-BERT</a>, <a href="../model_doc/mobilebert">MobileBERT</a>, <a href="../model_doc/mpnet">MPNet</a>, <a href="../model_doc/mpt">MPT</a>, <a href="../model_doc/mra">MRA</a>, <a href="../model_doc/mt5">MT5</a>, <a href="../model_doc/mvp">MVP</a>, <a href="../model_doc/nezha">Nezha</a>, <a href="../model_doc/nystromformer">Nyströmformer</a>, <a href="../model_doc/opt">OPT</a>, <a href="../model_doc/qdqbert">QDQBert</a>, <a href="../model_doc/reformer">Reformer</a>, <a href="../model_doc/rembert">RemBERT</a>, <a href="../model_doc/roberta">RoBERTa</a>, <a href="../model_doc/roberta-prelayernorm">RoBERTa-PreLayerNorm</a>, <a href="../model_doc/roc_bert">RoCBert</a>, <a href="../model_doc/roformer">RoFormer</a>, <a href="../model_doc/splinter">Splinter</a>, <a href="../model_doc/squeezebert">SqueezeBERT</a>, <a href="../model_doc/t5">T5</a>, <a href="../model_doc/umt5">UMT5</a>, <a href="../model_doc/xlm">XLM</a>, <a href="../model_doc/xlm-roberta">XLM-RoBERTa</a>, <a href="../model_doc/xlm-roberta-xl">XLM-RoBERTa-XL</a>, <a href="../model_doc/xlnet">XLNet</a>, <a href="../model_doc/xmod">X-MOD</a>, <a href="../model_doc/yoso">YOSO</a></p></div> <p data-svelte-h="svelte-1c9nexd">Before you begin, make sure you have all the necessary libraries installed:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class="">pip install transformers datasets evaluate</pre></div> <p data-svelte-h="svelte-k76o1m">We encourage you to login to your Hugging Face account so you can upload and share your model with the community. 
When prompted, enter your token to login:

```py
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```

## Load SQuAD dataset

Start by loading a smaller subset of the SQuAD dataset from the 🤗 Datasets library.
This’ll give you a chance to experiment and make sure everything works before spending more time training on the full dataset.

```py
>>> from datasets import load_dataset

>>> squad = load_dataset("squad", split="train[:5000]")
```

Split the dataset’s `train` split into a train and test set with the [train_test_split](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.train_test_split) method:

```py
>>> squad = squad.train_test_split(test_size=0.2)
```

Then take a look at an example:
text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>squad[<span class="hljs-string">"train"</span>][<span class="hljs-number">0</span>] {<span class="hljs-string">'answers'</span>: {<span class="hljs-string">'answer_start'</span>: [<span class="hljs-number">515</span>], <span class="hljs-string">'text'</span>: [<span class="hljs-string">'Saint Bernadette Soubirous'</span>]}, <span class="hljs-string">'context'</span>: <span class="hljs-string">'Architecturally, the school has a Catholic character. Atop the Main Building\'s gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend "Venite Ad Me Omnes". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. 
There are several important fields here:

- `answers`: the starting location of the answer token and the answer text.
- `context`: background information from which the model needs to extract the answer.
- `question`: the question a model should answer.
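The `answer_start` value is a character offset into `context`, and together with the length of the answer text it pins down the exact answer span. As a quick sanity check (a sketch based on the example above, not part of the preprocessing itself), you can slice the span back out of the context:

```py
>>> # `answer_start` is a character offset, so slicing the context with it and the
>>> # answer length should recover exactly the answer text
>>> example = squad["train"][0]
>>> start_char = example["answers"]["answer_start"][0]
>>> end_char = start_char + len(example["answers"]["text"][0])
>>> example["context"][start_char:end_char] == example["answers"]["text"][0]
True
```

This character-to-span relationship is exactly what the preprocessing function below maps onto token positions.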
## Preprocess

The next step is to load a DistilBERT tokenizer to process the `question` and `context` fields:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
```

There are a few preprocessing steps particular to question answering tasks you should be aware of:

1. Some examples in a dataset may have a very long `context` that exceeds the maximum input length of the model. To deal with longer sequences, truncate only the `context` by setting `truncation="only_second"`.
2. Next, map the start and end positions of the answer to the original `context` by setting `return_offset_mapping=True`.
3. With the mapping in hand, you can find the start and end tokens of the answer. Use the `sequence_ids` method to find which part of the offset corresponds to the `question` and which corresponds to the `context`, as shown in the quick check below.
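To see what these two pieces of information look like, here is a small sketch (not part of the original recipe) that tokenizes a single question/context pair; the exact token boundaries depend on the tokenizer and example you use:

```py
>>> example = squad["train"][0]
>>> encoded = tokenizer(
...     example["question"],
...     example["context"],
...     truncation="only_second",
...     return_offsets_mapping=True,
... )
>>> # sequence_ids: None for special tokens, 0 for question tokens, 1 for context tokens
>>> encoded.sequence_ids()[:8]
>>> # offset_mapping: (start_char, end_char) spans pointing back into the original strings
>>> encoded["offset_mapping"][:8]
```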
Here is how you can create a function to truncate and map the start and end tokens of the `answer` to the `context`:

```py
>>> def preprocess_function(examples):
...     questions = [q.strip() for q in examples["question"]]
...     inputs = tokenizer(
...         questions,
...         examples["context"],
...         max_length=384,
...         truncation="only_second",
...         return_offsets_mapping=True,
...         padding="max_length",
...     )

...     offset_mapping = inputs.pop("offset_mapping")
...     answers = examples["answers"]
...     start_positions = []
...     end_positions = []

...     for i, offset in enumerate(offset_mapping):
...         answer = answers[i]
...         start_char = answer["answer_start"][0]
...         end_char = answer["answer_start"][0] + len(answer["text"][0])
...         sequence_ids = inputs.sequence_ids(i)

...         # Find the start and end of the context
...         idx = 0
...         while sequence_ids[idx] != 1:
...             idx += 1
...         context_start = idx
...         while sequence_ids[idx] == 1:
...             idx += 1
...         context_end = idx - 1

...         # If the answer is not fully inside the context, label it (0, 0)
...         if offset[context_start][0] > end_char or offset[context_end][1] < start_char:
...             start_positions.append(0)
...             end_positions.append(0)
...         else:
...             # Otherwise it's the start and end token positions
...             idx = context_start
...             while idx <= context_end and offset[idx][0] <= start_char:
...                 idx += 1
...             start_positions.append(idx - 1)

...             idx = context_end
...             while idx >= context_start and offset[idx][1] >= end_char:
...                 idx -= 1
...             end_positions.append(idx + 1)

...     inputs["start_positions"] = start_positions
...     inputs["end_positions"] = end_positions
...     return inputs
```
To apply the preprocessing function over the entire dataset, use the 🤗 Datasets [map](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.map) function. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once. Remove any columns you don’t need:

```py
>>> tokenized_squad = squad.map(preprocess_function, batched=True, remove_columns=squad["train"].column_names)
```
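Before moving on, it can help to spot-check that the computed `start_positions` and `end_positions` really point at the answer tokens. This is only an inspection sketch, not a step from the recipe; decoding the labeled span should roughly reproduce the answer text, up to tokenization artifacts:

```py
>>> features = tokenized_squad["train"][0]
>>> start, end = features["start_positions"], features["end_positions"]
>>> # Decode the tokens between the labeled start and end positions (inclusive).
>>> # Examples whose answer fell outside the truncated context are labeled (0, 0),
>>> # in which case this just returns the [CLS] token.
>>> tokenizer.decode(features["input_ids"][start : end + 1])
```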
Now create a batch of examples using [DefaultDataCollator](/docs/transformers/v4.34.0/en/main_classes/data_collator#transformers.DefaultDataCollator). Unlike other data collators in 🤗 Transformers, the [DefaultDataCollator](/docs/transformers/v4.34.0/en/main_classes/data_collator#transformers.DefaultDataCollator) does not apply any additional preprocessing such as padding.

**Pytorch**
```py
>>> from transformers import DefaultDataCollator

>>> data_collator = DefaultDataCollator()
```

**TensorFlow**
d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> DefaultDataCollator <span class="hljs-meta">&gt;&gt;&gt; </span>data_collator = DefaultDataCollator(return_tensors=<span class="hljs-string">"tf"</span>)</pre></div></div></div> </div> <h2 class="relative group"><a id="train" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#train"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-5arm0l">Train</span></h2> <div class="space-y-10 py-6 2xl:py-8 2xl:-mx-4"><div class="border border-gray-200 rounded-xl px-4 relative"><div class="flex h-[22px] mt-[-12.5px] justify-between leading-none"><div class="flex px-1 items-center space-x-1 bg-white dark:bg-gray-950"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><defs><clipPath id="a"><rect x="3.05" y="0.5" width="25.73" height="31" fill="none"></rect></clipPath></defs><g clip-path="url(#a)"><path d="M24.94,9.51a12.81,12.81,0,0,1,0,18.16,12.68,12.68,0,0,1-18,0,12.81,12.81,0,0,1,0-18.16l9-9V5l-.84.83-6,6a9.58,9.58,0,1,0,13.55,0ZM20.44,9a1.68,1.68,0,1,1,1.67-1.67A1.68,1.68,0,0,1,20.44,9Z" fill="#ee4c2c"></path></g></svg> <span>Pytorch</span></div> <div class="cursor-pointer flex items-center justify-center space-x-1 text-sm px-2 bg-white dark:bg-gray-950 hover:underline leading-none"><svg class="" width="0.9em" height="0.9em" viewBox="0 0 10 9" fill="currentColor" xmlns="http://www.w3.org/2000/svg"><path d="M1.39125 1.9725L0.0883333 0.669997L0.677917 0.0804138L8.9275 8.33041L8.33792 8.91958L6.95875 7.54041C6.22592 8.00523 5.37572 8.25138 4.50792 8.25C2.26125 8.25 0.392083 6.63333 0 4.5C0.179179 3.52946 0.667345 2.64287 1.39167 1.9725H1.39125ZM5.65667 6.23833L5.04667 5.62833C4.81335 5.73996 4.55116 5.77647 4.29622 5.73282C4.04129 5.68918 3.80617 5.56752 3.62328 
If you aren’t familiar with finetuning a model with the [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer), take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!

You’re ready to start training your model now! Load DistilBERT with [AutoModelForQuestionAnswering](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoModelForQuestionAnswering):

```py
>>> from transformers import AutoModelForQuestionAnswering, TrainingArguments, Trainer

>>> model = AutoModelForQuestionAnswering.from_pretrained("distilbert-base-uncased")
```

At this point, only three steps remain:

1. Define your training hyperparameters in [TrainingArguments](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.TrainingArguments). The only required parameter is `output_dir` which specifies where to save your model.
You’ll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model).
2. Pass the training arguments to [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) along with the model, dataset, tokenizer, and data collator.
3. Call [train()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.train) to finetune your model.

```py
>>> training_args = TrainingArguments(
...     output_dir="my_awesome_qa_model",
...     evaluation_strategy="epoch",
...     learning_rate=2e-5,
...     per_device_train_batch_size=16,
...     per_device_eval_batch_size=16,
...     num_train_epochs=3,
...     weight_decay=0.01,
...     push_to_hub=True,
... )

>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=tokenized_squad["train"],
...     eval_dataset=tokenized_squad["test"],
...     tokenizer=tokenizer,
...     data_collator=data_collator,
... )

>>> trainer.train()
```
Once training is completed, share your model to the Hub with the [push_to_hub()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.push_to_hub) method so everyone can use your model:

```py
>>> trainer.push_to_hub()
```

**TensorFlow**
If you aren’t familiar with finetuning a model with Keras, take a look at the basic tutorial [here](../training#train-a-tensorflow-model-with-keras)!

To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:

```py
>>> from transformers import create_optimizer

>>> batch_size = 16
>>> num_epochs = 2
>>> total_train_steps = (len(tokenized_squad["train"]) // batch_size) * num_epochs
>>> optimizer, schedule = create_optimizer(
...     init_lr=2e-5,
...     num_warmup_steps=0,
...     num_train_steps=total_train_steps,
... )
```
Then you can load DistilBERT with [TFAutoModelForQuestionAnswering](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.TFAutoModelForQuestionAnswering):

```py
>>> from transformers import TFAutoModelForQuestionAnswering

>>> model = TFAutoModelForQuestionAnswering.from_pretrained("distilbert-base-uncased")
```

Convert your datasets to the `tf.data.Dataset` format with [prepare_tf_dataset()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel.prepare_tf_dataset):

```py
>>> tf_train_set = model.prepare_tf_dataset(
...     tokenized_squad["train"],
...     shuffle=True,
...     batch_size=16,
...     collate_fn=data_collator,
... )

>>> tf_validation_set = model.prepare_tf_dataset(
...     tokenized_squad["test"],
...     shuffle=False,
...     batch_size=16,
...     collate_fn=data_collator,
... )
```
</span> collate_fn=data_collator, <span class="hljs-meta">... </span>) <span class="hljs-meta">&gt;&gt;&gt; </span>tf_validation_set = model.prepare_tf_dataset( <span class="hljs-meta">... </span> tokenized_squad[<span class="hljs-string">"test"</span>], <span class="hljs-meta">... </span> shuffle=<span class="hljs-literal">False</span>, <span class="hljs-meta">... </span> batch_size=<span class="hljs-number">16</span>, <span class="hljs-meta">... </span> collate_fn=data_collator, <span class="hljs-meta">... </span>)</pre></div> <p data-svelte-h="svelte-rt1r5v">Configure the model for training with <a href="https://keras.io/api/models/model_training_apis/#compile-method" rel="nofollow"><code>compile</code></a>:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> tensorflow <span class="hljs-keyword">as</span> tf <span class="hljs-meta">&gt;&gt;&gt; </span>model.<span class="hljs-built_in">compile</span>(optimizer=optimizer)</pre></div> <p data-svelte-h="svelte-1htcxoe">The last thing to setup before you start training is to provide a way to push your model to the Hub. 
This can be done by specifying where to push your model and tokenizer in the <a href="/docs/transformers/v4.34.0/en/main_classes/keras_callbacks#transformers.PushToHubCallback">PushToHubCallback</a>:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers.keras_callbacks <span class="hljs-keyword">import</span> PushToHubCallback <span class="hljs-meta">&gt;&gt;&gt; </span>callback = PushToHubCallback( <span class="hljs-meta">... </span> output_dir=<span class="hljs-string">"my_awesome_qa_model"</span>, <span class="hljs-meta">... </span> tokenizer=tokenizer, <span class="hljs-meta">... </span>)</pre></div> <p data-svelte-h="svelte-1pfsro2">Finally, you’re ready to start training your model! 
Call <a href="https://keras.io/api/models/model_training_apis/#fit-method" rel="nofollow"><code>fit</code></a> with your training and validation datasets, the number of epochs, and your callback to finetune the model:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=<span class="hljs-number">3</span>, callbacks=[callback])</pre></div> <p data-svelte-h="svelte-2s71om">Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!</p></div></div> </div> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-14amlb3">For a more in-depth example of how to finetune a model for question answering, take a look at the corresponding <a href="https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb" rel="nofollow">PyTorch notebook</a> or <a href="https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb" rel="nofollow">TensorFlow notebook</a>.</p></div> <h2 class="relative group"><a id="evaluate" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#evaluate"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-sh8s6s">Evaluate</span></h2> <p data-svelte-h="svelte-p8r7j5">Evaluation for 
question answering requires a significant amount of postprocessing. To avoid taking up too much of your time, this guide skips the evaluation step. The <a href="/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer">Trainer</a> still calculates the evaluation loss during training so you’re not completely in the dark about your model’s performance.</p> <p data-svelte-h="svelte-ktob7c">If have more time and you’re interested in how to evaluate your model for question answering, take a look at the <a href="https://huggingface.co/course/chapter7/7?fw=pt#postprocessing" rel="nofollow">Question answering</a> chapter from the 🤗 Hugging Face Course!</p> <h2 class="relative group"><a id="inference" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#inference"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-199uz7g">Inference</span></h2> <p data-svelte-h="svelte-633ppb">Great, now that you’ve finetuned a model, you can use it for inference!</p> <p data-svelte-h="svelte-1wy7p4p">Come up with a question and some context you’d like the model to predict:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>question = <span class="hljs-string">"How many programming languages does BLOOM support?"</span> <span class="hljs-meta">&gt;&gt;&gt; </span>context = <span class="hljs-string">"BLOOM has 176 billion parameters and can generate text in 46 languages natural languages and 13 programming languages."</span></pre></div> <p data-svelte-h="svelte-19nplva">The simplest way to 
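If you want a quick quantitative sanity check without building the full postprocessing pipeline, one minimal sketch is to score a handful of predicted answers with the 🤗 Evaluate library’s `squad` metric. The example id and answer strings below are illustrative placeholders, not outputs produced earlier in this guide:

```
>>> import evaluate

>>> squad_metric = evaluate.load("squad")
>>> # Each prediction pairs an example id with the predicted answer text;
>>> # each reference carries the gold answers for the same id.
>>> predictions = [{"id": "0001", "prediction_text": "13"}]
>>> references = [{"id": "0001", "answers": {"text": ["13"], "answer_start": [110]}}]
>>> squad_metric.compute(predictions=predictions, references=references)
{'exact_match': 100.0, 'f1': 100.0}
```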
## Inference

Great, now that you’ve finetuned a model, you can use it for inference!

Come up with a question and some context you’d like the model to predict:

```
>>> question = "How many programming languages does BLOOM support?"
>>> context = "BLOOM has 176 billion parameters and can generate text in 46 languages natural languages and 13 programming languages."
```

The simplest way to try out your finetuned model for inference is to use it in a [pipeline()](/docs/transformers/v4.34.0/en/main_classes/pipelines#transformers.pipeline). Instantiate a `pipeline` for question answering with your model, and pass your text to it:

```
>>> from transformers import pipeline

>>> question_answerer = pipeline("question-answering", model="my_awesome_qa_model")
>>> question_answerer(question=question, context=context)
{'score': 0.2058267742395401,
 'start': 10,
 'end': 95,
 'answer': '176 billion parameters and can generate text in 46 languages natural languages and 13'}
```

You can also manually replicate the results of the `pipeline` if you’d like.

In PyTorch, tokenize the text and return PyTorch tensors:

```
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_qa_model")
>>> inputs = tokenizer(question, context, return_tensors="pt")
```

Pass your inputs to the model and return the `logits`:

```
>>> import torch
>>> from transformers import AutoModelForQuestionAnswering

>>> model = AutoModelForQuestionAnswering.from_pretrained("my_awesome_qa_model")
>>> with torch.no_grad():
...     outputs = model(**inputs)
```

Get the highest probability from the model output for the start and end positions:

```
>>> answer_start_index = outputs.start_logits.argmax()
>>> answer_end_index = outputs.end_logits.argmax()
```

Decode the predicted tokens to get the answer:

```
>>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
>>> tokenizer.decode(predict_answer_tokens)
'176 billion parameters and can generate text in 46 languages natural languages and 13'
```

In TensorFlow, tokenize the text and return TensorFlow tensors:

```
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_qa_model")
>>> inputs = tokenizer(question, context, return_tensors="tf")
```

Pass your inputs to the model and return the `logits`:

```
>>> from transformers import TFAutoModelForQuestionAnswering

>>> model = TFAutoModelForQuestionAnswering.from_pretrained("my_awesome_qa_model")
>>> outputs = model(**inputs)
```

Get the highest probability from the model output for the start and end positions:

```
>>> import tensorflow as tf

>>> answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
>>> answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
```

Decode the predicted tokens to get the answer:

```
>>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
>>> tokenizer.decode(predict_answer_tokens)
'176 billion parameters and can generate text in 46 languages natural languages and 13'
```
class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/tasks/token_classification" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>Token classification</a> <a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/tasks/language_modeling" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">Causal language modeling<span class="ml-2 translate-y-px">→</span></a></div></div></div> <div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" data-props="{&quot;chapter&quot;:{&quot;title&quot;:&quot;Question answering&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;question-answering&quot;,&quot;url&quot;:&quot;#question-answering&quot;,&quot;sections&quot;:[{&quot;title&quot;:&quot;Load SQuAD dataset&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;load-squad-dataset&quot;,&quot;url&quot;:&quot;#load-squad-dataset&quot;},{&quot;title&quot;:&quot;Preprocess&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;preprocess&quot;,&quot;url&quot;:&quot;#preprocess&quot;},{&quot;title&quot;:&quot;Train&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;train&quot;,&quot;url&quot;:&quot;#train&quot;},{&quot;title&quot;:&quot;Evaluate&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;evaluate&quot;,&quot;url&quot;:&quot;#evaluate&quot;},{&quot;title&quot;:&quot;Inference&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;inference&quot;,&quot;url&quot;:&quot;#inference&quot;}]}}" data-target="SubSideMenu"><nav class="hidden h-screen w-[270px] flex-none flex-col space-y-3 overflow-y-auto break-words border-l pt-24 pl-6 pr-10 pb-16 text-sm lg:flex 2xl:w-[305px]"><a href="#question-answering" class=" text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-question-answering"><wbr>Question answering</a> <a href="#load-squad-dataset" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-load-squad-dataset"><wbr>Load S<wbr>QuA<wbr>D dataset</a> <a href="#preprocess" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-preprocess"><wbr>Preprocess</a> <a href="#train" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-train"><wbr>Train</a> <a href="#evaluate" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-evaluate"><wbr>Evaluate</a> <a href="#inference" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-inference"><wbr>Inference</a> </nav></div></div></div> <div id="doc-footer"></div></main> </div> <script> import("/front/build/kube-5e23f38/index.js"); window.moonSha = "kube-5e23f38/"; window.hubConfig = JSON.parse(`{"features":{"signupDisabled":false},"sshGitUrl":"git@hf.co","moonHttpUrl":"https://huggingface.co","captchaApiKey":"bd5f2066-93dc-4bdd-a64b-a24646ca3859","stripePublicKey":"pk_live_x2tdjFXBCvXo2FFmMybezpeM00J6gPCAAc","environment":"production","userAgent":"HuggingFace (production)"}`); </script> <!-- Stripe --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { 
const script = document.createElement("script"); script.src = "https://js.stripe.com/v3/"; script.async = true; document.head.appendChild(script); } </script> <!-- Google analytics v4 --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL"; script.async = true; document.head.appendChild(script); window.dataLayer = window.dataLayer || []; function gtag() { if (window.dataLayer !== undefined) { window.dataLayer.push(arguments); } } gtag("js", new Date()); gtag("config", "G-8Q63TH4CSL", { page_path: "/docs/transformers/v4.34.0/en/tasks/question_answering" }); /// ^ See https://developers.google.com/analytics/devguides/collection/gtagjs/pages gtag("consent", "default", { ad_storage: "denied", analytics_storage: "denied" }); /// ^ See https://developers.google.com/tag-platform/gtagjs/reference#consent /// TODO: ask the user for their consent and update this with gtag('consent', 'update') } </script> <!-- Google Analytics v3 (deprecated) --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { (function (i, s, o, g, r, a, m) { i["GoogleAnalyticsObject"] = r; (i[r] = i[r] || function () { (i[r].q = i[r].q || []).push(arguments); }), (i[r].l = 1 * new Date()); (a = s.createElement(o)), (m = s.getElementsByTagName(o)[0]); a.async = 1; a.src = g; m.parentNode.insertBefore(a, m); })(window, document, "script", "https://www.google-analytics.com/analytics.js", "ganalytics"); ganalytics("create", "UA-83738774-2", "auto"); ganalytics("send", "pageview", "/docs/transformers/v4.34.0/en/tasks/question_answering"); } </script> <iframe name="__privateStripeMetricsController4770" frameborder="0" allowtransparency="true" scrolling="no" role="presentation" allow="payment *" src="https://js.stripe.com/v3/m-outer-27c67c0d52761104439bb051c7856ab1.html#url=https%3A%2F%2Fhuggingface.co%2Fdocs%2Ftransformers%2Fv4.34.0%2Fen%2Ftasks%2Fquestion_answering&amp;title=Question%20answering&amp;referrer=&amp;muid=NA&amp;sid=NA&amp;version=6&amp;preview=false" aria-hidden="true" tabindex="-1" style="border: none !important; margin: 0px !important; padding: 0px !important; width: 1px !important; min-width: 100% !important; overflow: hidden !important; display: block !important; visibility: hidden !important; position: fixed !important; height: 1px !important; pointer-events: none !important; user-select: none !important;"></iframe></body></html>
2023-10-05T13:33:47.485Z
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/vitmae
The documentation page MODEL\_DOC/VITMAE doesn’t exist in v4.34.0, but exists on the main version. Click [here](/docs/transformers/main/en/model_doc/vitmae) to redirect to the main version of the documentation.
2023-10-05T13:33:47.648Z
https://huggingface.co/docs/transformers/v4.34.0/en/glossary.html#feed-forward-chunking
The documentation page GLOSSARY.HTML doesn’t exist in v4.34.0, but exists on the main version. Click [here](/docs/transformers/main/en/glossary.html) to redirect to the main version of the documentation.
2023-10-05T13:33:47.754Z
https://huggingface.co/docs/transformers/v4.34.0/en/examples
The documentation page EXAMPLES doesn’t exist in v4.34.0, but exists on the main version. Click [here](/docs/transformers/main/en/examples) to redirect to the main version of the documentation.
2023-10-05T13:33:47.885Z
Trajectory Transformer
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer
# Trajectory Transformer

This model is in maintenance mode only, so we won’t accept any new PRs changing its code. If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0. You can do so by running the following command: `pip install -U transformers==4.30.0`.

## Overview

The Trajectory Transformer model was proposed in [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine.

The abstract from the paper is the following:

_Reinforcement learning (RL) is typically concerned with estimating stationary policies or single-step models, leveraging the Markov property to factorize problems in time. However, we can also view RL as a generic sequence modeling problem, with the goal being to produce a sequence of actions that leads to a sequence of high rewards. Viewed in this way, it is tempting to consider whether high-capacity sequence prediction models that work well in other domains, such as natural-language processing, can also provide effective solutions to the RL problem. To this end, we explore how RL can be tackled with the tools of sequence modeling, using a Transformer architecture to model distributions over trajectories and repurposing beam search as a planning algorithm. Framing RL as sequence modeling problem simplifies a range of design decisions, allowing us to dispense with many of the components common in offline RL algorithms. We demonstrate the flexibility of this approach across long-horizon dynamics prediction, imitation learning, goal-conditioned RL, and offline RL. Further, we show that this approach can be combined with existing model-free algorithms to yield a state-of-the-art planner in sparse-reward, long-horizon tasks._

Tips:

This Transformer is used for deep reinforcement learning. To use it, you need to create sequences from actions, states and rewards from all previous timesteps. This model will treat all these elements together as one big sequence (a trajectory).
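The discretization logic lives in the original trajectory-transformer code base rather than in `transformers`, so the snippet below is only a rough sketch of how such a flattened trajectory sequence might be assembled. The `discretize` helper, the bin ranges, and the per-timestep token layout are assumptions for illustration, not part of the library API:

```
>>> import torch

>>> # Hypothetical helper (not part of transformers): map continuous values to integer bins in [0, vocab_size).
>>> def discretize(x, low, high, vocab_size=100):
...     bins = ((x - low) / (high - low) * (vocab_size - 1)).clamp(0, vocab_size - 1)
...     return bins.long()

>>> # Assumed per-timestep layout: observation tokens, then action tokens, then a reward token.
>>> obs_tokens = discretize(torch.randn(17), low=-5.0, high=5.0)  # observation_dim = 17
>>> act_tokens = discretize(torch.randn(6), low=-1.0, high=1.0)  # action_dim = 6
>>> rew_token = discretize(torch.tensor([1.0]), low=0.0, high=10.0)  # reward for this timestep

>>> timestep_tokens = torch.cat([obs_tokens, act_tokens, rew_token])
>>> trajectories = timestep_tokens.unsqueeze(0)  # shape (batch_size, sequence_length) = (1, 24)
```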
This model was contributed by [CarlCochet](https://huggingface.co/CarlCochet). The original code can be found [here](https://github.com/jannerm/trajectory-transformer).

## TrajectoryTransformerConfig

### class transformers.TrajectoryTransformerConfig

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/deprecated/trajectory_transformer/configuration_trajectory_transformer.py#L31)

( vocab\_size = 100 action\_weight = 5 reward\_weight = 1 value\_weight = 1 block\_size = 249 action\_dim = 6 observation\_dim = 17 transition\_dim = 25 n\_layer = 4 n\_head = 4 n\_embd = 128 embd\_pdrop = 0.1 attn\_pdrop = 0.1 resid\_pdrop = 0.1 learning\_rate = 0.0006 max\_position\_embeddings = 512 initializer\_range = 0.02 layer\_norm\_eps = 1e-12 kaiming\_initializer\_range = 1 use\_cache = True pad\_token\_id = 1 bos\_token\_id = 50256 eos\_token\_id = 50256 \*\*kwargs )

Parameters

- **vocab\_size** (`int`, _optional_, defaults to 100) — Vocabulary size of the TrajectoryTransformer model. Defines the number of different tokens that can be represented by the `trajectories` passed when calling [TrajectoryTransformerModel](/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer#transformers.TrajectoryTransformerModel).
- **action\_weight** (`int`, _optional_, defaults to 5) — Weight of the action in the loss function.
- **reward\_weight** (`int`, _optional_, defaults to 1) — Weight of the reward in the loss function.
- **value\_weight** (`int`, _optional_, defaults to 1) — Weight of the value in the loss function.
- **block\_size** (`int`, _optional_, defaults to 249) — Size of the blocks in the trajectory transformer.
- **action\_dim** (`int`, _optional_, defaults to 6) — Dimension of the action space.
- **observation\_dim** (`int`, _optional_, defaults to 17) — Dimension of the observation space.
- **transition\_dim** (`int`, _optional_, defaults to 25) — Dimension of the transition space.
- **n\_layer** (`int`, _optional_, defaults to 4) — Number of hidden layers in the Transformer encoder.
- **n\_head** (`int`, _optional_, defaults to 4) — Number of attention heads for each attention layer in the Transformer encoder.
- **n\_embd** (`int`, _optional_, defaults to 128) — Dimensionality of the embeddings and hidden states.
- **resid\_pdrop** (`float`, _optional_, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
- **embd\_pdrop** (`int`, _optional_, defaults to 0.1) — The dropout ratio for the embeddings.
- **attn\_pdrop** (`float`, _optional_, defaults to 0.1) — The dropout ratio for the attention.
- **hidden\_act** (`str` or `function`, _optional_, defaults to `"gelu"`) — The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported.
- **max\_position\_embeddings** (`int`, _optional_, defaults to 512) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
- **initializer\_range** (`float`, _optional_, defaults to 0.02) — The standard deviation of the truncated\_normal\_initializer for initializing all weight matrices.
- **layer\_norm\_eps** (`float`, _optional_, defaults to 1e-12) — The epsilon used by the layer normalization layers.
- **kaiming\_initializer\_range** (`float`, _optional_, defaults to 1) — A coefficient scaling the negative slope of the kaiming initializer rectifier for EinLinear layers.
- **use\_cache** (`bool`, _optional_, defaults to `True`) — Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`.

This is the configuration class to store the configuration of a [TrajectoryTransformerModel](/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer#transformers.TrajectoryTransformerModel). It is used to instantiate a TrajectoryTransformer model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the TrajectoryTransformer [CarlCochet/trajectory-transformer-halfcheetah-medium-v2](https://huggingface.co/CarlCochet/trajectory-transformer-halfcheetah-medium-v2) architecture.

Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information.

Example:

```
>>> from transformers import TrajectoryTransformerConfig, TrajectoryTransformerModel

>>> # Initialize a configuration with the default values
>>> configuration = TrajectoryTransformerConfig()

>>> # Initialize a model (with random weights) from that configuration
>>> model = TrajectoryTransformerModel(configuration)

>>> # Access the model configuration
>>> configuration = model.config
```

## TrajectoryTransformerModel

### class transformers.TrajectoryTransformerModel

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/deprecated/trajectory_transformer/modeling_trajectory_transformer.py#L407)

( config )

Parameters

- **config** ([TrajectoryTransformerConfig](/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer#transformers.TrajectoryTransformerConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

The bare TrajectoryTransformer Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. It implements the full GPT language model, with a context size of `block_size`.

#### forward

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/deprecated/trajectory_transformer/modeling_trajectory_transformer.py#L468)

( trajectories: typing.Optional\[torch.LongTensor\] = None past\_key\_values: typing.Optional\[typing.Tuple\[typing.Tuple\[torch.Tensor\]\]\] = None targets: typing.Optional\[torch.FloatTensor\] = None attention\_mask: typing.Optional\[torch.FloatTensor\] = None use\_cache: typing.Optional\[bool\] = None output\_attentions: typing.Optional\[bool\] = None output\_hidden\_states: typing.Optional\[bool\] = None return\_dict: typing.Optional\[bool\] = None ) → `transformers.models.deprecated.trajectory_transformer.modeling_trajectory_transformer.TrajectoryTransformerOutput` or `tuple(torch.FloatTensor)`

Parameters

- **trajectories** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Batch of trajectories, where a trajectory is a sequence of states, actions and rewards.
- **past\_key\_values** (`Tuple[Tuple[torch.Tensor]]` of length `config.n_layers`, _optional_) — Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see `past_key_values` output below). Can be used to speed up sequential decoding. The `input_ids` which have their past given to this model should not be passed as `input_ids` as they have already been computed.
- **targets** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, _optional_) — Desired targets used to compute the loss.
- **attention\_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, _optional_) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.
  [What are attention masks?](../glossary#attention-mask)
- **use\_cache** (`bool`, _optional_) — If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`).
- **output\_attentions** (`bool`, _optional_) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output\_hidden\_states** (`bool`, _optional_) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return\_dict** (`bool`, _optional_) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.

Returns

`transformers.models.deprecated.trajectory_transformer.modeling_trajectory_transformer.TrajectoryTransformerOutput` or `tuple(torch.FloatTensor)`

A `transformers.models.deprecated.trajectory_transformer.modeling_trajectory_transformer.TrajectoryTransformerOutput` or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([TrajectoryTransformerConfig](/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer#transformers.TrajectoryTransformerConfig)) and inputs.

- **loss** (`torch.FloatTensor` of shape `(1,)`, _optional_, returned when `labels` is provided) — Language modeling loss.
- **logits** (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
- **past\_key\_values** (`Tuple[Tuple[torch.Tensor]]`, _optional_, returned when `use_cache=True` is passed or when `config.use_cache=True`) — Tuple of length `config.n_layers`, containing tuples of tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)`. Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
- **hidden\_states** (`tuple(torch.FloatTensor)`, _optional_, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, _optional_, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. GPT2Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The [TrajectoryTransformerModel](/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer#transformers.TrajectoryTransformerModel) forward method overrides the `__call__` special method.

Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Examples:

```
>>> import numpy as np
>>> import torch
>>> from transformers import TrajectoryTransformerModel

>>> device = "cuda" if torch.cuda.is_available() else "cpu"

>>> model = TrajectoryTransformerModel.from_pretrained(
...     "CarlCochet/trajectory-transformer-halfcheetah-medium-v2"
... )
>>> model.to(device)
>>> model.eval()

>>> observations_dim, action_dim, batch_size = 17, 6, 256
>>> seq_length = observations_dim + action_dim + 1

>>> # Random integer trajectories of the expected shape (batch_size, seq_length), for illustration only.
>>> trajectories = torch.LongTensor([np.random.permutation(seq_length) for _ in range(batch_size)]).to(
...     device
... )
>>> targets = torch.LongTensor([np.random.permutation(seq_length) for _ in range(batch_size)]).to(device)

>>> outputs = model(
...     trajectories,
...     targets=targets,
...     use_cache=True,
...     output_attentions=True,
...     output_hidden_states=True,
...     return_dict=True,
... )
```
href="/pricing">Pricing</a></li> <li><div class="relative group"><button class="px-2 py-0.5 hover:text-gray-500 dark:hover:text-gray-600 flex items-center " type="button"><svg class="mr-1.5 text-gray-500 w-5 group-hover:text-gray-400 dark:text-gray-300 dark:group-hover:text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" viewBox="0 0 32 18" preserveAspectRatio="xMidYMid meet"><path fill-rule="evenodd" clip-rule="evenodd" d="M14.4504 3.30221C14.4504 2.836 14.8284 2.45807 15.2946 2.45807H28.4933C28.9595 2.45807 29.3374 2.836 29.3374 3.30221C29.3374 3.76842 28.9595 4.14635 28.4933 4.14635H15.2946C14.8284 4.14635 14.4504 3.76842 14.4504 3.30221Z" fill="currentColor"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M14.4504 9.00002C14.4504 8.53382 14.8284 8.15588 15.2946 8.15588H28.4933C28.9595 8.15588 29.3374 8.53382 29.3374 9.00002C29.3374 9.46623 28.9595 9.84417 28.4933 9.84417H15.2946C14.8284 9.84417 14.4504 9.46623 14.4504 9.00002Z" fill="currentColor"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M14.4504 14.6978C14.4504 14.2316 14.8284 13.8537 15.2946 13.8537H28.4933C28.9595 13.8537 29.3374 14.2316 29.3374 14.6978C29.3374 15.164 28.9595 15.542 28.4933 15.542H15.2946C14.8284 15.542 14.4504 15.164 14.4504 14.6978Z" fill="currentColor"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M1.94549 6.87377C2.27514 6.54411 2.80962 6.54411 3.13928 6.87377L6.23458 9.96907L9.32988 6.87377C9.65954 6.54411 10.194 6.54411 10.5237 6.87377C10.8533 7.20343 10.8533 7.73791 10.5237 8.06756L6.23458 12.3567L1.94549 8.06756C1.61583 7.73791 1.61583 7.20343 1.94549 6.87377Z" fill="currentColor"></path></svg> </button> </div></li> <li><hr class="h-5 w-0.5 border-none bg-gray-100 dark:bg-gray-800"></li> <li><a class="block cursor-pointer px-2 py-0.5 hover:text-gray-500 dark:hover:text-gray-400" href="/login">Log In</a></li> <li><a class="rounded-full border border-transparent bg-gray-900 px-3 py-1 leading-none text-white hover:border-black hover:bg-white hover:text-black" href="/join">Sign Up</a></li></ul></nav></div></header></div> <div class="SVELTE_HYDRATER contents" data-props="{}" data-target="GoogleAnalyticsTracker"></div> <div class="SVELTE_HYDRATER contents" data-props="{}" data-target="SSOBanner"></div> <main class="flex flex-1 flex-col"><div class="relative lg:flex"><div class="sticky top-0 z-20 self-start"><div class="SVELTE_HYDRATER contents" data-props="{&quot;chapters&quot;:[{&quot;title&quot;:&quot;Get started&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;🤗 Transformers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;index&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/index&quot;},{&quot;title&quot;:&quot;Quick tour&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;quicktour&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/quicktour&quot;},{&quot;title&quot;:&quot;Installation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;installation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/installation&quot;}]},{&quot;title&quot;:&quot;Tutorials&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Run inference with pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pipeline_tutorial&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pipeline_tutorial&quot;},{&quot;title&quot;:&quot;Write portable code with 
AutoClass&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;autoclass_tutorial&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/autoclass_tutorial&quot;},{&quot;title&quot;:&quot;Preprocess data&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;preprocessing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/preprocessing&quot;},{&quot;title&quot;:&quot;Fine-tune a pretrained model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;training&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/training&quot;},{&quot;title&quot;:&quot;Train with a script&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;run_scripts&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/run_scripts&quot;},{&quot;title&quot;:&quot;Set up distributed training with 🤗 Accelerate&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;accelerate&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/accelerate&quot;},{&quot;title&quot;:&quot;Load and train adapters with 🤗 PEFT&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;peft&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/peft&quot;},{&quot;title&quot;:&quot;Share your model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_sharing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_sharing&quot;},{&quot;title&quot;:&quot;Agents&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers_agents&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/transformers_agents&quot;},{&quot;title&quot;:&quot;Generation with LLMs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;llm_tutorial&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/llm_tutorial&quot;}]},{&quot;title&quot;:&quot;Task Guides&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Natural Language Processing&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Text classification&quot;,&quot;id&quot;:&quot;tasks/sequence_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/sequence_classification&quot;},{&quot;title&quot;:&quot;Token classification&quot;,&quot;id&quot;:&quot;tasks/token_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/token_classification&quot;},{&quot;title&quot;:&quot;Question answering&quot;,&quot;id&quot;:&quot;tasks/question_answering&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/question_answering&quot;},{&quot;title&quot;:&quot;Causal language modeling&quot;,&quot;id&quot;:&quot;tasks/language_modeling&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/language_modeling&quot;},{&quot;title&quot;:&quot;Masked language modeling&quot;,&quot;id&quot;:&quot;tasks/masked_language_modeling&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/masked_language_modeling&quot;},{&quot;title&quot;:&quot;Translation&quot;,&quot;id&quot;:&quot;tasks/translation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/translation&quot;},{&quot;title&quot;:&quot;Summarization&quot;,&quot;id&quot;:&quot;tasks/summarization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/summarization&quot;},{&quot;title&quot;:&quot;Multiple choice&quot;,&quot;id&quot;:&quot;tasks/multiple_choice&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/multiple_choice&quot;}]},{&quot;title&quot;:&quot;Audio&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio 
classification&quot;,&quot;id&quot;:&quot;tasks/audio_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/audio_classification&quot;},{&quot;title&quot;:&quot;Automatic speech recognition&quot;,&quot;id&quot;:&quot;tasks/asr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/asr&quot;}]},{&quot;title&quot;:&quot;Computer Vision&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Image classification&quot;,&quot;id&quot;:&quot;tasks/image_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/image_classification&quot;},{&quot;title&quot;:&quot;Semantic segmentation&quot;,&quot;id&quot;:&quot;tasks/semantic_segmentation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/semantic_segmentation&quot;},{&quot;title&quot;:&quot;Video classification&quot;,&quot;id&quot;:&quot;tasks/video_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/video_classification&quot;},{&quot;title&quot;:&quot;Object detection&quot;,&quot;id&quot;:&quot;tasks/object_detection&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/object_detection&quot;},{&quot;title&quot;:&quot;Zero-shot object detection&quot;,&quot;id&quot;:&quot;tasks/zero_shot_object_detection&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/zero_shot_object_detection&quot;},{&quot;title&quot;:&quot;Zero-shot image classification&quot;,&quot;id&quot;:&quot;tasks/zero_shot_image_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/zero_shot_image_classification&quot;},{&quot;title&quot;:&quot;Depth estimation&quot;,&quot;id&quot;:&quot;tasks/monocular_depth_estimation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/monocular_depth_estimation&quot;}]},{&quot;title&quot;:&quot;Multimodal&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Image captioning&quot;,&quot;id&quot;:&quot;tasks/image_captioning&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/image_captioning&quot;},{&quot;title&quot;:&quot;Document Question Answering&quot;,&quot;id&quot;:&quot;tasks/document_question_answering&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/document_question_answering&quot;},{&quot;title&quot;:&quot;Visual Question Answering&quot;,&quot;id&quot;:&quot;tasks/visual_question_answering&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/visual_question_answering&quot;},{&quot;title&quot;:&quot;Text to speech&quot;,&quot;id&quot;:&quot;tasks/text-to-speech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/text-to-speech&quot;}]},{&quot;title&quot;:&quot;Generation&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Customize the generation strategy&quot;,&quot;id&quot;:&quot;generation_strategies&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/generation_strategies&quot;}]},{&quot;title&quot;:&quot;Prompting&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Image tasks with IDEFICS&quot;,&quot;id&quot;:&quot;tasks/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/idefics&quot;}]}]},{&quot;title&quot;:&quot;Developer guides&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Use fast tokenizers from 🤗 
Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;fast_tokenizers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/fast_tokenizers&quot;},{&quot;title&quot;:&quot;Run inference with multilingual models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;multilingual&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/multilingual&quot;},{&quot;title&quot;:&quot;Use model-specific APIs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;create_a_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/create_a_model&quot;},{&quot;title&quot;:&quot;Share a custom model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;custom_models&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/custom_models&quot;},{&quot;title&quot;:&quot;Templates for chat models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;chat_templating&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/chat_templating&quot;},{&quot;title&quot;:&quot;Run training on Amazon SageMaker&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;sagemaker&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/sagemaker&quot;},{&quot;title&quot;:&quot;Export to ONNX&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;serialization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/serialization&quot;},{&quot;title&quot;:&quot;Export to TFLite&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tflite&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tflite&quot;},{&quot;title&quot;:&quot;Export to TorchScript&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;torchscript&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/torchscript&quot;},{&quot;title&quot;:&quot;Benchmarks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;benchmarks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/benchmarks&quot;},{&quot;title&quot;:&quot;Notebooks with examples&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;notebooks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/notebooks&quot;},{&quot;title&quot;:&quot;Community resources&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;community&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/community&quot;},{&quot;title&quot;:&quot;Custom Tools and Prompts&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;custom_tools&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/custom_tools&quot;},{&quot;title&quot;:&quot;Troubleshoot&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;troubleshooting&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/troubleshooting&quot;}]},{&quot;title&quot;:&quot;Performance and scalability&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Overview&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;performance&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/performance&quot;},{&quot;title&quot;:&quot;Efficient training techniques&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Methods and tools for efficient training on a single GPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_gpu_one&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_gpu_one&quot;},{&quot;title&quot;:&quot;Multiple GPUs and parallelism&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_gpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_gpu_many&quot;},{&quot;title&quot;:&quot;Efficient 
training on CPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_cpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_cpu&quot;},{&quot;title&quot;:&quot;Distributed CPU training&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_cpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_cpu_many&quot;},{&quot;title&quot;:&quot;Training on TPUs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_tpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_tpu&quot;},{&quot;title&quot;:&quot;Training on TPU with TensorFlow&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_tpu_tf&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_tpu_tf&quot;},{&quot;title&quot;:&quot;Training on Specialized Hardware&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_special&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_special&quot;},{&quot;title&quot;:&quot;Custom hardware for training&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_hardware&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_hardware&quot;},{&quot;title&quot;:&quot;Hyperparameter Search using Trainer API&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;hpo_train&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/hpo_train&quot;}]},{&quot;title&quot;:&quot;Optimizing inference&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Inference on CPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_cpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_cpu&quot;},{&quot;title&quot;:&quot;Inference on one GPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_gpu_one&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_gpu_one&quot;},{&quot;title&quot;:&quot;Inference on many GPUs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_gpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_gpu_many&quot;},{&quot;title&quot;:&quot;Inference on Specialized Hardware&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_special&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_special&quot;}]},{&quot;title&quot;:&quot;Instantiating a big model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;big_models&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/big_models&quot;},{&quot;title&quot;:&quot;Troubleshooting&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;debugging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/debugging&quot;},{&quot;title&quot;:&quot;XLA Integration for TensorFlow Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tf_xla&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tf_xla&quot;},{&quot;title&quot;:&quot;Optimize inference using `torch.compile()`&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_torch_compile&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_torch_compile&quot;}]},{&quot;title&quot;:&quot;Contribute&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;How to contribute to transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;contributing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/contributing&quot;},{&quot;title&quot;:&quot;How to add a model to 🤗 
Transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_new_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_new_model&quot;},{&quot;title&quot;:&quot;How to convert a 🤗 Transformers model to TensorFlow?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_tensorflow_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_tensorflow_model&quot;},{&quot;title&quot;:&quot;How to add a pipeline to 🤗 Transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_new_pipeline&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_new_pipeline&quot;},{&quot;title&quot;:&quot;Testing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;testing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/testing&quot;},{&quot;title&quot;:&quot;Checks on a Pull Request&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pr_checks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pr_checks&quot;}]},{&quot;title&quot;:&quot;Conceptual guides&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Philosophy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;philosophy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/philosophy&quot;},{&quot;title&quot;:&quot;Glossary&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;glossary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/glossary&quot;},{&quot;title&quot;:&quot;What 🤗 Transformers can do&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;task_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/task_summary&quot;},{&quot;title&quot;:&quot;How 🤗 Transformers solve tasks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tasks_explained&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks_explained&quot;},{&quot;title&quot;:&quot;The Transformer model family&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_summary&quot;},{&quot;title&quot;:&quot;Summary of the tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tokenizer_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tokenizer_summary&quot;},{&quot;title&quot;:&quot;Attention mechanisms&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;attention&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/attention&quot;},{&quot;title&quot;:&quot;Padding and truncation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pad_truncation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pad_truncation&quot;},{&quot;title&quot;:&quot;BERTology&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;bertology&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/bertology&quot;},{&quot;title&quot;:&quot;Perplexity of fixed-length models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perplexity&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perplexity&quot;},{&quot;title&quot;:&quot;Pipelines for webserver inference&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pipeline_webserver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pipeline_webserver&quot;},{&quot;title&quot;:&quot;Model training anatomy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_memory_anatomy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_memory_anatomy&quot;}]},{&quot;title&quot;:&quot;API&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Main 
Classes&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Agents and Tools&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/agent&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/agent&quot;},{&quot;title&quot;:&quot;Auto Classes&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/auto&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/auto&quot;},{&quot;title&quot;:&quot;Callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/callback&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/callback&quot;},{&quot;title&quot;:&quot;Configuration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/configuration&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/configuration&quot;},{&quot;title&quot;:&quot;Data Collator&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/data_collator&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/data_collator&quot;},{&quot;title&quot;:&quot;Keras callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/keras_callbacks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/keras_callbacks&quot;},{&quot;title&quot;:&quot;Logging&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/logging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/logging&quot;},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/model&quot;},{&quot;title&quot;:&quot;Text Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/text_generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/text_generation&quot;},{&quot;title&quot;:&quot;ONNX&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/onnx&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/onnx&quot;},{&quot;title&quot;:&quot;Optimization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/optimizer_schedules&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/optimizer_schedules&quot;},{&quot;title&quot;:&quot;Model outputs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/output&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/output&quot;},{&quot;title&quot;:&quot;Pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/pipelines&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/pipelines&quot;},{&quot;title&quot;:&quot;Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/processors&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/processors&quot;},{&quot;title&quot;:&quot;Quantization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/quantization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/quantization&quot;},{&quot;title&quot;:&quot;Tokenizer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/tokenizer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/tokenizer&quot;},{&quot;title&quot;:&quot;Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/trainer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/trainer&quot;},{&quot;title&quot;:&quot;DeepSpeed 
Integration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/deepspeed&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/deepspeed&quot;},{&quot;title&quot;:&quot;Feature Extractor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/feature_extractor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/feature_extractor&quot;},{&quot;title&quot;:&quot;Image Processor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/image_processor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/image_processor&quot;}]},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Text models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALBERT&quot;,&quot;id&quot;:&quot;model_doc/albert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/albert&quot;},{&quot;title&quot;:&quot;BART&quot;,&quot;id&quot;:&quot;model_doc/bart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bart&quot;},{&quot;title&quot;:&quot;BARThez&quot;,&quot;id&quot;:&quot;model_doc/barthez&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/barthez&quot;},{&quot;title&quot;:&quot;BARTpho&quot;,&quot;id&quot;:&quot;model_doc/bartpho&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bartpho&quot;},{&quot;title&quot;:&quot;BERT&quot;,&quot;id&quot;:&quot;model_doc/bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert&quot;},{&quot;title&quot;:&quot;BertGeneration&quot;,&quot;id&quot;:&quot;model_doc/bert-generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-generation&quot;},{&quot;title&quot;:&quot;BertJapanese&quot;,&quot;id&quot;:&quot;model_doc/bert-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-japanese&quot;},{&quot;title&quot;:&quot;Bertweet&quot;,&quot;id&quot;:&quot;model_doc/bertweet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bertweet&quot;},{&quot;title&quot;:&quot;BigBird&quot;,&quot;id&quot;:&quot;model_doc/big_bird&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/big_bird&quot;},{&quot;title&quot;:&quot;BigBirdPegasus&quot;,&quot;id&quot;:&quot;model_doc/bigbird_pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bigbird_pegasus&quot;},{&quot;title&quot;:&quot;BioGpt&quot;,&quot;id&quot;:&quot;model_doc/biogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/biogpt&quot;},{&quot;title&quot;:&quot;Blenderbot&quot;,&quot;id&quot;:&quot;model_doc/blenderbot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot&quot;},{&quot;title&quot;:&quot;Blenderbot 
Small&quot;,&quot;id&quot;:&quot;model_doc/blenderbot-small&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot-small&quot;},{&quot;title&quot;:&quot;BLOOM&quot;,&quot;id&quot;:&quot;model_doc/bloom&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bloom&quot;},{&quot;title&quot;:&quot;BORT&quot;,&quot;id&quot;:&quot;model_doc/bort&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bort&quot;},{&quot;title&quot;:&quot;ByT5&quot;,&quot;id&quot;:&quot;model_doc/byt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/byt5&quot;},{&quot;title&quot;:&quot;CamemBERT&quot;,&quot;id&quot;:&quot;model_doc/camembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/camembert&quot;},{&quot;title&quot;:&quot;CANINE&quot;,&quot;id&quot;:&quot;model_doc/canine&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/canine&quot;},{&quot;title&quot;:&quot;CodeGen&quot;,&quot;id&quot;:&quot;model_doc/codegen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/codegen&quot;},{&quot;title&quot;:&quot;CodeLlama&quot;,&quot;id&quot;:&quot;model_doc/code_llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/code_llama&quot;},{&quot;title&quot;:&quot;ConvBERT&quot;,&quot;id&quot;:&quot;model_doc/convbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convbert&quot;},{&quot;title&quot;:&quot;CPM&quot;,&quot;id&quot;:&quot;model_doc/cpm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpm&quot;},{&quot;title&quot;:&quot;CPMANT&quot;,&quot;id&quot;:&quot;model_doc/cpmant&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpmant&quot;},{&quot;title&quot;:&quot;CTRL&quot;,&quot;id&quot;:&quot;model_doc/ctrl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ctrl&quot;},{&quot;title&quot;:&quot;DeBERTa&quot;,&quot;id&quot;:&quot;model_doc/deberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta&quot;},{&quot;title&quot;:&quot;DeBERTa-v2&quot;,&quot;id&quot;:&quot;model_doc/deberta-v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta-v2&quot;},{&quot;title&quot;:&quot;DialoGPT&quot;,&quot;id&quot;:&quot;model_doc/dialogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dialogpt&quot;},{&quot;title&quot;:&quot;DistilBERT&quot;,&quot;id&quot;:&quot;model_doc/distilbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/distilbert&quot;},{&quot;title&quot;:&quot;DPR&quot;,&quot;id&quot;:&quot;model_doc/dpr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpr&quot;},{&quot;title&quot;:&quot;ELECTRA&quot;,&quot;id&quot;:&quot;model_doc/electra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/electra&quot;},{&quot;title&quot;:&quot;Encoder Decoder 
Models&quot;,&quot;id&quot;:&quot;model_doc/encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encoder-decoder&quot;},{&quot;title&quot;:&quot;ERNIE&quot;,&quot;id&quot;:&quot;model_doc/ernie&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie&quot;},{&quot;title&quot;:&quot;ErnieM&quot;,&quot;id&quot;:&quot;model_doc/ernie_m&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie_m&quot;},{&quot;title&quot;:&quot;ESM&quot;,&quot;id&quot;:&quot;model_doc/esm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/esm&quot;},{&quot;title&quot;:&quot;Falcon&quot;,&quot;id&quot;:&quot;model_doc/falcon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/falcon&quot;},{&quot;title&quot;:&quot;FLAN-T5&quot;,&quot;id&quot;:&quot;model_doc/flan-t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-t5&quot;},{&quot;title&quot;:&quot;FLAN-UL2&quot;,&quot;id&quot;:&quot;model_doc/flan-ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-ul2&quot;},{&quot;title&quot;:&quot;FlauBERT&quot;,&quot;id&quot;:&quot;model_doc/flaubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flaubert&quot;},{&quot;title&quot;:&quot;FNet&quot;,&quot;id&quot;:&quot;model_doc/fnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fnet&quot;},{&quot;title&quot;:&quot;FSMT&quot;,&quot;id&quot;:&quot;model_doc/fsmt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fsmt&quot;},{&quot;title&quot;:&quot;Funnel Transformer&quot;,&quot;id&quot;:&quot;model_doc/funnel&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/funnel&quot;},{&quot;title&quot;:&quot;GPT&quot;,&quot;id&quot;:&quot;model_doc/openai-gpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/openai-gpt&quot;},{&quot;title&quot;:&quot;GPT Neo&quot;,&quot;id&quot;:&quot;model_doc/gpt_neo&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neo&quot;},{&quot;title&quot;:&quot;GPT NeoX&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox&quot;},{&quot;title&quot;:&quot;GPT NeoX Japanese&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox_japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese&quot;},{&quot;title&quot;:&quot;GPT-J&quot;,&quot;id&quot;:&quot;model_doc/gptj&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptj&quot;},{&quot;title&quot;:&quot;GPT2&quot;,&quot;id&quot;:&quot;model_doc/gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt2&quot;},{&quot;title&quot;:&quot;GPTBigCode&quot;,&quot;id&quot;:&quot;model_doc/gpt_bigcode&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode&quot;},{&quot;title&quot;:&quot;GPTSAN 
Japanese&quot;,&quot;id&quot;:&quot;model_doc/gptsan-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese&quot;},{&quot;title&quot;:&quot;GPTSw3&quot;,&quot;id&quot;:&quot;model_doc/gpt-sw3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt-sw3&quot;},{&quot;title&quot;:&quot;HerBERT&quot;,&quot;id&quot;:&quot;model_doc/herbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/herbert&quot;},{&quot;title&quot;:&quot;I-BERT&quot;,&quot;id&quot;:&quot;model_doc/ibert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ibert&quot;},{&quot;title&quot;:&quot;Jukebox&quot;,&quot;id&quot;:&quot;model_doc/jukebox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/jukebox&quot;},{&quot;title&quot;:&quot;LED&quot;,&quot;id&quot;:&quot;model_doc/led&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/led&quot;},{&quot;title&quot;:&quot;LLaMA&quot;,&quot;id&quot;:&quot;model_doc/llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama&quot;},{&quot;title&quot;:&quot;Llama2&quot;,&quot;id&quot;:&quot;model_doc/llama2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama2&quot;},{&quot;title&quot;:&quot;Longformer&quot;,&quot;id&quot;:&quot;model_doc/longformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longformer&quot;},{&quot;title&quot;:&quot;LongT5&quot;,&quot;id&quot;:&quot;model_doc/longt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longt5&quot;},{&quot;title&quot;:&quot;LUKE&quot;,&quot;id&quot;:&quot;model_doc/luke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/luke&quot;},{&quot;title&quot;:&quot;M2M100&quot;,&quot;id&quot;:&quot;model_doc/m2m_100&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/m2m_100&quot;},{&quot;title&quot;:&quot;MarianMT&quot;,&quot;id&quot;:&quot;model_doc/marian&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/marian&quot;},{&quot;title&quot;:&quot;MarkupLM&quot;,&quot;id&quot;:&quot;model_doc/markuplm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/markuplm&quot;},{&quot;title&quot;:&quot;MBart and 
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot;PLBart&quot;,&quot;id&quot;
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/pipelines_utils"><!-- HTML_TAG_START -->Utilities for pipelines<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/tokenization_utils"><!-- HTML_TAG_START -->Utilities for Tokenizers<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/trainer_utils"><!-- HTML_TAG_START -->Utilities for Trainer<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/generation_utils"><!-- HTML_TAG_START -->Utilities for Generation<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/image_processing_utils"><!-- HTML_TAG_START -->Utilities for Image Processors<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/audio_utils"><!-- HTML_TAG_START -->Utilities for Audio processing<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/file_utils"><!-- HTML_TAG_START -->General Utilities<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/time_series_utils"><!-- HTML_TAG_START -->Utilities for Time Series<!-- HTML_TAG_END --> </a> </div> </div></nav></div></div></div> <div class="z-1 min-w-0 flex-1"> <div class="px-6 pt-6 md:px-12 md:pt-16 md:pb-16"><div class="max-w-4xl mx-auto mb-10"><div class="relative overflow-hidden rounded-xl bg-gradient-to-br from-orange-300/10 py-5 px-4 ring-1 ring-orange-100/70 md:px-6 md:py-8"><img alt="Hugging Face's logo" class="absolute -right-6 -bottom-6 w-28 -rotate-45 md:hidden" src="/front/assets/huggingface_logo-noborder.svg"> <div class="mb-2 text-2xl font-bold dark:text-gray-200 md:mb-0">Join the Hugging Face community</div> <p class="mb-4 text-lg text-gray-400 dark:text-gray-300 md:mb-8">and get access to the augmented documentation experience </p> <div class="mb-8 hidden space-y-4 md:block xl:flex xl:space-y-0 xl:space-x-6"><div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-indigo-100 to-indigo-100/20 dark:to-indigo-100"><svg class="text-indigo-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 
7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Collaborate on models, datasets and Spaces </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-orange-100 to-orange-100/20 dark:to-orange-50"><svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" class="text-xl text-yellow-400" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M11 15H6l7-14v8h5l-7 14v-8z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Faster examples with accelerated inference </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-gray-500/10 to-gray-500/5"><svg class="text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M14.9804 3C14.9217 3.0002 14.8631 3.00555 14.8054 3.016C11.622 3.58252 8.76073 5.30669 6.77248 7.85653C4.78422 10.4064 3.80955 13.6016 4.03612 16.8271C4.26268 20.0525 5.67447 23.0801 7.99967 25.327C10.3249 27.5738 13.3991 28.8811 16.6304 28.997C16.7944 29.003 16.9584 28.997 17.1204 28.997C19.2193 28.9984 21.2877 28.4943 23.1507 27.5274C25.0137 26.5605 26.6164 25.1592 27.8234 23.442C27.9212 23.294 27.9783 23.1229 27.9889 22.9458C27.9995 22.7687 27.9633 22.592 27.884 22.4333C27.8046 22.2747 27.6848 22.1397 27.5367 22.0421C27.3887 21.9444 27.2175 21.8875 27.0404 21.877C25.0426 21.7017 23.112 21.0693 21.3976 20.0288C19.6832 18.9884 18.231 17.5676 17.1533 15.8764C16.0756 14.1852 15.4011 12.2688 15.1822 10.2754C14.9632 8.28193 15.2055 6.26484 15.8904 4.38C15.9486 4.22913 15.97 4.06652 15.9527 3.90572C15.9354 3.74492 15.8799 3.59059 15.7909 3.45557C15.7019 3.32055 15.5819 3.20877 15.4409 3.12952C15.2999 3.05028 15.142 3.00587 14.9804 3Z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Switch between documentation themes </div></div></div> <div class="flex items-center space-x-2.5"><a href="/join"><button class="rounded-lg bg-white bg-gradient-to-br from-gray-100/20 to-gray-200/60 py-1.5 px-5 font-semibold text-gray-700 shadow-sm ring-1 ring-gray-300/60 hover:to-gray-100/70 hover:ring-gray-300/30 active:shadow-inner">Sign Up</button></a> <p class="text-gray-500 dark:text-gray-300">to get started</p></div></div></div> <div class="prose-doc prose relative mx-auto max-w-4xl break-words"><!-- HTML_TAG_START --> <link href="/docs/transformers/v4.34.0/en/_app/immutable/assets/0.e3b0c442.css" rel="modulepreload"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/entry/start.c2db227a.js"> <link 
rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/scheduler.9bc65507.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/singletons.e3057404.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/index.3b203c72.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/paths.e7de6301.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/entry/app.879d9b87.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/index.78c82d43.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/nodes/0.242aaaff.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/each.e59479a4.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/nodes/247.13b802cc.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/Tip.87d55b76.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/Docstring.4e7352e2.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/globals.7f7f1b26.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/IconCopyLink.bedaa44d.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/CodeBlock.73e038be.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/ExampleCodeBlock.872b014d.js"><!-- HEAD_svelte-1phssyn_START --><meta name="hf:doc:metadata" content="{&quot;local&quot;:&quot;trajectory-transformer&quot;,&quot;sections&quot;:[{&quot;local&quot;:&quot;overview&quot;,&quot;title&quot;:&quot;Overview&quot;},{&quot;local&quot;:&quot;transformers.TrajectoryTransformerConfig&quot;,&quot;title&quot;:&quot;TrajectoryTransformerConfig&quot;},{&quot;local&quot;:&quot;transformers.TrajectoryTransformerModel&quot;,&quot;title&quot;:&quot;TrajectoryTransformerModel&quot;}],&quot;title&quot;:&quot;Trajectory Transformer&quot;}"><!-- HEAD_svelte-1phssyn_END --> <p></p> <h1 class="relative group"><a id="trajectory-transformer" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#trajectory-transformer"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1kgfj6b">Trajectory Transformer</span></h1> <div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400"><p 
data-svelte-h="svelte-lwu440">This model is in maintenance mode only, so we won’t accept any new PRs changing its code.</p> <p data-svelte-h="svelte-4042uy">If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0. You can do so by running the following command: <code>pip install -U transformers==4.30.0</code>.</p></div> <h2 class="relative group"><a id="overview" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#overview"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1jsw1pg">Overview</span></h2> <p data-svelte-h="svelte-phmc4q">The Trajectory Transformer model was proposed in <a href="https://arxiv.org/abs/2106.02039" rel="nofollow">Offline Reinforcement Learning as One Big Sequence Modeling Problem</a> by Michael Janner, Qiyang Li, Sergey Levine.</p> <p data-svelte-h="svelte-vfdo9a">The abstract from the paper is the following:</p> <p data-svelte-h="svelte-1bgey0r"><em>Reinforcement learning (RL) is typically concerned with estimating stationary policies or single-step models, leveraging the Markov property to factorize problems in time. However, we can also view RL as a generic sequence modeling problem, with the goal being to produce a sequence of actions that leads to a sequence of high rewards. Viewed in this way, it is tempting to consider whether high-capacity sequence prediction models that work well in other domains, such as natural-language processing, can also provide effective solutions to the RL problem. To this end, we explore how RL can be tackled with the tools of sequence modeling, using a Transformer architecture to model distributions over trajectories and repurposing beam search as a planning algorithm. Framing RL as sequence modeling problem simplifies a range of design decisions, allowing us to dispense with many of the components common in offline RL algorithms. We demonstrate the flexibility of this approach across long-horizon dynamics prediction, imitation learning, goal-conditioned RL, and offline RL. Further, we show that this approach can be combined with existing model-free algorithms to yield a state-of-the-art planner in sparse-reward, long-horizon tasks.</em></p> <p data-svelte-h="svelte-axv494">Tips:</p> <p data-svelte-h="svelte-1kke2bv">This Transformer is used for deep reinforcement learning. To use it, you need to create sequences from actions, states and rewards from all previous timesteps. This model will treat all these elements together as one big sequence (a trajectory).</p> <p data-svelte-h="svelte-1usegnj">This model was contributed by <a href="https://huggingface.co/CarlCochet" rel="nofollow">CarlCochet</a>. 
## TrajectoryTransformerConfig

### class transformers.TrajectoryTransformerConfig

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/deprecated/trajectory_transformer/configuration_trajectory_transformer.py#L31)

( vocab_size = 100, action_weight = 5, reward_weight = 1, value_weight = 1, block_size = 249, action_dim = 6, observation_dim = 17, transition_dim = 25, n_layer = 4, n_head = 4, n_embd = 128, embd_pdrop = 0.1, attn_pdrop = 0.1, resid_pdrop = 0.1, learning_rate = 0.0006, max_position_embeddings = 512, initializer_range = 0.02, layer_norm_eps = 1e-12, kaiming_initializer_range = 1, use_cache = True, pad_token_id = 1, bos_token_id = 50256, eos_token_id = 50256, **kwargs )

Parameters

- **vocab_size** (`int`, *optional*, defaults to 100) — Vocabulary size of the TrajectoryTransformer model. Defines the number of different tokens that can be represented by the `trajectories` passed when calling [TrajectoryTransformerModel](/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer#transformers.TrajectoryTransformerModel).
- **action_weight** (`int`, *optional*, defaults to 5) — Weight of the action in the loss function.
- **reward_weight** (`int`, *optional*, defaults to 1) — Weight of the reward in the loss function.
- **value_weight** (`int`, *optional*, defaults to 1) — Weight of the value in the loss function.
- **block_size** (`int`, *optional*, defaults to 249) — Size of the blocks in the trajectory transformer.
- **action_dim** (`int`, *optional*, defaults to 6) — Dimension of the action space.
- **observation_dim** (`int`, *optional*, defaults to 17) — Dimension of the observation space.
- **transition_dim** (`int`, *optional*, defaults to 25) — Dimension of the transition space.
- **n_layer** (`int`, *optional*, defaults to 4) — Number of hidden layers in the Transformer encoder.
- **n_head** (`int`, *optional*, defaults to 4) — Number of attention heads for each attention layer in the Transformer encoder.
- **n_embd** (`int`, *optional*, defaults to 128) — Dimensionality of the embeddings and hidden states.
- **resid_pdrop** (`float`, *optional*, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
- **embd_pdrop** (`float`, *optional*, defaults to 0.1) — The dropout ratio for the embeddings.
- **attn_pdrop** (`float`, *optional*, defaults to 0.1) — The dropout ratio for the attention.
- **hidden_act** (`str` or `function`, *optional*, defaults to `"gelu"`) — The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported.
- **max_position_embeddings** (`int`, *optional*, defaults to 512) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
- **initializer_range** (`float`, *optional*, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- **layer_norm_eps** (`float`, *optional*, defaults to 1e-12) — The epsilon used by the layer normalization layers.
- **kaiming_initializer_range** (`float`, *optional*, defaults to 1) — A coefficient scaling the negative slope of the kaiming initializer rectifier for EinLinear layers.
- **use_cache** (`bool`, *optional*, defaults to `True`) — Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`.

This is the configuration class to store the configuration of a [TrajectoryTransformerModel](/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer#transformers.TrajectoryTransformerModel). It is used to instantiate a TrajectoryTransformer model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the TrajectoryTransformer [CarlCochet/trajectory-transformer-halfcheetah-medium-v2](https://huggingface.co/CarlCochet/trajectory-transformer-halfcheetah-medium-v2) architecture.

Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs.
Read the documentation from <a href="/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig">PretrainedConfig</a> for more information.</p> <div class="relative group rounded-md"><a id="transformers.TrajectoryTransformerConfig.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TrajectoryTransformerConfig.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> TrajectoryTransformerConfig, TrajectoryTransformerModel <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># Initializing a TrajectoryTransformer CarlCochet/trajectory-transformer-halfcheetah-medium-v2 style configuration</span> <span class="hljs-meta">&gt;&gt;&gt; </span>configuration = TrajectoryTransformerConfig() <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># Initializing a model (with random weights) from the CarlCochet/trajectory-transformer-halfcheetah-medium-v2 style configuration</span> <span class="hljs-meta">&gt;&gt;&gt; </span>model = TrajectoryTransformerModel(configuration) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># Accessing the model configuration</span> <span class="hljs-meta">&gt;&gt;&gt; </span>configuration = model.config<!-- HTML_TAG_END --></pre></div></div></div> <h2 class="relative group"><a 
id="transformers.TrajectoryTransformerModel" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TrajectoryTransformerModel"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-3v3pe2">TrajectoryTransformerModel</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.TrajectoryTransformerModel"><!-- HTML_TAG_START --><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">TrajectoryTransformerModel</span></span></h3><!-- HTML_TAG_END --> <a id="transformers.TrajectoryTransformerModel" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.TrajectoryTransformerModel"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline 
text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/deprecated/trajectory_transformer/modeling_trajectory_transformer.py#L407" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60"></span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TrajectoryTransformerModel.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TrajectoryTransformerModel.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer#transformers.TrajectoryTransformerConfig">TrajectoryTransformerConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.<!-- HTML_TAG_END --> </span></span> </li></ul> </div></div> <p data-svelte-h="svelte-d660au">The bare TrajectoryTransformer Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> sub-class. 
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.</p> <p data-svelte-h="svelte-1oiyixz">the full GPT language model, with a context size of block_size</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.TrajectoryTransformerModel.forward"><!-- HTML_TAG_START --><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>forward</span></h4><!-- HTML_TAG_END --> <a id="transformers.TrajectoryTransformerModel.forward" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.TrajectoryTransformerModel.forward"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/deprecated/trajectory_transformer/modeling_trajectory_transformer.py#L468" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">trajectories<span 
class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">past_key_values<span class="opacity-60">: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">targets<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">use_cache<span class="opacity-60">: typing.Optional[bool] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><!-- HTML_TAG_START --><span><code>transformers.models.deprecated.trajectory_transformer.modeling_trajectory_transformer.TrajectoryTransformerOutput</code> or <code>tuple(torch.FloatTensor)</code></span><!-- HTML_TAG_END --></span></p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TrajectoryTransformerModel.forward.trajectories" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TrajectoryTransformerModel.forward.trajectories"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- 
HTML_TAG_START --><strong>trajectories</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>) — Batch of trajectories, where a trajectory is a sequence of states, actions and rewards.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TrajectoryTransformerModel.forward.past_key_values" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TrajectoryTransformerModel.forward.past_key_values"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>past_key_values</strong> (<code>Tuple[Tuple[torch.Tensor]]</code> of length <code>config.n_layers</code>, <em>optional</em>) — Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see <code>past_key_values</code> output below). Can be used to speed up sequential decoding. 
The <code>input_ids</code> which have their past given to this model should not be passed as <code>input_ids</code> as they have already been computed.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TrajectoryTransformerModel.forward.targets" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TrajectoryTransformerModel.forward.targets"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>targets</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Desired targets used to compute the loss.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TrajectoryTransformerModel.forward.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TrajectoryTransformerModel.forward.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>attention_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a><!-- HTML_TAG_END --> </p></span></span></li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TrajectoryTransformerModel.forward.use_cache" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TrajectoryTransformerModel.forward.use_cache"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>use_cache</strong> (<code>bool</code>, <em>optional</em>) — If set to <code>True</code>, <code>past_key_values</code> key value states are returned and can be used to speed up decoding (see <code>past_key_values</code>).<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TrajectoryTransformerModel.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TrajectoryTransformerModel.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. 
See <code>attentions</code> under returned tensors for more detail.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TrajectoryTransformerModel.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TrajectoryTransformerModel.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. See <code>hidden_states</code> under returned tensors for more detail.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TrajectoryTransformerModel.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TrajectoryTransformerModel.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.<!-- HTML_TAG_END --> </span></span> </li></ul> <div id="transformers.TrajectoryTransformerModel.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <!-- HTML_TAG_START --> <p><code>transformers.models.deprecated.trajectory_transformer.modeling_trajectory_transformer.TrajectoryTransformerOutput</code> or <code>tuple(torch.FloatTensor)</code></p> <!-- HTML_TAG_END --> <span class="flex-auto border-t-2 border-gray-100 
dark:border-gray-700"></span></div> <p class="text-base"><!-- HTML_TAG_START --> </p><p>A <code>transformers.models.deprecated.trajectory_transformer.modeling_trajectory_transformer.TrajectoryTransformerOutput</code> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer#transformers.TrajectoryTransformerConfig">TrajectoryTransformerConfig</a>) and inputs.</p> <ul> <li><strong>loss</strong> (<code>torch.FloatTensor</code> of shape <code>(1,)</code>, <em>optional</em>, returned when <code>labels</code> is provided) — Language modeling loss.</li> <li><strong>logits</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, config.vocab_size)</code>) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).</li> <li><strong>past_key_values</strong> (<code>Tuple[Tuple[torch.Tensor]]</code>, <em>optional</em>, returned when <code>use_cache=True</code> is passed or when <code>config.use_cache=True</code>) — Tuple of length <code>config.n_layers</code>, containing tuples of tensors of shape <code>(batch_size, num_heads, sequence_length, embed_size_per_head)</code>). Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see <code>past_key_values</code> input) to speed up sequential decoding.</li> <li><strong>hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>. Hidden-states of the model at the output of each layer plus the initial embedding outputs.</li> <li><strong>attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>. 
The [TrajectoryTransformerModel](/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer#transformers.TrajectoryTransformerModel) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Examples:

```
>>> from transformers import TrajectoryTransformerModel
>>> import torch
>>> import numpy as np

>>> device = "cuda" if torch.cuda.is_available() else "cpu"

>>> model = TrajectoryTransformerModel.from_pretrained(
...     "CarlCochet/trajectory-transformer-halfcheetah-medium-v2"
... )
>>> model.to(device)
>>> model.eval()

>>> observations_dim, action_dim, batch_size = 17, 6, 256
>>> seq_length = observations_dim + action_dim + 1

>>> trajectories = torch.LongTensor(
...     [np.random.permutation(seq_length) for _ in range(batch_size)]
... ).to(device)
>>> targets = torch.LongTensor(
...     [np.random.permutation(seq_length) for _ in range(batch_size)]
... ).to(device)

>>> outputs = model(
...     trajectories,
...     targets=targets,
...     use_cache=True,
...     output_attentions=True,
...     output_hidden_states=True,
...     return_dict=True,
... )
```
2023-10-05T13:33:48.161Z
FSMT
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/fsmt#transformers.FSMTConfig
# FSMT **DISCLAIMER:** If you see something strange, file a [Github Issue](https://github.com/huggingface/transformers/issues/new?assignees=&labels=&template=bug-report.md&title) and assign @stas00. ## Overview FSMT (FairSeq MachineTranslation) models were introduced in [Facebook FAIR’s WMT19 News Translation Task Submission](https://arxiv.org/abs/1907.06616) by Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, Sergey Edunov. The abstract of the paper is the following: _This paper describes Facebook FAIR’s submission to the WMT19 shared news translation task. We participate in two language pairs and four language directions, English <-> German and English <-> Russian. Following our submission from last year, our baseline systems are large BPE-based transformer models trained with the Fairseq sequence modeling toolkit which rely on sampled back-translations. This year we experiment with different bitext data filtering schemes, as well as with adding filtered back-translated data. We also ensemble and fine-tune our models on domain-specific data, then decode using noisy channel model reranking. Our submissions are ranked first in all four directions of the human evaluation campaign. On En->De, our system significantly outperforms other systems as well as human translations. This system improves upon our WMT’18 submission by 4.5 BLEU points._ This model was contributed by [stas](https://huggingface.co/stas). The original code can be found [here](https://github.com/pytorch/fairseq/tree/master/examples/wmt19). ## Implementation Notes - FSMT uses source and target vocabulary pairs that aren’t combined into one. It doesn’t share embeddings tokens either. Its tokenizer is very similar to [XLMTokenizer](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMTokenizer) and the main model is derived from [BartModel](/docs/transformers/v4.34.0/en/model_doc/bart#transformers.BartModel). ## FSMTConfig ### class transformers.FSMTConfig [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/fsmt/configuration_fsmt.py#L39) ( langs = \['en', 'de'\] src\_vocab\_size = 42024 tgt\_vocab\_size = 42024 activation\_function = 'relu' d\_model = 1024 max\_length = 200 max\_position\_embeddings = 1024 encoder\_ffn\_dim = 4096 encoder\_layers = 12 encoder\_attention\_heads = 16 encoder\_layerdrop = 0.0 decoder\_ffn\_dim = 4096 decoder\_layers = 12 decoder\_attention\_heads = 16 decoder\_layerdrop = 0.0 attention\_dropout = 0.0 dropout = 0.1 activation\_dropout = 0.0 init\_std = 0.02 decoder\_start\_token\_id = 2 is\_encoder\_decoder = True scale\_embedding = True tie\_word\_embeddings = False num\_beams = 5 length\_penalty = 1.0 early\_stopping = False use\_cache = True pad\_token\_id = 1 bos\_token\_id = 0 eos\_token\_id = 2 forced\_eos\_token\_id = 2 \*\*common\_kwargs ) Parameters - **langs** (`List[str]`) — A list with source language and target\_language (e.g., \[‘en’, ‘ru’\]). - **src\_vocab\_size** (`int`) — Vocabulary size of the encoder. Defines the number of different tokens that can be represented by the `inputs_ids` passed to the forward method in the encoder. - **tgt\_vocab\_size** (`int`) — Vocabulary size of the decoder. Defines the number of different tokens that can be represented by the `inputs_ids` passed to the forward method in the decoder. - **d\_model** (`int`, _optional_, defaults to 1024) — Dimensionality of the layers and the pooler layer. - **encoder\_layers** (`int`, _optional_, defaults to 12) — Number of encoder layers. 
- **decoder\_layers** (`int`, _optional_, defaults to 12) — Number of decoder layers.
- **encoder\_attention\_heads** (`int`, _optional_, defaults to 16) — Number of attention heads for each attention layer in the Transformer encoder.
- **decoder\_attention\_heads** (`int`, _optional_, defaults to 16) — Number of attention heads for each attention layer in the Transformer decoder.
- **decoder\_ffn\_dim** (`int`, _optional_, defaults to 4096) — Dimensionality of the “intermediate” (often named feed-forward) layer in the decoder.
- **encoder\_ffn\_dim** (`int`, _optional_, defaults to 4096) — Dimensionality of the “intermediate” (often named feed-forward) layer in the encoder.
- **activation\_function** (`str` or `Callable`, _optional_, defaults to `"relu"`) — The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"silu"` and `"gelu_new"` are supported.
- **dropout** (`float`, _optional_, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
- **attention\_dropout** (`float`, _optional_, defaults to 0.0) — The dropout ratio for the attention probabilities.
- **activation\_dropout** (`float`, _optional_, defaults to 0.0) — The dropout ratio for activations inside the fully connected layer.
- **max\_position\_embeddings** (`int`, _optional_, defaults to 1024) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
- **init\_std** (`float`, _optional_, defaults to 0.02) — The standard deviation of the truncated\_normal\_initializer for initializing all weight matrices.
- **scale\_embedding** (`bool`, _optional_, defaults to `True`) — Scale embeddings by dividing by sqrt(d\_model).
- **bos\_token\_id** (`int`, _optional_, defaults to 0) — Beginning of stream token id.
- **pad\_token\_id** (`int`, _optional_, defaults to 1) — Padding token id.
- **eos\_token\_id** (`int`, _optional_, defaults to 2) — End of stream token id.
- **decoder\_start\_token\_id** (`int`, _optional_) — This model starts decoding with `eos_token_id`.
- **encoder\_layerdrop** (`float`, _optional_, defaults to 0.0) — The LayerDrop probability for the encoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more details.
- **decoder\_layerdrop** (`float`, _optional_, defaults to 0.0) — The LayerDrop probability for the decoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more details.
- **is\_encoder\_decoder** (`bool`, _optional_, defaults to `True`) — Whether this is an encoder/decoder model.
- **tie\_word\_embeddings** (`bool`, _optional_, defaults to `False`) — Whether to tie input and output embeddings.
- **num\_beams** (`int`, _optional_, defaults to 5) — Number of beams for beam search that will be used by default in the `generate` method of the model. 1 means no beam search.
- **length\_penalty** (`float`, _optional_, defaults to 1) — Exponential penalty to the length that is used with beam-based generation. It is applied as an exponent to the sequence length, which in turn is used to divide the score of the sequence. Since the score is the log likelihood of the sequence (i.e. negative), `length_penalty` > 0.0 promotes longer sequences, while `length_penalty` < 0.0 encourages shorter sequences.
- **early\_stopping** (`bool`, _optional_, defaults to `False`) — Flag that will be used by default in the `generate` method of the model. Whether to stop the beam search when at least `num_beams` sentences are finished per batch or not.
- **use\_cache** (`bool`, _optional_, defaults to `True`) — Whether or not the model should return the last key/values attentions (not used by all models).
- **forced\_eos\_token\_id** (`int`, _optional_, defaults to 2) — The id of the token to force as the last generated token when `max_length` is reached. Usually set to `eos_token_id`.

This is the configuration class to store the configuration of a [FSMTModel](/docs/transformers/v4.34.0/en/model_doc/fsmt#transformers.FSMTModel). It is used to instantiate a FSMT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the FSMT [facebook/wmt19-en-ru](https://huggingface.co/facebook/wmt19-en-ru) architecture.

Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information.

Examples:

```
>>> from transformers import FSMTConfig, FSMTModel

>>> # Initializing a FSMT facebook/wmt19-en-ru style configuration
>>> config = FSMTConfig()

>>> # Initializing a model (with random weights) from the configuration
>>> model = FSMTModel(config)

>>> # Accessing the model configuration
>>> configuration = model.config
```

## FSMTTokenizer

### class transformers.FSMTTokenizer

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/fsmt/tokenization_fsmt.py#L135)

( langs = None src\_vocab\_file = None tgt\_vocab\_file = None merges\_file = None do\_lower\_case = False unk\_token = '<unk>' bos\_token = '<s>' sep\_token = '</s>' pad\_token = '<pad>' \*\*kwargs )

Parameters

- **langs** (`List[str]`) — A list of two languages to translate from and to, for instance `["en", "ru"]`.
- **src\_vocab\_file** (`str`) — File containing the vocabulary for the source language.
- **tgt\_vocab\_file** (`str`) — File containing the vocabulary for the target language.
- **merges\_file** (`str`) — File containing the merges.
- **do\_lower\_case** (`bool`, _optional_, defaults to `False`) — Whether or not to lowercase the input when tokenizing.
- **unk\_token** (`str`, _optional_, defaults to `"<unk>"`) — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
- **bos\_token** (`str`, _optional_, defaults to `"<s>"`) — The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token. When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the `cls_token`.
- **sep\_token** (`str`, _optional_, defaults to `"</s>"`) — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.
- **pad\_token** (`str`, _optional_, defaults to `"<pad>"`) — The token used for padding, for example when batching sequences of different lengths.

Construct a FAIRSEQ Transformer tokenizer, based on Byte-Pair Encoding. The tokenization process is the following:

- Moses preprocessing and tokenization.
- Normalizing all input text.
- The arguments `special_tokens` and the function `set_special_tokens` can be used to add additional symbols (like “**classify**”) to a vocabulary.
- The argument `langs` defines a pair of languages.
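As a quick end-to-end sketch of how the tokenizer and a generation model fit together (an illustration only: it assumes the [facebook/wmt19-en-ru](https://huggingface.co/facebook/wmt19-en-ru) checkpoint referenced above can be downloaded, uses `FSMTForConditionalGeneration` rather than the bare `FSMTModel` documented below, and the exact translation may vary):

```
>>> from transformers import FSMTForConditionalGeneration, FSMTTokenizer

>>> mname = "facebook/wmt19-en-ru"
>>> tokenizer = FSMTTokenizer.from_pretrained(mname)
>>> model = FSMTForConditionalGeneration.from_pretrained(mname)

>>> # Moses preprocessing + BPE with the source (English) vocabulary
>>> input_ids = tokenizer("Machine learning is great, isn't it?", return_tensors="pt").input_ids

>>> # generate() picks up num_beams=5, length_penalty and early_stopping from the config defaults above
>>> outputs = model.generate(input_ids)

>>> # generated ids are decoded through the separate target (Russian) vocabulary
>>> print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # e.g. "Машинное обучение - это здорово, не так ли?"
```

Because the source and target vocabularies are kept separate (see the Implementation Notes above), `decode()` is intended for target-side ids such as the ones returned by `generate()`.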
This tokenizer inherits from [PreTrainedTokenizer](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer) which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.

#### build\_inputs\_with\_special\_tokens

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/fsmt/tokenization_fsmt.py#L403)

( token\_ids\_0: typing.List\[int\] token\_ids\_1: typing.Optional\[typing.List\[int\]\] = None ) → `List[int]`

Parameters

- **token\_ids\_0** (`List[int]`) — List of IDs to which the special tokens will be added.
- **token\_ids\_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs.

Returns a list of [input IDs](../glossary#input-ids) with the appropriate special tokens.

Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. A FAIRSEQ Transformer sequence has the following format:

- single sequence: `<s> X </s>`
- pair of sequences: `<s> A </s> B </s>`

#### get\_special\_tokens\_mask

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/fsmt/tokenization_fsmt.py#L429)

( token\_ids\_0: typing.List\[int\] token\_ids\_1: typing.Optional\[typing.List\[int\]\] = None already\_has\_special\_tokens: bool = False ) → `List[int]`

Parameters

- **token\_ids\_0** (`List[int]`) — List of IDs.
- **token\_ids\_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs.
- **already\_has\_special\_tokens** (`bool`, _optional_, defaults to `False`) — Whether or not the token list is already formatted with special tokens for the model.

Returns a list of integers in the range \[0, 1\]: 1 for a special token, 0 for a sequence token.

Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer `prepare_for_model` method.

#### create\_token\_type\_ids\_from\_sequences

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/fsmt/tokenization_fsmt.py#L457)

( token\_ids\_0: typing.List\[int\] token\_ids\_1: typing.Optional\[typing.List\[int\]\] = None ) → `List[int]`

Parameters

- **token\_ids\_0** (`List[int]`) — List of IDs.
- **token\_ids\_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs.

Returns a list of [token type IDs](../glossary#token-type-ids) according to the given sequence(s).

Create a mask from the two sequences passed to be used in a sequence-pair classification task. A FAIRSEQ Transformer sequence pair mask has the following format:

```
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence    | second sequence |
```

If `token_ids_1` is `None`, this method only returns the first portion of the mask (0s).
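As a small illustration of the three helpers documented above (a sketch only; the checkpoint name is just an example and the tokenizer files need to be downloadable):

```
>>> from transformers import FSMTTokenizer

>>> tokenizer = FSMTTokenizer.from_pretrained("facebook/wmt19-en-ru")

>>> ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Machine learning is great"))
>>> ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("isn't it?"))

>>> # concatenate the two sequences and insert the separator token(s) described above
>>> pair_ids = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)

>>> # 0s cover the first sequence (plus its separator), 1s cover the second
>>> token_type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)

>>> # 1 marks an inserted special token, 0 marks an ordinary sequence token
>>> special_tokens_mask = tokenizer.get_special_tokens_mask(ids_a, ids_b)

>>> len(pair_ids) == len(token_type_ids) == len(special_tokens_mask)
True
```

In everyday use these helpers are applied automatically when the tokenizer is called on text, so calling them directly is mostly useful for inspecting how the special tokens are laid out.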
#### save\_vocabulary

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/fsmt/tokenization_fsmt.py#L490)

( save\_directory: str filename\_prefix: typing.Optional\[str\] = None )

## FSMTModel

### class transformers.FSMTModel

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/fsmt/modeling_fsmt.py#L1036)

( config: FSMTConfig )

Parameters

- **config** ([FSMTConfig](/docs/transformers/v4.34.0/en/model_doc/fsmt#transformers.FSMTConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

The bare FSMT Model outputting raw hidden-states without any specific head on top.

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

#### forward

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/fsmt/modeling_fsmt.py#L1063)

( input\_ids: LongTensor attention\_mask: typing.Optional\[torch.Tensor\] = None decoder\_input\_ids: typing.Optional\[torch.LongTensor\] = None decoder\_attention\_mask: typing.Optional\[torch.BoolTensor\] = None head\_mask: typing.Optional\[torch.Tensor\] = None decoder\_head\_mask: typing.Optional\[torch.Tensor\] = None cross\_attn\_head\_mask: typing.Optional\[torch.Tensor\] = None encoder\_outputs: typing.Optional\[typing.Tuple\[torch.FloatTensor\]\] = None past\_key\_values: typing.Optional\[typing.Tuple\[torch.FloatTensor\]\] = None use\_cache: typing.Optional\[bool\] = None output\_attentions: typing.Optional\[bool\] = None output\_hidden\_states: typing.Optional\[bool\] = None inputs\_embeds: typing.Optional\[torch.FloatTensor\] = None decoder\_inputs\_embeds: typing.Optional\[torch.FloatTensor\] = None return\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.Seq2SeqModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.Seq2SeqModelOutput) or `tuple(torch.FloatTensor)`

Parameters

- **input\_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using `FSMTTokenizer`. See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.**call**()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details. [What are input IDs?](../glossary#input-ids)
- **attention\_mask** (`torch.Tensor` of shape `(batch_size, sequence_length)`, _optional_) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.
[What are attention masks?](../glossary#attention-mask) - **decoder\_input\_ids** (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, _optional_) — Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.**call**()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details. [What are decoder input IDs?](../glossary#decoder-input-ids) FSMT uses the `eos_token_id` as the starting token for `decoder_input_ids` generation. If `past_key_values` is used, optionally only the last `decoder_input_ids` have to be input (see `past_key_values`). - **decoder\_attention\_mask** (`torch.BoolTensor` of shape `(batch_size, target_sequence_length)`, _optional_) — Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default. - **head\_mask** (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, _optional_) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. - **decoder\_head\_mask** (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, _optional_) — Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. - **cross\_attn\_head\_mask** (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, _optional_) — Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. - **encoder\_outputs** (`Tuple(torch.FloatTensor)`, _optional_) — Tuple consists of (`last_hidden_state`, _optional_: `hidden_states`, _optional_: `attentions`) `last_hidden_state` of shape `(batch_size, sequence_length, hidden_size)` is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. - **past\_key\_values** (`Tuple(torch.FloatTensor)` of length `config.n_layers` with each tuple having 4 tensors of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`) — Contains precomputed key and value hidden-states of the attention blocks. Can be used to speed up decoding. If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that don’t have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)`. - **inputs\_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, _optional_) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model’s internal embedding lookup matrix. - **decoder\_inputs\_embeds** (`torch.FloatTensor` of shape `(batch_size, target_sequence_length, hidden_size)`, _optional_) — Optionally, instead of passing `decoder_input_ids` you can choose to directly pass an embedded representation. 
If `past_key_values` is used, optionally only the last `decoder_inputs_embeds` have to be input (see `past_key_values`). This is useful if you want more control over how to convert `decoder_input_ids` indices into associated vectors than the model’s internal embedding lookup matrix. If `decoder_input_ids` and `decoder_inputs_embeds` are both unset, `decoder_inputs_embeds` takes the value of `inputs_embeds`. - **use\_cache** (`bool`, _optional_, defaults to `True`) — If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`). - **output\_attentions** (`bool`, _optional_) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. - **output\_hidden\_states** (`bool`, _optional_) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. - **return\_dict** (`bool`, _optional_) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple. A [transformers.modeling\_outputs.Seq2SeqModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.Seq2SeqModelOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([FSMTConfig](/docs/transformers/v4.34.0/en/model_doc/fsmt#transformers.FSMTConfig)) and inputs. - **last\_hidden\_state** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`) — Sequence of hidden-states at the output of the last layer of the decoder of the model. If `past_key_values` is used only the last hidden-state of the sequences of shape `(batch_size, 1, hidden_size)` is output. - **past\_key\_values** (`tuple(tuple(torch.FloatTensor))`, _optional_, returned when `use_cache=True` is passed or when `config.use_cache=True`) — Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of shape `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`. Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding. - **decoder\_hidden\_states** (`tuple(torch.FloatTensor)`, _optional_, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs. - **decoder\_attentions** (`tuple(torch.FloatTensor)`, _optional_, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. 
- **cross\_attentions** (`tuple(torch.FloatTensor)`, _optional_, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. - **encoder\_last\_hidden\_state** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, _optional_) — Sequence of hidden-states at the output of the last layer of the encoder of the model. - **encoder\_hidden\_states** (`tuple(torch.FloatTensor)`, _optional_, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs. - **encoder\_attentions** (`tuple(torch.FloatTensor)`, _optional_, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. The [FSMTModel](/docs/transformers/v4.34.0/en/model_doc/fsmt#transformers.FSMTModel) forward method overrides the `__call__` special method. Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example:

```
>>> from transformers import AutoTokenizer, FSMTModel
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/wmt19-ru-en")
>>> model = FSMTModel.from_pretrained("facebook/wmt19-ru-en")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)

>>> last_hidden_states = outputs.last_hidden_state
```

## FSMTForConditionalGeneration ### class transformers.FSMTForConditionalGeneration [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/fsmt/modeling_fsmt.py#L1177) ( config: FSMTConfig ) Parameters - **config** ([FSMTConfig](/docs/transformers/v4.34.0/en/model_doc/fsmt#transformers.FSMTConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. The FSMT Model with a language modeling head. Can be used for summarization. This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/fsmt/modeling_fsmt.py#L1189) ( input\_ids: LongTensor attention\_mask: typing.Optional\[torch.Tensor\] = None decoder\_input\_ids: typing.Optional\[torch.LongTensor\] = None decoder\_attention\_mask: typing.Optional\[torch.BoolTensor\] = None head\_mask: typing.Optional\[torch.Tensor\] = None decoder\_head\_mask: typing.Optional\[torch.Tensor\] = None cross\_attn\_head\_mask: typing.Optional\[torch.Tensor\] = None encoder\_outputs: typing.Optional\[typing.Tuple\[torch.FloatTensor\]\] = None past\_key\_values: typing.Optional\[typing.Tuple\[torch.FloatTensor\]\] = None inputs\_embeds: typing.Optional\[torch.Tensor\] = None decoder\_inputs\_embeds: typing.Optional\[torch.Tensor\] = None labels: typing.Optional\[torch.LongTensor\] = None use\_cache: typing.Optional\[bool\] = None output\_attentions: typing.Optional\[bool\] = None output\_hidden\_states: typing.Optional\[bool\] = None return\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.Seq2SeqLMOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.Seq2SeqLMOutput) or `tuple(torch.FloatTensor)` Parameters - **input\_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using `FSMTTokenizer`. See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.**call**()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details. [What are input IDs?](../glossary#input-ids) - **attention\_mask** (`torch.Tensor` of shape `(batch_size, sequence_length)`, _optional_) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - **decoder\_input\_ids** (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, _optional_) — Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.**call**()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details. [What are decoder input IDs?](../glossary#decoder-input-ids) FSMT uses the `eos_token_id` as the starting token for `decoder_input_ids` generation. If `past_key_values` is used, optionally only the last `decoder_input_ids` have to be input (see `past_key_values`). - **decoder\_attention\_mask** (`torch.BoolTensor` of shape `(batch_size, target_sequence_length)`, _optional_) — Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default. - **head\_mask** (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, _optional_) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**.
- **decoder\_head\_mask** (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, _optional_) — Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. - **cross\_attn\_head\_mask** (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, _optional_) — Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. - **encoder\_outputs** (`Tuple(torch.FloatTensor)`, _optional_) — Tuple consists of (`last_hidden_state`, _optional_: `hidden_states`, _optional_: `attentions`) `last_hidden_state` of shape `(batch_size, sequence_length, hidden_size)` is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. - **past\_key\_values** (`Tuple(torch.FloatTensor)` of length `config.n_layers` with each tuple having 4 tensors of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`) — Contains precomputed key and value hidden-states of the attention blocks. Can be used to speed up decoding. If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that don’t have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)`. - **inputs\_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, _optional_) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model’s internal embedding lookup matrix. - **decoder\_inputs\_embeds** (`torch.FloatTensor` of shape `(batch_size, target_sequence_length, hidden_size)`, _optional_) — Optionally, instead of passing `decoder_input_ids` you can choose to directly pass an embedded representation. If `past_key_values` is used, optionally only the last `decoder_inputs_embeds` have to be input (see `past_key_values`). This is useful if you want more control over how to convert `decoder_input_ids` indices into associated vectors than the model’s internal embedding lookup matrix. If `decoder_input_ids` and `decoder_inputs_embeds` are both unset, `decoder_inputs_embeds` takes the value of `inputs_embeds`. - **use\_cache** (`bool`, _optional_, defaults to `True`) — If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`). - **output\_attentions** (`bool`, _optional_) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. - **output\_hidden\_states** (`bool`, _optional_) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. - **return\_dict** (`bool`, _optional_) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple. - **labels** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, _optional_) — Labels for computing the masked language modeling loss. Indices should either be in `[0, ..., config.vocab_size]` or -100 (see `input_ids` docstring). 
Tokens with indices set to `-100` are ignored (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`. A [transformers.modeling\_outputs.Seq2SeqLMOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.Seq2SeqLMOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([FSMTConfig](/docs/transformers/v4.34.0/en/model_doc/fsmt#transformers.FSMTConfig)) and inputs. - **loss** (`torch.FloatTensor` of shape `(1,)`, _optional_, returned when `labels` is provided) — Language modeling loss. - **logits** (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). - **past\_key\_values** (`tuple(tuple(torch.FloatTensor))`, _optional_, returned when `use_cache=True` is passed or when `config.use_cache=True`) — Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of shape `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`. Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding. - **decoder\_hidden\_states** (`tuple(torch.FloatTensor)`, _optional_, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. - **decoder\_attentions** (`tuple(torch.FloatTensor)`, _optional_, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. - **cross\_attentions** (`tuple(torch.FloatTensor)`, _optional_, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. - **encoder\_last\_hidden\_state** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, _optional_) — Sequence of hidden-states at the output of the last layer of the encoder of the model. - **encoder\_hidden\_states** (`tuple(torch.FloatTensor)`, _optional_, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. 
- **encoder\_attentions** (`tuple(torch.FloatTensor)`, _optional_, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. The [FSMTForConditionalGeneration](/docs/transformers/v4.34.0/en/model_doc/fsmt#transformers.FSMTForConditionalGeneration) forward method overrides the `__call__` special method. Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Translation example:

```
>>> from transformers import AutoTokenizer, FSMTForConditionalGeneration

>>> mname = "facebook/wmt19-ru-en"
>>> model = FSMTForConditionalGeneration.from_pretrained(mname)
>>> tokenizer = AutoTokenizer.from_pretrained(mname)

>>> src_text = "Машинное обучение - это здорово, не так ли?"
>>> input_ids = tokenizer(src_text, return_tensors="pt").input_ids
>>> outputs = model.generate(input_ids, num_beams=5, num_return_sequences=3)
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
"Machine learning is great, isn't it?"
```
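Building on the example above, the following sketch (not part of the original reference) performs one manual decoding step to illustrate two points from the parameter docs: FSMT starts decoding from `eos_token_id`, and the `past_key_values` returned with `use_cache=True` can be fed back so later steps only need the newly generated token. The checkpoint is the same `facebook/wmt19-ru-en` as above.

```
>>> import torch
>>> from transformers import AutoTokenizer, FSMTForConditionalGeneration

>>> mname = "facebook/wmt19-ru-en"
>>> tokenizer = AutoTokenizer.from_pretrained(mname)
>>> model = FSMTForConditionalGeneration.from_pretrained(mname)

>>> input_ids = tokenizer("Машинное обучение - это здорово, не так ли?", return_tensors="pt").input_ids

>>> # FSMT uses eos_token_id as the decoder start token (see `decoder_input_ids` above)
>>> decoder_input_ids = torch.tensor([[model.config.eos_token_id]])

>>> # First step: request the cache so the computed key/value states can be reused
>>> outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids, use_cache=True)
>>> next_token = outputs.logits[:, -1, :].argmax(dim=-1, keepdim=True)

>>> # Later steps only pass the newly generated token plus past_key_values
>>> outputs = model(
...     input_ids=input_ids,
...     decoder_input_ids=next_token,
...     past_key_values=outputs.past_key_values,
...     use_cache=True,
... )
```

In practice `generate()` runs this loop (plus beam search) for you, as in the translation example above; the manual version is only meant to show how these arguments fit together.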
Models&quot;,&quot;id&quot;:&quot;model_doc/encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encoder-decoder&quot;},{&quot;title&quot;:&quot;ERNIE&quot;,&quot;id&quot;:&quot;model_doc/ernie&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie&quot;},{&quot;title&quot;:&quot;ErnieM&quot;,&quot;id&quot;:&quot;model_doc/ernie_m&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie_m&quot;},{&quot;title&quot;:&quot;ESM&quot;,&quot;id&quot;:&quot;model_doc/esm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/esm&quot;},{&quot;title&quot;:&quot;Falcon&quot;,&quot;id&quot;:&quot;model_doc/falcon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/falcon&quot;},{&quot;title&quot;:&quot;FLAN-T5&quot;,&quot;id&quot;:&quot;model_doc/flan-t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-t5&quot;},{&quot;title&quot;:&quot;FLAN-UL2&quot;,&quot;id&quot;:&quot;model_doc/flan-ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-ul2&quot;},{&quot;title&quot;:&quot;FlauBERT&quot;,&quot;id&quot;:&quot;model_doc/flaubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flaubert&quot;},{&quot;title&quot;:&quot;FNet&quot;,&quot;id&quot;:&quot;model_doc/fnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fnet&quot;},{&quot;title&quot;:&quot;FSMT&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/fsmt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fsmt&quot;},{&quot;title&quot;:&quot;Funnel Transformer&quot;,&quot;id&quot;:&quot;model_doc/funnel&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/funnel&quot;},{&quot;title&quot;:&quot;GPT&quot;,&quot;id&quot;:&quot;model_doc/openai-gpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/openai-gpt&quot;},{&quot;title&quot;:&quot;GPT Neo&quot;,&quot;id&quot;:&quot;model_doc/gpt_neo&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neo&quot;},{&quot;title&quot;:&quot;GPT NeoX&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox&quot;},{&quot;title&quot;:&quot;GPT NeoX Japanese&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox_japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese&quot;},{&quot;title&quot;:&quot;GPT-J&quot;,&quot;id&quot;:&quot;model_doc/gptj&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptj&quot;},{&quot;title&quot;:&quot;GPT2&quot;,&quot;id&quot;:&quot;model_doc/gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt2&quot;},{&quot;title&quot;:&quot;GPTBigCode&quot;,&quot;id&quot;:&quot;model_doc/gpt_bigcode&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode&quot;},{&quot;title&quot;:&quot;GPTSAN 
Japanese&quot;,&quot;id&quot;:&quot;model_doc/gptsan-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese&quot;},{&quot;title&quot;:&quot;GPTSw3&quot;,&quot;id&quot;:&quot;model_doc/gpt-sw3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt-sw3&quot;},{&quot;title&quot;:&quot;HerBERT&quot;,&quot;id&quot;:&quot;model_doc/herbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/herbert&quot;},{&quot;title&quot;:&quot;I-BERT&quot;,&quot;id&quot;:&quot;model_doc/ibert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ibert&quot;},{&quot;title&quot;:&quot;Jukebox&quot;,&quot;id&quot;:&quot;model_doc/jukebox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/jukebox&quot;},{&quot;title&quot;:&quot;LED&quot;,&quot;id&quot;:&quot;model_doc/led&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/led&quot;},{&quot;title&quot;:&quot;LLaMA&quot;,&quot;id&quot;:&quot;model_doc/llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama&quot;},{&quot;title&quot;:&quot;Llama2&quot;,&quot;id&quot;:&quot;model_doc/llama2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama2&quot;},{&quot;title&quot;:&quot;Longformer&quot;,&quot;id&quot;:&quot;model_doc/longformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longformer&quot;},{&quot;title&quot;:&quot;LongT5&quot;,&quot;id&quot;:&quot;model_doc/longt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longt5&quot;},{&quot;title&quot;:&quot;LUKE&quot;,&quot;id&quot;:&quot;model_doc/luke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/luke&quot;},{&quot;title&quot;:&quot;M2M100&quot;,&quot;id&quot;:&quot;model_doc/m2m_100&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/m2m_100&quot;},{&quot;title&quot;:&quot;MarianMT&quot;,&quot;id&quot;:&quot;model_doc/marian&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/marian&quot;},{&quot;title&quot;:&quot;MarkupLM&quot;,&quot;id&quot;:&quot;model_doc/markuplm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/markuplm&quot;},{&quot;title&quot;:&quot;MBart and 
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot;PLBart&quot;,&quot;id&quot;
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/
model_doc/wavlm&quot;},{&quot;title&quot;:&quot;Whisper&quot;,&quot;id&quot;:&quot;model_doc/whisper&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/whisper&quot;},{&quot;title&quot;:&quot;XLS-R&quot;,&quot;id&quot;:&quot;model_doc/xls_r&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xls_r&quot;},{&quot;title&quot;:&quot;XLSR-Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/xlsr_wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2&quot;}]},{&quot;title&quot;:&quot;Multimodal models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALIGN&quot;,&quot;id&quot;:&quot;model_doc/align&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/align&quot;},{&quot;title&quot;:&quot;AltCLIP&quot;,&quot;id&quot;:&quot;model_doc/altclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/altclip&quot;},{&quot;title&quot;:&quot;BLIP&quot;,&quot;id&quot;:&quot;model_doc/blip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip&quot;},{&quot;title&quot;:&quot;BLIP-2&quot;,&quot;id&quot;:&quot;model_doc/blip-2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip-2&quot;},{&quot;title&quot;:&quot;BridgeTower&quot;,&quot;id&quot;:&quot;model_doc/bridgetower&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bridgetower&quot;},{&quot;title&quot;:&quot;BROS&quot;,&quot;id&quot;:&quot;model_doc/bros&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bros&quot;},{&quot;title&quot;:&quot;Chinese-CLIP&quot;,&quot;id&quot;:&quot;model_doc/chinese_clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/chinese_clip&quot;},{&quot;title&quot;:&quot;CLIP&quot;,&quot;id&quot;:&quot;model_doc/clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clip&quot;},{&quot;title&quot;:&quot;CLIPSeg&quot;,&quot;id&quot;:&quot;model_doc/clipseg&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clipseg&quot;},{&quot;title&quot;:&quot;Data2Vec&quot;,&quot;id&quot;:&quot;model_doc/data2vec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/data2vec&quot;},{&quot;title&quot;:&quot;DePlot&quot;,&quot;id&quot;:&quot;model_doc/deplot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deplot&quot;},{&quot;title&quot;:&quot;Donut&quot;,&quot;id&quot;:&quot;model_doc/donut&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/donut&quot;},{&quot;title&quot;:&quot;FLAVA&quot;,&quot;id&quot;:&quot;model_doc/flava&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flava&quot;},{&quot;title&quot;:&quot;GIT&quot;,&quot;id&quot;:&quot;model_doc/git&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/git&quot;},{&quot;title&quot;:&quot;GroupViT&quot;,&quot;id&quot;:&quot;model_doc/groupvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/groupvit&quot;},{&quot;title&quot;:&quot;IDEFICS&quot;,&quot;id&quot;:&quot;model_doc/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/idefics&quot;},{&quot;title&quot;:&quot;InstructBLIP&quot;,&quot;id&quot;:&quot;model_doc/instructblip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/instructblip&quot;},{&quot;title&quot;:&quot;LayoutLM&quot;,&quot;id&quot;:&quot;model_doc/layoutlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlm&quot;},{&quot;title&quot;:&quot;LayoutLMV2&quot;,&quot;i
d&quot;:&quot;model_doc/layoutlmv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv2&quot;},{&quot;title&quot;:&quot;LayoutLMV3&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv3&quot;},{&quot;title&quot;:&quot;LayoutXLM&quot;,&quot;id&quot;:&quot;model_doc/layoutxlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutxlm&quot;},{&quot;title&quot;:&quot;LiLT&quot;,&quot;id&quot;:&quot;model_doc/lilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lilt&quot;},{&quot;title&quot;:&quot;LXMERT&quot;,&quot;id&quot;:&quot;model_doc/lxmert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lxmert&quot;},{&quot;title&quot;:&quot;MatCha&quot;,&quot;id&quot;:&quot;model_doc/matcha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/matcha&quot;},{&quot;title&quot;:&quot;MGP-STR&quot;,&quot;id&quot;:&quot;model_doc/mgp-str&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mgp-str&quot;},{&quot;title&quot;:&quot;Nougat&quot;,&quot;id&quot;:&quot;model_doc/nougat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nougat&quot;},{&quot;title&quot;:&quot;OneFormer&quot;,&quot;id&quot;:&quot;model_doc/oneformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/oneformer&quot;},{&quot;title&quot;:&quot;OWL-ViT&quot;,&quot;id&quot;:&quot;model_doc/owlvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/owlvit&quot;},{&quot;title&quot;:&quot;Perceiver&quot;,&quot;id&quot;:&quot;model_doc/perceiver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/perceiver&quot;},{&quot;title&quot;:&quot;Pix2Struct&quot;,&quot;id&quot;:&quot;model_doc/pix2struct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pix2struct&quot;},{&quot;title&quot;:&quot;Segment Anything&quot;,&quot;id&quot;:&quot;model_doc/sam&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sam&quot;},{&quot;title&quot;:&quot;Speech Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/speech-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder&quot;},{&quot;title&quot;:&quot;TAPAS&quot;,&quot;id&quot;:&quot;model_doc/tapas&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapas&quot;},{&quot;title&quot;:&quot;TrOCR&quot;,&quot;id&quot;:&quot;model_doc/trocr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trocr&quot;},{&quot;title&quot;:&quot;TVLT&quot;,&quot;id&quot;:&quot;model_doc/tvlt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tvlt&quot;},{&quot;title&quot;:&quot;ViLT&quot;,&quot;id&quot;:&quot;model_doc/vilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vilt&quot;},{&quot;title&quot;:&quot;Vision Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/vision-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder&quot;},{&quot;title&quot;:&quot;Vision Text Dual 
Encoder&quot;,&quot;id&quot;:&quot;model_doc/vision-text-dual-encoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder&quot;},{&quot;title&quot;:&quot;VisualBERT&quot;,&quot;id&quot;:&quot;model_doc/visual_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/visual_bert&quot;},{&quot;title&quot;:&quot;X-CLIP&quot;,&quot;id&quot;:&quot;model_doc/xclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xclip&quot;}]},{&quot;title&quot;:&quot;Reinforcement learning models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Decision Transformer&quot;,&quot;id&quot;:&quot;model_doc/decision_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/decision_transformer&quot;},{&quot;title&quot;:&quot;Trajectory Transformer&quot;,&quot;id&quot;:&quot;model_doc/trajectory_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer&quot;}]},{&quot;title&quot;:&quot;Time series models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Autoformer&quot;,&quot;id&quot;:&quot;model_doc/autoformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/autoformer&quot;},{&quot;title&quot;:&quot;Informer&quot;,&quot;id&quot;:&quot;model_doc/informer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/informer&quot;},{&quot;title&quot;:&quot;Time Series Transformer&quot;,&quot;id&quot;:&quot;model_doc/time_series_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/time_series_transformer&quot;}]},{&quot;title&quot;:&quot;Graph models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Graphormer&quot;,&quot;id&quot;:&quot;model_doc/graphormer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/graphormer&quot;}]}]},{&quot;title&quot;:&quot;Internal Helpers&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Custom Layers and Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/modeling_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/modeling_utils&quot;},{&quot;title&quot;:&quot;Utilities for pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/pipelines_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/pipelines_utils&quot;},{&quot;title&quot;:&quot;Utilities for Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/tokenization_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/tokenization_utils&quot;},{&quot;title&quot;:&quot;Utilities for Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/trainer_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/trainer_utils&quot;},{&quot;title&quot;:&quot;Utilities for Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/generation_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/generation_utils&quot;},{&quot;title&quot;:&quot;Utilities for Image Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/image_processing_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/image_processing_utils&quot;},{&quot;title&quot;:&quot;Utilities for Audio 
processing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/audio_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/audio_utils&quot;},{&quot;title&quot;:&quot;General Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/file_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/file_utils&quot;},{&quot;title&quot;:&quot;Utilities for Time Series&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/time_series_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/time_series_utils&quot;}]}]}],&quot;chapterId&quot;:&quot;model_doc/fsmt&quot;,&quot;docType&quot;:&quot;docs&quot;,&quot;isLoggedIn&quot;:false,&quot;lang&quot;:&quot;en&quot;,&quot;langs&quot;:[&quot;de&quot;,&quot;en&quot;,&quot;es&quot;,&quot;fr&quot;,&quot;it&quot;,&quot;ko&quot;,&quot;pt&quot;,&quot;zh&quot;],&quot;library&quot;:&quot;transformers&quot;,&quot;theme&quot;:&quot;light&quot;,&quot;version&quot;:&quot;v4.34.0&quot;,&quot;versions&quot;:[{&quot;version&quot;:&quot;main&quot;},{&quot;version&quot;:&quot;v4.34.0&quot;},{&quot;version&quot;:&quot;v4.33.3&quot;},{&quot;version&quot;:&quot;v4.33.2&quot;},{&quot;version&quot;:&quot;v4.33.0&quot;},{&quot;version&quot;:&quot;v4.32.1&quot;},{&quot;version&quot;:&quot;v4.32.0&quot;},{&quot;version&quot;:&quot;v4.31.0&quot;},{&quot;version&quot;:&quot;v4.30.0&quot;},{&quot;version&quot;:&quot;v4.29.1&quot;},{&quot;version&quot;:&quot;v4.29.0&quot;},{&quot;version&quot;:&quot;v4.28.1&quot;},{&quot;version&quot;:&quot;v4.28.0&quot;},{&quot;version&quot;:&quot;v4.27.2&quot;},{&quot;version&quot;:&quot;v4.27.1&quot;},{&quot;version&quot;:&quot;v4.27.0&quot;},{&quot;version&quot;:&quot;v4.26.1&quot;},{&quot;version&quot;:&quot;v4.26.0&quot;},{&quot;version&quot;:&quot;v4.25.1&quot;},{&quot;version&quot;:&quot;v4.24.0&quot;},{&quot;version&quot;:&quot;v4.23.1&quot;},{&quot;version&quot;:&quot;v4.23.0&quot;},{&quot;version&quot;:&quot;v4.22.2&quot;},{&quot;version&quot;:&quot;v4.22.1&quot;},{&quot;version&quot;:&quot;v4.22.0&quot;},{&quot;version&quot;:&quot;v4.21.3&quot;},{&quot;version&quot;:&quot;v4.21.2&quot;},{&quot;version&quot;:&quot;v4.21.1&quot;},{&quot;version&quot;:&quot;v4.21.0&quot;},{&quot;version&quot;:&quot;v4.20.1&quot;},{&quot;version&quot;:&quot;v4.20.0&quot;},{&quot;version&quot;:&quot;v4.19.4&quot;},{&quot;version&quot;:&quot;v4.19.3&quot;},{&quot;version&quot;:&quot;v4.19.2&quot;},{&quot;version&quot;:&quot;v4.19.0&quot;},{&quot;version&quot;:&quot;v4.18.0&quot;},{&quot;version&quot;:&quot;v4.17.0&quot;},{&quot;version&quot;:&quot;v4.16.2&quot;},{&quot;version&quot;:&quot;v4.16.1&quot;},{&quot;version&quot;:&quot;v4.16.0&quot;},{&quot;version&quot;:&quot;v4.15.0&quot;},{&quot;version&quot;:&quot;v4.14.1&quot;},{&quot;version&quot;:&quot;v4.13.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.5&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.4&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.10.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.10
.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.10.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.0.0&quot;},{&quot;version&quot;:&quot;doc-bu
ilder-html&quot;}],&quot;title&quot;:&quot;FSMT&quot;}" data-target="SideMenu"> <div class="z-2 w-full flex-none lg:block lg:h-screen lg:w-[270px] 2xl:w-[300px] false"><div class="shadow-alternate flex h-16 w-full items-center rounded-b-xl border-b bg-white text-lg leading-tight lg:hidden"> <div class="flex flex-1 cursor-pointer flex-col justify-center self-stretch pl-6"><p class="text-sm text-gray-400 first-letter:capitalize">Transformers documentation </p> <div class="flex items-center"><p class="font-semibold">FSMT</p> <svg class="text-xl false" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></div></div> <button class="hover:shadow-alternate group ml-auto mr-6 inline-flex flex-none cursor-pointer rounded-xl border p-2"><svg class="text-gray-500 group-hover:text-gray-700" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg></button></div> <div class="hidden h-32 flex-col justify-between border-r border-b bg-white bg-gradient-to-r p-4 lg:flex from-orange-50 to-white dark:from-gray-900 dark:to-gray-950"><div class="relative "> <button class=" " type="button"> <h1 class="flex items-center text-lg font-bold leading-tight first-letter:capitalize"><div class="mr-1.5 h-1.5 w-1.5 rounded-full bg-orange-500 flex-none"></div> Transformers <span><svg class="opacity-70 " xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></span></h1> </button> </div> <button class="shadow-alternate flex w-full items-center rounded-full border bg-white px-2 py-1 text-left text-sm text-gray-400 ring-indigo-200 hover:bg-indigo-50 hover:ring-2 dark:border-gray-700 dark:ring-yellow-600 dark:hover:bg-gray-900 dark:hover:text-yellow-500"><svg class="flex-none mr-1.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg> <div>Search documentation</div> </button> <div class="flex items-center"> <select class="form-input mr-1 !mt-0 !w-20 rounded !border border-gray-200 p-1 text-xs uppercase dark:!text-gray-400"><option value="0">main</option><option value="1" selected="">v4.34.0</option><option value="2">v4.33.3</option><option value="3">v4.32.1</option><option value="4">v4.31.0</option><option value="5">v4.30.0</option><option value="6">v4.29.1</option><option value="7">v4.28.1</option><option value="8">v4.27.2</option><option value="9">v4.26.1</option><option value="10">v4.25.1</option><option value="11">v4.24.0</option><option value="12">v4.23.1</option><option value="13">v4.22.2</option><option value="14">v4.21.3</option><option value="15">v4.20.1</option><option 
value="16">v4.19.4</option><option value="17">v4.18.0</option><option value="18">v4.17.0</option><option value="19">v4.16.2</option><option value="20">v4.15.0</option><option value="21">v4.14.1</option><option value="22">v4.13.0</option><option value="23">v4.12.5</option><option value="24">v4.11.3</option><option value="25">v4.10.1</option><option value="26">v4.9.2</option><option value="27">v4.8.2</option><option value="28">v4.7.0</option><option value="29">v4.6.0</option><option value="30">v4.5.1</option><option value="31">v4.4.2</option><option value="32">v4.3.3</option><option value="33">v4.2.2</option><option value="34">v4.1.1</option><option value="35">v4.0.1</option><option value="36">v3.5.1</option><option value="37">v3.4.0</option><option value="38">v3.3.1</option><option value="39">v3.2.0</option><option value="40">v3.1.0</option><option value="41">v3.0.2</option><option value="42">v2.11.0</option><option value="43">v2.10.0</option><option value="44">v2.9.1</option><option value="45">v2.8.0</option><option value="46">v2.7.0</option><option value="47">v2.6.0</option><option value="48">v2.5.1</option><option value="49">v2.4.1</option><option value="50">v2.3.0</option><option value="51">v2.2.2</option><option value="52">v2.1.1</option><option value="53">v2.0.0</option><option value="54">v1.2.0</option><option value="55">v1.1.0</option><option value="56">v1.0.0</option><option value="57">doc-builder-html</option></select> <select class="form-input mr-1 rounded border-gray-200 p-1 text-xs dark:!text-gray-400 !w-12 !mt-0 !border"><option value="de">DE</option><option value="en" selected="">EN</option><option value="es">ES</option><option value="fr">FR</option><option value="it">IT</option><option value="ko">KO</option><option value="pt">PT</option><option value="zh">ZH</option></select> <div class="relative inline-block"> <button class="rounded-full border border-gray-100 py-1 pl-2 pr-0.5 flex items-center text-sm text-gray-500 bg-white hover:bg-yellow-50 hover:border-yellow-200 dark:hover:bg-gray-800 dark:hover:border-gray-950 " type="button"> <svg class="mr-1.5 text-yellow-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24" fill="currentColor"><path d="M6.05 4.14l-.39-.39a.993.993 0 0 0-1.4 0l-.01.01a.984.984 0 0 0 0 1.4l.39.39c.39.39 1.01.39 1.4 0l.01-.01a.984.984 0 0 0 0-1.4zM3.01 10.5H1.99c-.55 0-.99.44-.99.99v.01c0 .55.44.99.99.99H3c.56.01 1-.43 1-.98v-.01c0-.56-.44-1-.99-1zm9-9.95H12c-.56 0-1 .44-1 .99v.96c0 .55.44.99.99.99H12c.56.01 1-.43 1-.98v-.97c0-.55-.44-.99-.99-.99zm7.74 3.21c-.39-.39-1.02-.39-1.41-.01l-.39.39a.984.984 0 0 0 0 1.4l.01.01c.39.39 1.02.39 1.4 0l.39-.39a.984.984 0 0 0 0-1.4zm-1.81 15.1l.39.39a.996.996 0 1 0 1.41-1.41l-.39-.39a.993.993 0 0 0-1.4 0c-.4.4-.4 1.02-.01 1.41zM20 11.49v.01c0 .55.44.99.99.99H22c.55 0 .99-.44.99-.99v-.01c0-.55-.44-.99-.99-.99h-1.01c-.55 0-.99.44-.99.99zM12 5.5c-3.31 0-6 2.69-6 6s2.69 6 6 6s6-2.69 6-6s-2.69-6-6-6zm-.01 16.95H12c.55 0 .99-.44.99-.99v-.96c0-.55-.44-.99-.99-.99h-.01c-.55 0-.99.44-.99.99v.96c0 .55.44.99.99.99zm-7.74-3.21c.39.39 1.02.39 1.41 0l.39-.39a.993.993 0 0 0 0-1.4l-.01-.01a.996.996 0 0 0-1.41 0l-.39.39c-.38.4-.38 1.02.01 1.41z"></path></svg> </button> </div> <a href="https://github.com/huggingface/transformers" class="group ml-auto text-xs text-gray-500 hover:text-gray-700 hover:underline dark:hover:text-gray-300"><svg class="inline-block text-gray-500 
group-hover:text-gray-700 dark:group-hover:text-gray-300 mr-1.5 -mt-1 w-4 h-4" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1.03em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 250"><path d="M128.001 0C57.317 0 0 57.307 0 128.001c0 56.554 36.676 104.535 87.535 121.46c6.397 1.185 8.746-2.777 8.746-6.158c0-3.052-.12-13.135-.174-23.83c-35.61 7.742-43.124-15.103-43.124-15.103c-5.823-14.795-14.213-18.73-14.213-18.73c-11.613-7.944.876-7.78.876-7.78c12.853.902 19.621 13.19 19.621 13.19c11.417 19.568 29.945 13.911 37.249 10.64c1.149-8.272 4.466-13.92 8.127-17.116c-28.431-3.236-58.318-14.212-58.318-63.258c0-13.975 5-25.394 13.188-34.358c-1.329-3.224-5.71-16.242 1.24-33.874c0 0 10.749-3.44 35.21 13.121c10.21-2.836 21.16-4.258 32.038-4.307c10.878.049 21.837 1.47 32.066 4.307c24.431-16.56 35.165-13.12 35.165-13.12c6.967 17.63 2.584 30.65 1.255 33.873c8.207 8.964 13.173 20.383 13.173 34.358c0 49.163-29.944 59.988-58.447 63.157c4.591 3.972 8.682 11.762 8.682 23.704c0 17.126-.148 30.91-.148 35.126c0 3.407 2.304 7.398 8.792 6.14C219.37 232.5 256 184.537 256 128.002C256 57.307 198.691 0 128.001 0zm-80.06 182.34c-.282.636-1.283.827-2.194.39c-.929-.417-1.45-1.284-1.15-1.922c.276-.655 1.279-.838 2.205-.399c.93.418 1.46 1.293 1.139 1.931zm6.296 5.618c-.61.566-1.804.303-2.614-.591c-.837-.892-.994-2.086-.375-2.66c.63-.566 1.787-.301 2.626.591c.838.903 1 2.088.363 2.66zm4.32 7.188c-.785.545-2.067.034-2.86-1.104c-.784-1.138-.784-2.503.017-3.05c.795-.547 2.058-.055 2.861 1.075c.782 1.157.782 2.522-.019 3.08zm7.304 8.325c-.701.774-2.196.566-3.29-.49c-1.119-1.032-1.43-2.496-.726-3.27c.71-.776 2.213-.558 3.315.49c1.11 1.03 1.45 2.505.701 3.27zm9.442 2.81c-.31 1.003-1.75 1.459-3.199 1.033c-1.448-.439-2.395-1.613-2.103-2.626c.301-1.01 1.747-1.484 3.207-1.028c1.446.436 2.396 1.602 2.095 2.622zm10.744 1.193c.036 1.055-1.193 1.93-2.715 1.95c-1.53.034-2.769-.82-2.786-1.86c0-1.065 1.202-1.932 2.733-1.958c1.522-.03 2.768.818 2.768 1.868zm10.555-.405c.182 1.03-.875 2.088-2.387 2.37c-1.485.271-2.861-.365-3.05-1.386c-.184-1.056.893-2.114 2.376-2.387c1.514-.263 2.868.356 3.061 1.403z" fill="currentColor"></path></svg> </a></div></div> <nav class="top-32 hidden lg:flex absolute bottom-0 left-0 w-full flex-col overflow-y-auto border-r px-4 pt-3 pb-16 text-[0.95rem] lg:w-[270px] 2xl:w-[300px]"> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Get started<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/index"><!-- HTML_TAG_START -->🤗 Transformers<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/quicktour"><!-- HTML_TAG_START -->Quick tour<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black 
dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/installation"><!-- HTML_TAG_START -->Installation<!-- HTML_TAG_END --> </a> </div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Tutorials<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pipeline_tutorial"><!-- HTML_TAG_START -->Run inference with pipelines<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/autoclass_tutorial"><!-- HTML_TAG_START -->Write portable code with AutoClass<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/preprocessing"><!-- HTML_TAG_START -->Preprocess data<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/training"><!-- HTML_TAG_START -->Fine-tune a pretrained model<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/run_scripts"><!-- HTML_TAG_START -->Train with a script<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/accelerate"><!-- HTML_TAG_START -->Set up distributed training with 🤗 Accelerate<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/peft"><!-- HTML_TAG_START -->Load and train adapters with 🤗 PEFT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/model_sharing"><!-- HTML_TAG_START -->Share your model<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/transformers_agents"><!-- HTML_TAG_START -->Agents<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/llm_tutorial"><!-- HTML_TAG_START -->Generation with LLMs<!-- HTML_TAG_END --> </a> </div> <div class="group flex cursor-pointer items-center 
pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Task Guides<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="flex flex-col"> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Natural Language Processing<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Audio<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Computer Vision<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Multimodal<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Generation<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Prompting<!-- HTML_TAG_END --></span> </span></span> </div></div> </div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Developer guides<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" 
href="/docs/transformers/v4.34.0/en/fast_tokenizers"><!-- HTML_TAG_START -->Use fast tokenizers from 🤗 Tokenizers<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/multilingual"><!-- HTML_TAG_START -->Run inference with multilingual models<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/create_a_model"><!-- HTML_TAG_START -->Use model-specific APIs<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/custom_models"><!-- HTML_TAG_START -->Share a custom model<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/chat_templating"><!-- HTML_TAG_START -->Templates for chat models<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/sagemaker"><!-- HTML_TAG_START -->Run training on Amazon SageMaker<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/serialization"><!-- HTML_TAG_START -->Export to ONNX<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tflite"><!-- HTML_TAG_START -->Export to TFLite<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/torchscript"><!-- HTML_TAG_START -->Export to TorchScript<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/benchmarks"><!-- HTML_TAG_START -->Benchmarks<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/notebooks"><!-- HTML_TAG_START -->Notebooks with examples<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/community"><!-- HTML_TAG_START -->Community resources<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/custom_tools"><!-- HTML_TAG_START -->Custom Tools and Prompts<!-- HTML_TAG_END --> </a><a 
data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/troubleshooting"><!-- HTML_TAG_START -->Troubleshoot<!-- HTML_TAG_END --> </a> </div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Performance and scalability<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/performance"><!-- HTML_TAG_START -->Overview<!-- HTML_TAG_END --> </a> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Efficient training techniques<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_gpu_one"><!-- HTML_TAG_START -->Methods and tools for efficient training on a single GPU<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_gpu_many"><!-- HTML_TAG_START -->Multiple GPUs and parallelism<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_cpu"><!-- HTML_TAG_START -->Efficient training on CPU<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_cpu_many"><!-- HTML_TAG_START -->Distributed CPU training<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_tpu"><!-- HTML_TAG_START -->Training on TPUs<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_tpu_tf"><!-- HTML_TAG_START -->Training on TPU with TensorFlow<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_special"><!-- HTML_TAG_START -->Training 
on Specialized Hardware<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_hardware"><!-- HTML_TAG_START -->Custom hardware for training<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/hpo_train"><!-- HTML_TAG_START -->Hyperparameter Search using Trainer API<!-- HTML_TAG_END --> </a> </div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Optimizing inference<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_cpu"><!-- HTML_TAG_START -->Inference on CPU<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_gpu_one"><!-- HTML_TAG_START -->Inference on one GPU<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_gpu_many"><!-- HTML_TAG_START -->Inference on many GPUs<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_special"><!-- HTML_TAG_START -->Inference on Specialized Hardware<!-- HTML_TAG_END --> </a> </div><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/big_models"><!-- HTML_TAG_START -->Instantiating a big model<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/debugging"><!-- HTML_TAG_START -->Troubleshooting<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tf_xla"><!-- HTML_TAG_START -->XLA Integration for TensorFlow Models<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/perf_torch_compile"><!-- HTML_TAG_START -->Optimize inference using `torch.compile()`<!-- HTML_TAG_END --> </a> </div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold 
uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Contribute<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/contributing"><!-- HTML_TAG_START -->How to contribute to transformers?<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/add_new_model"><!-- HTML_TAG_START -->How to add a model to 🤗 Transformers?<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/add_tensorflow_model"><!-- HTML_TAG_START -->How to convert a 🤗 Transformers model to TensorFlow?<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/add_new_pipeline"><!-- HTML_TAG_START -->How to add a pipeline to 🤗 Transformers?<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/testing"><!-- HTML_TAG_START -->Testing<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pr_checks"><!-- HTML_TAG_START -->Checks on a Pull Request<!-- HTML_TAG_END --> </a> </div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Conceptual guides<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/philosophy"><!-- HTML_TAG_START -->Philosophy<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/glossary"><!-- HTML_TAG_START -->Glossary<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/task_summary"><!-- HTML_TAG_START -->What 🤗 Transformers can do<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 
hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tasks_explained"><!-- HTML_TAG_START -->How 🤗 Transformers solve tasks<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/model_summary"><!-- HTML_TAG_START -->The Transformer model family<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tokenizer_summary"><!-- HTML_TAG_START -->Summary of the tokenizers<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/attention"><!-- HTML_TAG_START -->Attention mechanisms<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pad_truncation"><!-- HTML_TAG_START -->Padding and truncation<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/bertology"><!-- HTML_TAG_START -->BERTology<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/perplexity"><!-- HTML_TAG_START -->Perplexity of fixed-length models<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pipeline_webserver"><!-- HTML_TAG_START -->Pipelines for webserver inference<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/model_memory_anatomy"><!-- HTML_TAG_START -->Model training anatomy<!-- HTML_TAG_END --> </a> </div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->API<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="flex flex-col"> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Main Classes<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px 
hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/agent"><!-- HTML_TAG_START -->Agents and Tools<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/model_doc/auto"><!-- HTML_TAG_START -->Auto Classes<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/callback"><!-- HTML_TAG_START -->Callbacks<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/configuration"><!-- HTML_TAG_START -->Configuration<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/data_collator"><!-- HTML_TAG_START -->Data Collator<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/keras_callbacks"><!-- HTML_TAG_START -->Keras callbacks<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/logging"><!-- HTML_TAG_START -->Logging<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/model"><!-- HTML_TAG_START -->Models<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/text_generation"><!-- HTML_TAG_START -->Text Generation<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/onnx"><!-- HTML_TAG_START -->ONNX<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/optimizer_schedules"><!-- HTML_TAG_START -->Optimization<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/output"><!-- HTML_TAG_START -->Model outputs<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/pipelines"><!-- HTML_TAG_START 
-->Pipelines<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/processors"><!-- HTML_TAG_START -->Processors<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/quantization"><!-- HTML_TAG_START -->Quantization<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/tokenizer"><!-- HTML_TAG_START -->Tokenizer<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/trainer"><!-- HTML_TAG_START -->Trainer<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/deepspeed"><!-- HTML_TAG_START -->DeepSpeed Integration<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/feature_extractor"><!-- HTML_TAG_START -->Feature Extractor<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/image_processor"><!-- HTML_TAG_START -->Image Processor<!-- HTML_TAG_END --> </a> </div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="flex flex-col"> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Text models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/albert"><!-- HTML_TAG_START -->ALBERT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bart"><!-- HTML_TAG_START -->BART<!-- HTML_TAG_END --> </a><a 
data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/barthez"><!-- HTML_TAG_START -->BARThez<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bartpho"><!-- HTML_TAG_START -->BARTpho<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bert"><!-- HTML_TAG_START -->BERT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bert-generation"><!-- HTML_TAG_START -->BertGeneration<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bert-japanese"><!-- HTML_TAG_START -->BertJapanese<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bertweet"><!-- HTML_TAG_START -->Bertweet<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/big_bird"><!-- HTML_TAG_START -->BigBird<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bigbird_pegasus"><!-- HTML_TAG_START -->BigBirdPegasus<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/biogpt"><!-- HTML_TAG_START -->BioGpt<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/blenderbot"><!-- HTML_TAG_START -->Blenderbot<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/blenderbot-small"><!-- HTML_TAG_START -->Blenderbot Small<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bloom"><!-- HTML_TAG_START -->BLOOM<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" 
href="/docs/transformers/v4.34.0/en/model_doc/bort"><!-- HTML_TAG_START -->BORT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/byt5"><!-- HTML_TAG_START -->ByT5<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/camembert"><!-- HTML_TAG_START -->CamemBERT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/canine"><!-- HTML_TAG_START -->CANINE<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/codegen"><!-- HTML_TAG_START -->CodeGen<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/code_llama"><!-- HTML_TAG_START -->CodeLlama<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/convbert"><!-- HTML_TAG_START -->ConvBERT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/cpm"><!-- HTML_TAG_START -->CPM<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/cpmant"><!-- HTML_TAG_START -->CPMANT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/ctrl"><!-- HTML_TAG_START -->CTRL<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/deberta"><!-- HTML_TAG_START -->DeBERTa<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/deberta-v2"><!-- HTML_TAG_START -->DeBERTa-v2<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/dialogpt"><!-- HTML_TAG_START -->DialoGPT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black 
dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/distilbert"><!-- HTML_TAG_START -->DistilBERT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/dpr"><!-- HTML_TAG_START -->DPR<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/electra"><!-- HTML_TAG_START -->ELECTRA<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/encoder-decoder"><!-- HTML_TAG_START -->Encoder Decoder Models<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/ernie"><!-- HTML_TAG_START -->ERNIE<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/ernie_m"><!-- HTML_TAG_START -->ErnieM<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/esm"><!-- HTML_TAG_START -->ESM<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/falcon"><!-- HTML_TAG_START -->Falcon<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/flan-t5"><!-- HTML_TAG_START -->FLAN-T5<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/flan-ul2"><!-- HTML_TAG_START -->FLAN-UL2<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/flaubert"><!-- HTML_TAG_START -->FlauBERT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/fnet"><!-- HTML_TAG_START -->FNet<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="rounded-xl bg-gradient-to-br from-black to-gray-900 py-1 pr-2 pl-2 text-white first:mt-1 last:mb-4 dark:from-gray-800 dark:to-gray-900 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/fsmt"><!-- HTML_TAG_START -->FSMT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 
hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/funnel"><!-- HTML_TAG_START -->Funnel Transformer<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/openai-gpt"><!-- HTML_TAG_START -->GPT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gpt_neo"><!-- HTML_TAG_START -->GPT Neo<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gpt_neox"><!-- HTML_TAG_START -->GPT NeoX<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese"><!-- HTML_TAG_START -->GPT NeoX Japanese<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gptj"><!-- HTML_TAG_START -->GPT-J<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gpt2"><!-- HTML_TAG_START -->GPT2<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode"><!-- HTML_TAG_START -->GPTBigCode<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese"><!-- HTML_TAG_START -->GPTSAN Japanese<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gpt-sw3"><!-- HTML_TAG_START -->GPTSw3<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/herbert"><!-- HTML_TAG_START -->HerBERT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/ibert"><!-- HTML_TAG_START -->I-BERT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/jukebox"><!-- HTML_TAG_START -->Jukebox<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" 
class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/led"><!-- HTML_TAG_START -->LED<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/llama"><!-- HTML_TAG_START -->LLaMA<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/llama2"><!-- HTML_TAG_START -->Llama2<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/longformer"><!-- HTML_TAG_START -->Longformer<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/longt5"><!-- HTML_TAG_START -->LongT5<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/luke"><!-- HTML_TAG_START -->LUKE<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/m2m_100"><!-- HTML_TAG_START -->M2M100<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/marian"><!-- HTML_TAG_START -->MarianMT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/markuplm"><!-- HTML_TAG_START -->MarkupLM<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mbart"><!-- HTML_TAG_START -->MBart and MBart-50<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mega"><!-- HTML_TAG_START -->MEGA<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/megatron-bert"><!-- HTML_TAG_START -->MegatronBERT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2"><!-- HTML_TAG_START -->MegatronGPT2<!-- HTML_TAG_END --> 
</a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mistral"><!-- HTML_TAG_START -->Mistral<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mluke"><!-- HTML_TAG_START -->mLUKE<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mobilebert"><!-- HTML_TAG_START -->MobileBERT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mpnet"><!-- HTML_TAG_START -->MPNet<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mpt"><!-- HTML_TAG_START -->MPT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mra"><!-- HTML_TAG_START -->MRA<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mt5"><!-- HTML_TAG_START -->MT5<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mvp"><!-- HTML_TAG_START -->MVP<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/nezha"><!-- HTML_TAG_START -->NEZHA<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/nllb"><!-- HTML_TAG_START -->NLLB<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/nllb-moe"><!-- HTML_TAG_START -->NLLB-MoE<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/nystromformer"><!-- HTML_TAG_START -->Nyströmformer<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/open-llama"><!-- HTML_TAG_START -->Open-Llama<!-- HTML_TAG_END --> 
</a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/opt"><!-- HTML_TAG_START -->OPT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/pegasus"><!-- HTML_TAG_START -->Pegasus<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/pegasus_x"><!-- HTML_TAG_START -->PEGASUS-X<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/persimmon"><!-- HTML_TAG_START -->Persimmon<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/phobert"><!-- HTML_TAG_START -->PhoBERT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/plbart"><!-- HTML_TAG_START -->PLBart<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/prophetnet"><!-- HTML_TAG_START -->ProphetNet<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/qdqbert"><!-- HTML_TAG_START -->QDQBert<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/rag"><!-- HTML_TAG_START -->RAG<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/realm"><!-- HTML_TAG_START -->REALM<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/reformer"><!-- HTML_TAG_START -->Reformer<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/rembert"><!-- HTML_TAG_START -->RemBERT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/retribert"><!-- HTML_TAG_START 
-->RetriBERT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/roberta"><!-- HTML_TAG_START -->RoBERTa<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm"><!-- HTML_TAG_START -->RoBERTa-PreLayerNorm<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/roc_bert"><!-- HTML_TAG_START -->RoCBert<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/roformer"><!-- HTML_TAG_START -->RoFormer<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/rwkv"><!-- HTML_TAG_START -->RWKV<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/splinter"><!-- HTML_TAG_START -->Splinter<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/squeezebert"><!-- HTML_TAG_START -->SqueezeBERT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/switch_transformers"><!-- HTML_TAG_START -->SwitchTransformers<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/t5"><!-- HTML_TAG_START -->T5<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/t5v1.1"><!-- HTML_TAG_START -->T5v1.1<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/tapex"><!-- HTML_TAG_START -->TAPEX<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/transfo-xl"><!-- HTML_TAG_START -->Transformer XL<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" 
href="/docs/transformers/v4.34.0/en/model_doc/ul2"><!-- HTML_TAG_START -->UL2<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/umt5"><!-- HTML_TAG_START -->UMT5<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xmod"><!-- HTML_TAG_START -->X-MOD<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xglm"><!-- HTML_TAG_START -->XGLM<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm"><!-- HTML_TAG_START -->XLM<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet"><!-- HTML_TAG_START -->XLM-ProphetNet<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta"><!-- HTML_TAG_START -->XLM-RoBERTa<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl"><!-- HTML_TAG_START -->XLM-RoBERTa-XL<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm-v"><!-- HTML_TAG_START -->XLM-V<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlnet"><!-- HTML_TAG_START -->XLNet<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/yoso"><!-- HTML_TAG_START -->YOSO<!-- HTML_TAG_END --> </a> </div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Vision models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span 
class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Audio models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Multimodal models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Reinforcement learning models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Time series models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Graph models<!-- HTML_TAG_END --></span> </span></span> </div></div> </div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Internal Helpers<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/modeling_utils"><!-- HTML_TAG_START -->Custom Layers and Utilities<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/pipelines_utils"><!-- HTML_TAG_START -->Utilities for pipelines<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/tokenization_utils"><!-- HTML_TAG_START -->Utilities for Tokenizers<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/trainer_utils"><!-- HTML_TAG_START -->Utilities for Trainer<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 
first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/generation_utils"><!-- HTML_TAG_START -->Utilities for Generation<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/image_processing_utils"><!-- HTML_TAG_START -->Utilities for Image Processors<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/audio_utils"><!-- HTML_TAG_START -->Utilities for Audio processing<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/file_utils"><!-- HTML_TAG_START -->General Utilities<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/time_series_utils"><!-- HTML_TAG_START -->Utilities for Time Series<!-- HTML_TAG_END --> </a> </div> </div></nav></div></div></div> <div class="z-1 min-w-0 flex-1"> <div class="px-6 pt-6 md:px-12 md:pt-16 md:pb-16"><div class="max-w-4xl mx-auto mb-10"><div class="relative overflow-hidden rounded-xl bg-gradient-to-br from-orange-300/10 py-5 px-4 ring-1 ring-orange-100/70 md:px-6 md:py-8"><img alt="Hugging Face's logo" class="absolute -right-6 -bottom-6 w-28 -rotate-45 md:hidden" src="/front/assets/huggingface_logo-noborder.svg"> <div class="mb-2 text-2xl font-bold dark:text-gray-200 md:mb-0">Join the Hugging Face community</div> <p class="mb-4 text-lg text-gray-400 dark:text-gray-300 md:mb-8">and get access to the augmented documentation experience </p> <div class="mb-8 hidden space-y-4 md:block xl:flex xl:space-y-0 xl:space-x-6"><div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-indigo-100 to-indigo-100/20 dark:to-indigo-100"><svg class="text-indigo-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Collaborate on models, datasets and Spaces </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-orange-100 to-orange-100/20 dark:to-orange-50"><svg xmlns="http://www.w3.org/2000/svg" 
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" class="text-xl text-yellow-400" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M11 15H6l7-14v8h5l-7 14v-8z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Faster examples with accelerated inference </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-gray-500/10 to-gray-500/5"><svg class="text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M14.9804 3C14.9217 3.0002 14.8631 3.00555 14.8054 3.016C11.622 3.58252 8.76073 5.30669 6.77248 7.85653C4.78422 10.4064 3.80955 13.6016 4.03612 16.8271C4.26268 20.0525 5.67447 23.0801 7.99967 25.327C10.3249 27.5738 13.3991 28.8811 16.6304 28.997C16.7944 29.003 16.9584 28.997 17.1204 28.997C19.2193 28.9984 21.2877 28.4943 23.1507 27.5274C25.0137 26.5605 26.6164 25.1592 27.8234 23.442C27.9212 23.294 27.9783 23.1229 27.9889 22.9458C27.9995 22.7687 27.9633 22.592 27.884 22.4333C27.8046 22.2747 27.6848 22.1397 27.5367 22.0421C27.3887 21.9444 27.2175 21.8875 27.0404 21.877C25.0426 21.7017 23.112 21.0693 21.3976 20.0288C19.6832 18.9884 18.231 17.5676 17.1533 15.8764C16.0756 14.1852 15.4011 12.2688 15.1822 10.2754C14.9632 8.28193 15.2055 6.26484 15.8904 4.38C15.9486 4.22913 15.97 4.06652 15.9527 3.90572C15.9354 3.74492 15.8799 3.59059 15.7909 3.45557C15.7019 3.32055 15.5819 3.20877 15.4409 3.12952C15.2999 3.05028 15.142 3.00587 14.9804 3Z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Switch between documentation themes </div></div></div> <div class="flex items-center space-x-2.5"><a href="/join"><button class="rounded-lg bg-white bg-gradient-to-br from-gray-100/20 to-gray-200/60 py-1.5 px-5 font-semibold text-gray-700 shadow-sm ring-1 ring-gray-300/60 hover:to-gray-100/70 hover:ring-gray-300/30 active:shadow-inner">Sign Up</button></a> <p class="text-gray-500 dark:text-gray-300">to get started</p></div></div></div> <div class="prose-doc prose relative mx-auto max-w-4xl break-words"><!-- HTML_TAG_START --> <link href="/docs/transformers/v4.34.0/en/_app/immutable/assets/0.e3b0c442.css" rel="modulepreload"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/entry/start.c2db227a.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/scheduler.9bc65507.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/singletons.e3057404.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/index.3b203c72.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/paths.e7de6301.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/entry/app.879d9b87.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/index.78c82d43.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/nodes/0.242aaaff.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/each.e59479a4.js"> <link rel="modulepreload" 
href="/docs/transformers/v4.34.0/en/_app/immutable/nodes/129.b8f4431d.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/Tip.87d55b76.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/Docstring.4e7352e2.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/globals.7f7f1b26.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/IconCopyLink.bedaa44d.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/CodeBlock.73e038be.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/ExampleCodeBlock.872b014d.js"><!-- HEAD_svelte-1phssyn_START --><meta name="hf:doc:metadata" content="{&quot;local&quot;:&quot;fsmt&quot;,&quot;sections&quot;:[{&quot;local&quot;:&quot;overview&quot;,&quot;title&quot;:&quot;Overview&quot;},{&quot;local&quot;:&quot;implementation-notes&quot;,&quot;title&quot;:&quot;Implementation Notes&quot;},{&quot;local&quot;:&quot;transformers.FSMTConfig&quot;,&quot;title&quot;:&quot;FSMTConfig&quot;},{&quot;local&quot;:&quot;transformers.FSMTTokenizer&quot;,&quot;title&quot;:&quot;FSMTTokenizer&quot;},{&quot;local&quot;:&quot;transformers.FSMTModel&quot;,&quot;title&quot;:&quot;FSMTModel&quot;},{&quot;local&quot;:&quot;transformers.FSMTForConditionalGeneration&quot;,&quot;title&quot;:&quot;FSMTForConditionalGeneration&quot;}],&quot;title&quot;:&quot;FSMT&quot;}"><!-- HEAD_svelte-1phssyn_END --> <p></p> <h1 class="relative group"><a id="fsmt" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#fsmt"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1n7bmjz">FSMT</span></h1> <p data-svelte-h="svelte-3h40kw"><strong>DISCLAIMER:</strong> If you see something strange, file a <a href="https://github.com/huggingface/transformers/issues/new?assignees=&amp;labels=&amp;template=bug-report.md&amp;title" rel="nofollow">Github Issue</a> and assign @stas00.</p> <h2 class="relative group"><a id="overview" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#overview"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 
## Overview

FSMT (FairSeq MachineTranslation) models were introduced in [Facebook FAIR's WMT19 News Translation Task Submission](https://arxiv.org/abs/1907.06616) by Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, Sergey Edunov.

The abstract of the paper is the following:

_This paper describes Facebook FAIR's submission to the WMT19 shared news translation task. We participate in two language pairs and four language directions, English <-> German and English <-> Russian. Following our submission from last year, our baseline systems are large BPE-based transformer models trained with the Fairseq sequence modeling toolkit which rely on sampled back-translations. This year we experiment with different bitext data filtering schemes, as well as with adding filtered back-translated data. We also ensemble and fine-tune our models on domain-specific data, then decode using noisy channel model reranking. Our submissions are ranked first in all four directions of the human evaluation campaign. On En->De, our system significantly outperforms other systems as well as human translations. This system improves upon our WMT'18 submission by 4.5 BLEU points._

This model was contributed by [stas](https://huggingface.co/stas). The original code can be found [here](https://github.com/pytorch/fairseq/tree/master/examples/wmt19).

## Implementation Notes

- FSMT uses separate source and target vocabularies that are not combined into one, and it does not share embedding tokens either. Its tokenizer is very similar to [XLMTokenizer](/docs/transformers/v4.34.0/en/model_doc/xlm#transformers.XLMTokenizer) and the main model is derived from [BartModel](/docs/transformers/v4.34.0/en/model_doc/bart#transformers.BartModel). A minimal usage sketch follows this list.
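The snippet below is a minimal sketch of running translation with an FSMT checkpoint; it assumes the `facebook/wmt19-en-ru` checkpoint is available on the Hub and is illustrative rather than exhaustive:

```python
from transformers import FSMTForConditionalGeneration, FSMTTokenizer

mname = "facebook/wmt19-en-ru"  # assumed checkpoint name on the Hub
tokenizer = FSMTTokenizer.from_pretrained(mname)
model = FSMTForConditionalGeneration.from_pretrained(mname)

text = "Machine learning is great, isn't it?"
input_ids = tokenizer(text, return_tensors="pt").input_ids

# Beam search settings (e.g. num_beams = 5) come from the checkpoint's configuration
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # a Russian translation of the input
```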
## FSMTConfig

### class transformers.FSMTConfig
[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/fsmt/configuration_fsmt.py#L39)

( langs = ['en', 'de'], src_vocab_size = 42024, tgt_vocab_size = 42024, activation_function = 'relu', d_model = 1024, max_length = 200, max_position_embeddings = 1024, encoder_ffn_dim = 4096, encoder_layers = 12, encoder_attention_heads = 16, encoder_layerdrop = 0.0, decoder_ffn_dim = 4096, decoder_layers = 12, decoder_attention_heads = 16, decoder_layerdrop = 0.0, attention_dropout = 0.0, dropout = 0.1, activation_dropout = 0.0, init_std = 0.02, decoder_start_token_id = 2, is_encoder_decoder = True, scale_embedding = True, tie_word_embeddings = False, num_beams = 5, length_penalty = 1.0, early_stopping = False, use_cache = True, pad_token_id = 1, bos_token_id = 0, eos_token_id = 2, forced_eos_token_id = 2, **common_kwargs )
docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTConfig.langs" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTConfig.langs"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>langs</strong> (<code>List[str]</code>) — A list with source language and target_language (e.g., [‘en’, ‘ru’]).<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTConfig.src_vocab_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTConfig.src_vocab_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>src_vocab_size</strong> (<code>int</code>) — Vocabulary size of the encoder. 
Defines the number of different tokens that can be represented by the <code>inputs_ids</code> passed to the forward method in the encoder.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTConfig.tgt_vocab_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTConfig.tgt_vocab_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>tgt_vocab_size</strong> (<code>int</code>) — Vocabulary size of the decoder. Defines the number of different tokens that can be represented by the <code>inputs_ids</code> passed to the forward method in the decoder.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTConfig.d_model" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTConfig.d_model"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>d_model</strong> (<code>int</code>, <em>optional</em>, defaults to 1024) — Dimensionality of the layers and the pooler layer.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTConfig.encoder_layers" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTConfig.encoder_layers"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 
11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>encoder_layers</strong> (<code>int</code>, <em>optional</em>, defaults to 12) — Number of encoder layers.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTConfig.decoder_layers" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTConfig.decoder_layers"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>decoder_layers</strong> (<code>int</code>, <em>optional</em>, defaults to 12) — Number of decoder layers.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTConfig.encoder_attention_heads" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTConfig.encoder_attention_heads"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>encoder_attention_heads</strong> (<code>int</code>, <em>optional</em>, defaults to 16) — Number of attention heads for each attention layer in the Transformer encoder.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a 
id="transformers.FSMTConfig.decoder_attention_heads" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTConfig.decoder_attention_heads"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>decoder_attention_heads</strong> (<code>int</code>, <em>optional</em>, defaults to 16) — Number of attention heads for each attention layer in the Transformer decoder.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTConfig.decoder_ffn_dim" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTConfig.decoder_ffn_dim"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>decoder_ffn_dim</strong> (<code>int</code>, <em>optional</em>, defaults to 4096) — Dimensionality of the “intermediate” (often named feed-forward) layer in decoder.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTConfig.encoder_ffn_dim" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTConfig.encoder_ffn_dim"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 
0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>encoder_ffn_dim</strong> (<code>int</code>, <em>optional</em>, defaults to 4096) — Dimensionality of the “intermediate” (often named feed-forward) layer in decoder.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTConfig.activation_function" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTConfig.activation_function"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>activation_function</strong> (<code>str</code> or <code>Callable</code>, <em>optional</em>, defaults to <code>"relu"</code>) — The non-linear activation function (function or string) in the encoder and pooler. 
If string, <code>"gelu"</code>, <code>"relu"</code>, <code>"silu"</code> and <code>"gelu_new"</code> are supported.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTConfig.dropout" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTConfig.dropout"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>dropout</strong> (<code>float</code>, <em>optional</em>, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTConfig.attention_dropout" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTConfig.attention_dropout"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>attention_dropout</strong> (<code>float</code>, <em>optional</em>, defaults to 0.0) — The dropout ratio for the attention probabilities.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTConfig.activation_dropout" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTConfig.activation_dropout"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 
1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>activation_dropout</strong> (<code>float</code>, <em>optional</em>, defaults to 0.0) — The dropout ratio for activations inside the fully connected layer.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTConfig.max_position_embeddings" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTConfig.max_position_embeddings"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>max_position_embeddings</strong> (<code>int</code>, <em>optional</em>, defaults to 1024) — The maximum sequence length that this model might ever be used with. 
Typically set this to something large just in case (e.g., 512 or 1024 or 2048).<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTConfig.init_std" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTConfig.init_std"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>init_std</strong> (<code>float</code>, <em>optional</em>, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTConfig.scale_embedding" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTConfig.scale_embedding"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>scale_embedding</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Scale embeddings by diving by sqrt(d_model).<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTConfig.bos_token_id" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTConfig.bos_token_id"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 
0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>bos_token_id</strong> (<code>int</code>, <em>optional</em>, defaults to 0) — Beginning of stream token id.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTConfig.pad_token_id" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTConfig.pad_token_id"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>pad_token_id</strong> (<code>int</code>, <em>optional</em>, defaults to 1) — Padding token id.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTConfig.eos_token_id" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTConfig.eos_token_id"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>eos_token_id</strong> (<code>int</code>, <em>optional</em>, defaults to 2) — End of stream token id.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTConfig.decoder_start_token_id" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" 
href="#transformers.FSMTConfig.decoder_start_token_id"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>decoder_start_token_id</strong> (<code>int</code>, <em>optional</em>) — This model starts decoding with <code>eos_token_id</code><!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTConfig.encoder_layerdrop" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTConfig.encoder_layerdrop"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>encoder_layerdrop</strong> (<code>float</code>, <em>optional</em>, defaults to 0.0) — Google “layerdrop arxiv”, as its not explainable in one line.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTConfig.decoder_layerdrop" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTConfig.decoder_layerdrop"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- 
HTML_TAG_START --><strong>decoder_layerdrop</strong> (<code>float</code>, <em>optional</em>, defaults to 0.0) — Google “layerdrop arxiv”, as its not explainable in one line.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTConfig.is_encoder_decoder" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTConfig.is_encoder_decoder"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>is_encoder_decoder</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether this is an encoder/decoder model.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTConfig.tie_word_embeddings" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTConfig.tie_word_embeddings"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>tie_word_embeddings</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether to tie input and output embeddings.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTConfig.num_beams" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTConfig.num_beams"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 
11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>num_beams</strong> (<code>int</code>, <em>optional</em>, defaults to 5) — Number of beams for beam search that will be used by default in the <code>generate</code> method of the model. 1 means no beam search.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTConfig.length_penalty" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTConfig.length_penalty"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>length_penalty</strong> (<code>float</code>, <em>optional</em>, defaults to 1) — Exponential penalty to the length that is used with beam-based generation. It is applied as an exponent to the sequence length, which in turn is used to divide the score of the sequence. Since the score is the log likelihood of the sequence (i.e. 
negative), <code>length_penalty</code> &gt; 0.0 promotes longer sequences, while <code>length_penalty</code> &lt; 0.0 encourages shorter sequences.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTConfig.early_stopping" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTConfig.early_stopping"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>early_stopping</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Flag that will be used by default in the <code>generate</code> method of the model. Whether to stop the beam search when at least <code>num_beams</code> sentences are finished per batch or not.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTConfig.use_cache" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTConfig.use_cache"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>use_cache</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether or not the model should return the last key/values attentions (not used by all models).<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTConfig.forced_eos_token_id" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTConfig.forced_eos_token_id"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" 
aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>forced_eos_token_id</strong> (<code>int</code>, <em>optional</em>, defaults to 2) — The id of the token to force as the last generated token when <code>max_length</code> is reached. Usually set to <code>eos_token_id</code>.<!-- HTML_TAG_END --> </span></span> </li></ul> </div></div> <p data-svelte-h="svelte-4yg373">This is the configuration class to store the configuration of a <a href="/docs/transformers/v4.34.0/en/model_doc/fsmt#transformers.FSMTModel">FSMTModel</a>. It is used to instantiate a FSMT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the FSMT <a href="https://huggingface.co/facebook/wmt19-en-ru" rel="nofollow">facebook/wmt19-en-ru</a> architecture.</p> <p data-svelte-h="svelte-10kqkkl">Configuration objects inherit from <a href="/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig">PretrainedConfig</a> and can be used to control the model outputs. Read the documentation from <a href="/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig">PretrainedConfig</a> for more information.</p> <div class="relative group rounded-md"><a id="transformers.FSMTConfig.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTConfig.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-kvfsh7">Examples:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path 
d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> FSMTConfig, FSMTModel <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># Initializing a FSMT facebook/wmt19-en-ru style configuration</span> <span class="hljs-meta">&gt;&gt;&gt; </span>config = FSMTConfig() <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># Initializing a model (with random weights) from the configuration</span> <span class="hljs-meta">&gt;&gt;&gt; </span>model = FSMTModel(config) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># Accessing the model configuration</span> <span class="hljs-meta">&gt;&gt;&gt; </span>configuration = model.config<!-- HTML_TAG_END --></pre></div></div></div> <h2 class="relative group"><a id="transformers.FSMTTokenizer" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTTokenizer"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1gy5yl8">FSMTTokenizer</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.FSMTTokenizer"><!-- HTML_TAG_START --><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" 
fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">FSMTTokenizer</span></span></h3><!-- HTML_TAG_END --> <a id="transformers.FSMTTokenizer" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.FSMTTokenizer"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/fsmt/tokenization_fsmt.py#L135" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">langs<span class="opacity-60"> = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">src_vocab_file<span class="opacity-60"> = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">tgt_vocab_file<span class="opacity-60"> = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">merges_file<span class="opacity-60"> = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">do_lower_case<span class="opacity-60"> = False</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">unk_token<span class="opacity-60"> = '&lt;unk&gt;'</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">bos_token<span class="opacity-60"> = '&lt;s&gt;'</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">sep_token<span class="opacity-60"> = '&lt;/s&gt;'</span></span> 
</span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">pad_token<span class="opacity-60"> = '&lt;pad&gt;'</span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTTokenizer.langs" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTTokenizer.langs"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>langs</strong> (<code>List[str]</code>) — A list of two languages to translate from and to, for instance <code>["en", "ru"]</code>.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTTokenizer.src_vocab_file" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTTokenizer.src_vocab_file"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>src_vocab_file</strong> (<code>str</code>) — File containing the vocabulary for the source language.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTTokenizer.tgt_vocab_file" class="header-link block pr-0.5 text-lg 
- **tgt_vocab_file** (`str`) — File containing the vocabulary for the target language.
- **merges_file** (`str`) — File containing the merges.
- **do_lower_case** (`bool`, *optional*, defaults to `False`) — Whether or not to lowercase the input when tokenizing.
- **unk_token** (`str`, *optional*, defaults to `"<unk>"`) — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
- **bos_token** (`str`, *optional*, defaults to `"<s>"`) — The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.

  When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the `cls_token`.

- **sep_token** (`str`, *optional*, defaults to `"</s>"`) — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.
- **pad_token** (`str`, *optional*, defaults to `"<pad>"`) — The token used for padding, for example when batching sequences of different lengths.

Construct a FAIRSEQ Transformer tokenizer. Based on Byte-Pair Encoding. The tokenization process is the following:

- Moses preprocessing and tokenization.
- Normalizing all input text.
- The arguments `special_tokens` and the function `set_special_tokens` can be used to add additional symbols (like `"__classify__"`) to a vocabulary.
- The argument `langs` defines a pair of languages.

This tokenizer inherits from [PreTrainedTokenizer](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer) which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
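For orientation, here is a minimal sketch of the tokenizer in use. It assumes the [facebook/wmt19-en-ru](https://huggingface.co/facebook/wmt19-en-ru) checkpoint referenced above and an arbitrary example sentence; the exact token ids depend on that checkpoint's vocabulary:

```python
from transformers import FSMTTokenizer

# Load the tokenizer files (source/target vocabularies + merges) from a pretrained checkpoint.
tokenizer = FSMTTokenizer.from_pretrained("facebook/wmt19-en-ru")

# Moses preprocessing + BPE, then vocabulary lookup.
encoded = tokenizer("Machine learning is great, isn't it?")
print(encoded["input_ids"])

# Decode back to text, dropping the special tokens the tokenizer added.
print(tokenizer.decode(encoded["input_ids"], skip_special_tokens=True))
```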
href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/fsmt/tokenization_fsmt.py#L403" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_0<span class="opacity-60">: typing.List[int]</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_1<span class="opacity-60">: typing.Optional[typing.List[int]] = None</span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><!-- HTML_TAG_START --><span><code>List[int]</code></span><!-- HTML_TAG_END --></span></p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTTokenizer.build_inputs_with_special_tokens.token_ids_0" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTTokenizer.build_inputs_with_special_tokens.token_ids_0"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>token_ids_0</strong> (<code>List[int]</code>) — List of IDs to which the special tokens will be added.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTTokenizer.build_inputs_with_special_tokens.token_ids_1" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTTokenizer.build_inputs_with_special_tokens.token_ids_1"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 
11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>token_ids_1</strong> (<code>List[int]</code>, <em>optional</em>) — Optional second list of IDs for sequence pairs.<!-- HTML_TAG_END --> </span></span> </li></ul> <div id="transformers.FSMTTokenizer.build_inputs_with_special_tokens.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <!-- HTML_TAG_START --> <p><code>List[int]</code></p> <!-- HTML_TAG_END --> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"><!-- HTML_TAG_START --> </p><p>List of <a href="../glossary#input-ids">input IDs</a> with the appropriate special tokens.</p> <!-- HTML_TAG_END --><p></p> </div></div> <p data-svelte-h="svelte-ym5sov">Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. A FAIRSEQ Transformer sequence has the following format:</p> <ul data-svelte-h="svelte-1w73b42"><li>single sequence: <code>&lt;s&gt; X &lt;/s&gt;</code></li> <li>pair of sequences: <code>&lt;s&gt; A &lt;/s&gt; B &lt;/s&gt;</code></li></ul></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.FSMTTokenizer.get_special_tokens_mask"><!-- HTML_TAG_START --><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>get_special_tokens_mask</span></h4><!-- HTML_TAG_END --> <a id="transformers.FSMTTokenizer.get_special_tokens_mask" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.FSMTTokenizer.get_special_tokens_mask"><svg class="" xmlns="http://www.w3.org/2000/svg" 
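A short, hedged sketch of the method in practice; the sentence is arbitrary and the concrete ids depend on the loaded vocabulary, so only the structure is checked:

```python
from transformers import FSMTTokenizer

tokenizer = FSMTTokenizer.from_pretrained("facebook/wmt19-en-ru")

# Token ids without any special tokens added.
ids = tokenizer("Hello world", add_special_tokens=False)["input_ids"]

# Wrap them in the single-sequence format described above.
with_special = tokenizer.build_inputs_with_special_tokens(ids)

# The wrapped sequence is longer by exactly the number of added special tokens.
print(len(ids), len(with_special))
```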
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/fsmt/tokenization_fsmt.py#L429" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_0<span class="opacity-60">: typing.List[int]</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_1<span class="opacity-60">: typing.Optional[typing.List[int]] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">already_has_special_tokens<span class="opacity-60">: bool = False</span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><!-- HTML_TAG_START --><span><code>List[int]</code></span><!-- HTML_TAG_END --></span></p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTTokenizer.get_special_tokens_mask.token_ids_0" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTTokenizer.get_special_tokens_mask.token_ids_0"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" 
fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>token_ids_0</strong> (<code>List[int]</code>) — List of IDs.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTTokenizer.get_special_tokens_mask.token_ids_1" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTTokenizer.get_special_tokens_mask.token_ids_1"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>token_ids_1</strong> (<code>List[int]</code>, <em>optional</em>) — Optional second list of IDs for sequence pairs.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTTokenizer.get_special_tokens_mask.already_has_special_tokens" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTTokenizer.get_special_tokens_mask.already_has_special_tokens"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>already_has_special_tokens</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether or not the token list is already formatted with special tokens for the model.<!-- HTML_TAG_END --> </span></span> </li></ul> <div id="transformers.FSMTTokenizer.get_special_tokens_mask.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <!-- HTML_TAG_START --> <p><code>List[int]</code></p> <!-- HTML_TAG_END --> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"><!-- HTML_TAG_START --> </p><p>A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.</p> <!-- 
HTML_TAG_END --><p></p> </div></div> <p data-svelte-h="svelte-1f4f5kp">Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer <code>prepare_for_model</code> method.</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.FSMTTokenizer.create_token_type_ids_from_sequences"><!-- HTML_TAG_START --><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>create_token_type_ids_from_sequences</span></h4><!-- HTML_TAG_END --> <a id="transformers.FSMTTokenizer.create_token_type_ids_from_sequences" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.FSMTTokenizer.create_token_type_ids_from_sequences"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/fsmt/tokenization_fsmt.py#L457" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white 
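The same idea for the special tokens mask, again as a sketch with an arbitrary sentence; a 1 marks a special token position and a 0 marks an ordinary token:

```python
from transformers import FSMTTokenizer

tokenizer = FSMTTokenizer.from_pretrained("facebook/wmt19-en-ru")

ids = tokenizer("Hello world", add_special_tokens=False)["input_ids"]
with_special = tokenizer.build_inputs_with_special_tokens(ids)

# already_has_special_tokens=True tells the method not to add anything,
# only to locate the special tokens already present in the list.
mask = tokenizer.get_special_tokens_mask(with_special, already_has_special_tokens=True)
print(mask)
```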
dark:hover:bg-white dark:hover:text-black">token_ids_0<span class="opacity-60">: typing.List[int]</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_1<span class="opacity-60">: typing.Optional[typing.List[int]] = None</span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><!-- HTML_TAG_START --><span><code>List[int]</code></span><!-- HTML_TAG_END --></span></p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTTokenizer.create_token_type_ids_from_sequences.token_ids_0" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTTokenizer.create_token_type_ids_from_sequences.token_ids_0"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>token_ids_0</strong> (<code>List[int]</code>) — List of IDs.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTTokenizer.create_token_type_ids_from_sequences.token_ids_1" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTTokenizer.create_token_type_ids_from_sequences.token_ids_1"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>token_ids_1</strong> (<code>List[int]</code>, <em>optional</em>) — Optional second list of 
IDs for sequence pairs.<!-- HTML_TAG_END --> </span></span> </li></ul> <div id="transformers.FSMTTokenizer.create_token_type_ids_from_sequences.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <!-- HTML_TAG_START --> <p><code>List[int]</code></p> <!-- HTML_TAG_END --> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"><!-- HTML_TAG_START --> </p><p>List of <a href="../glossary#token-type-ids">token type IDs</a> according to the given sequence(s).</p> <!-- HTML_TAG_END --><p></p> </div></div> <p data-svelte-h="svelte-13qcmkg">Create a mask from the two sequences passed to be used in a sequence-pair classification task. A FAIRSEQ</p> <div class="relative group rounded-md"><a id="transformers.FSMTTokenizer.create_token_type_ids_from_sequences.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTTokenizer.create_token_type_ids_from_sequences.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-12v5j2d">Transformer sequence pair mask has the following format:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START -->0<span class="hljs-number"> 0 </span>0<span class="hljs-number"> 0 </span>0<span class="hljs-number"> 0 </span>0<span class="hljs-number"> 0 </span>0<span class="hljs-number"> 0 </span>0<span class="hljs-number"> 1 </span>1<span class="hljs-number"> 1 </span>1<span class="hljs-number"> 1 </span>1<span 
class="hljs-number"> 1 </span>1 1 | first sequence | second sequence |<!-- HTML_TAG_END --></pre></div></div> <p data-svelte-h="svelte-owoxgn">If <code>token_ids_1</code> is <code>None</code>, this method only returns the first portion of the mask (0s).</p> <p data-svelte-h="svelte-1fvbzf1">Creates a mask from the two sequences passed to be used in a sequence-pair classification task. An FAIRSEQ_TRANSFORMER sequence pair mask has the following format:</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.FSMTTokenizer.save_vocabulary"><!-- HTML_TAG_START --><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>save_vocabulary</span></h4><!-- HTML_TAG_END --> <a id="transformers.FSMTTokenizer.save_vocabulary" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.FSMTTokenizer.save_vocabulary"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/fsmt/tokenization_fsmt.py#L490" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span 
data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">save_directory<span class="opacity-60">: str</span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">filename_prefix<span class="opacity-60">: typing.Optional[str] = None</span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> </div></div></div></div> <h2 class="relative group"><a id="transformers.FSMTModel" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTModel"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-kauksc">FSMTModel</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.FSMTModel"><!-- HTML_TAG_START --><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">FSMTModel</span></span></h3><!-- HTML_TAG_END --> <a id="transformers.FSMTModel" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.FSMTModel"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 
## FSMTModel

### class transformers.FSMTModel

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/fsmt/modeling_fsmt.py#L1036)

( config: FSMTConfig )

Parameters:

- **config** ([FSMTConfig](/docs/transformers/v4.34.0/en/model_doc/fsmt#transformers.FSMTConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

The bare FSMT Model outputting raw hidden-states without any specific head on top.

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel).
Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
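As a concrete illustration of the `config` parameter note above, here is a minimal sketch contrasting initialization from a configuration (architecture only, random weights) with `from_pretrained()` (trained weights). It is not part of the original page, and the `facebook/wmt19-ru-en` checkpoint is reused from the example below.

```python
from transformers import FSMTConfig, FSMTModel

# Building the model from a configuration creates the architecture only;
# the weights are randomly initialized.
config = FSMTConfig.from_pretrained("facebook/wmt19-ru-en")
model_random = FSMTModel(config)

# from_pretrained() downloads and loads the trained weights instead.
model_pretrained = FSMTModel.from_pretrained("facebook/wmt19-ru-en")
```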
#### forward

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/fsmt/modeling_fsmt.py#L1063)

( input_ids: LongTensor, attention_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.BoolTensor] = None, head_mask: Optional[torch.Tensor] = None, decoder_head_mask: Optional[torch.Tensor] = None, cross_attn_head_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[Tuple[torch.FloatTensor]] = None, past_key_values: Optional[Tuple[torch.FloatTensor]] = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, inputs_embeds: Optional[torch.FloatTensor] = None, decoder_inputs_embeds: Optional[torch.FloatTensor] = None, return_dict: Optional[bool] = None ) → [transformers.modeling_outputs.Seq2SeqModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.Seq2SeqModelOutput) or `tuple(torch.FloatTensor)`

Parameters:

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using `FSMTTokenizer`. See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and `PreTrainedTokenizer.__call__()` for details. [What are input IDs?](../glossary#input-ids)
- **attention_mask** (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask)
- **decoder_input_ids** (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*) — Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and `PreTrainedTokenizer.__call__()` for details. [What are decoder input IDs?](../glossary#decoder-input-ids) FSMT uses the `eos_token_id` as the starting token for `decoder_input_ids` generation. If `past_key_values` is used, optionally only the last `decoder_input_ids` have to be input (see `past_key_values`).
- **decoder_attention_mask** (`torch.BoolTensor` of shape `(batch_size, target_sequence_length)`, *optional*) — Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. A causal mask will also be used by default.
- **head_mask** (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in `[0, 1]`: 1 indicates the head is **not masked**, 0 indicates the head is **masked**.
- **decoder_head_mask** (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*) — Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in `[0, 1]`: 1 indicates the head is **not masked**, 0 indicates the head is **masked**.
- **cross_attn_head_mask** (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*) — Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in `[0, 1]`: 1 indicates the head is **not masked**, 0 indicates the head is **masked**.
- **encoder_outputs** (`Tuple(torch.FloatTensor)`, *optional*) — Tuple consisting of (`last_hidden_state`, *optional*: `hidden_states`, *optional*: `attentions`). `last_hidden_state` of shape `(batch_size, sequence_length, hidden_size)` is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
- **past_key_values** (`Tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 4 tensors of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`) — Contains precomputed key and value hidden-states of the attention blocks. Can be used to speed up decoding. If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)`.
- **inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **decoder_inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, target_sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `decoder_input_ids` you can choose to directly pass an embedded representation. If `past_key_values` is used, optionally only the last `decoder_inputs_embeds` have to be input (see `past_key_values`). This is useful if you want more control over how to convert `decoder_input_ids` indices into associated vectors than the model's internal embedding lookup matrix. If `decoder_input_ids` and `decoder_inputs_embeds` are both unset, `decoder_inputs_embeds` takes the value of `inputs_embeds`.
- **use_cache** (`bool`, *optional*, defaults to `True`) — If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`).
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.

Returns:
A [transformers.modeling_outputs.Seq2SeqModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.Seq2SeqModelOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([FSMTConfig](/docs/transformers/v4.34.0/en/model_doc/fsmt#transformers.FSMTConfig)) and inputs.

- **last_hidden_state** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`) — Sequence of hidden-states at the output of the last layer of the decoder of the model. If `past_key_values` is used, only the last hidden-state of the sequences of shape `(batch_size, 1, hidden_size)` is output.
- **past_key_values** (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`) — Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)` and 2 additional tensors of shape `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`. Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
- **decoder_hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs.
- **decoder_attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
- **cross_attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the decoder's cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
- **encoder_last_hidden_state** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
- **encoder_hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs.
- **encoder_attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.

The [FSMTModel](/docs/transformers/v4.34.0/en/model_doc/fsmt#transformers.FSMTModel) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:
transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, FSMTModel <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"facebook/wmt19-ru-en"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = FSMTModel.from_pretrained(<span class="hljs-string">"facebook/wmt19-ru-en"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(<span class="hljs-string">"Hello, my dog is cute"</span>, return_tensors=<span class="hljs-string">"pt"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model(**inputs) <span class="hljs-meta">&gt;&gt;&gt; </span>last_hidden_states = outputs.last_hidden_state<!-- HTML_TAG_END --></pre></div></div></div></div> <h2 class="relative group"><a id="transformers.FSMTForConditionalGeneration" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTForConditionalGeneration"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-6x630u">FSMTForConditionalGeneration</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.FSMTForConditionalGeneration"><!-- HTML_TAG_START --><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 
## FSMTForConditionalGeneration

### class transformers.FSMTForConditionalGeneration

`( config: FSMTConfig )`

Parameters:

- **config** ([FSMTConfig](/docs/transformers/v4.34.0/en/model_doc/fsmt#transformers.FSMTConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

The FSMT Model with a language modeling head. Can be used for summarization.

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.).

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
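As the `config` parameter note above points out, building the model directly from a configuration gives randomly initialized weights, while `from_pretrained()` loads the trained weights. A brief sketch of that distinction (the checkpoint name is simply the one already used elsewhere on this page):

```python
>>> from transformers import FSMTConfig, FSMTForConditionalGeneration

>>> # Only the configuration is loaded here; the model weights are randomly initialized.
>>> config = FSMTConfig.from_pretrained("facebook/wmt19-ru-en")
>>> random_init_model = FSMTForConditionalGeneration(config)

>>> # from_pretrained() loads both the configuration and the trained weights.
>>> pretrained_model = FSMTForConditionalGeneration.from_pretrained("facebook/wmt19-ru-en")
```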
0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/fsmt/modeling_fsmt.py#L1189" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60">: LongTensor</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_input_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_attention_mask<span class="opacity-60">: typing.Optional[torch.BoolTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_head_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">cross_attn_head_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_outputs<span class="opacity-60">: typing.Optional[typing.Tuple[torch.FloatTensor]] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">past_key_values<span class="opacity-60">: typing.Optional[typing.Tuple[torch.FloatTensor]] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">inputs_embeds<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white 
dark:hover:text-black">decoder_inputs_embeds<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">labels<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">use_cache<span class="opacity-60">: typing.Optional[bool] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><!-- HTML_TAG_START --><span><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.Seq2SeqLMOutput">transformers.modeling_outputs.Seq2SeqLMOutput</a> or <code>tuple(torch.FloatTensor)</code></span><!-- HTML_TAG_END --></span></p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTForConditionalGeneration.forward.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTForConditionalGeneration.forward.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>input_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>) — Indices of input sequence tokens in the vocabulary.<p></p> <p>Indices can be obtained using <code>FSTMTokenizer</code>. 
See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p> <p><a href="../glossary#input-ids">What are input IDs?</a><!-- HTML_TAG_END --> </p></span></span></li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTForConditionalGeneration.forward.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTForConditionalGeneration.forward.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>attention_mask</strong> (<code>torch.Tensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a><!-- HTML_TAG_END --> </p></span></span></li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTForConditionalGeneration.forward.decoder_input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTForConditionalGeneration.forward.decoder_input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>decoder_input_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, target_sequence_length)</code>, <em>optional</em>) — Indices of decoder input sequence tokens in the vocabulary.<p></p> <p>Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p> <p><a href="../glossary#decoder-input-ids">What are decoder input IDs?</a></p> <p>FSMT uses the <code>eos_token_id</code> as the starting token for <code>decoder_input_ids</code> generation. 
If <code>past_key_values</code> is used, optionally only the last <code>decoder_input_ids</code> have to be input (see <code>past_key_values</code>).<!-- HTML_TAG_END --> </p></span></span></li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTForConditionalGeneration.forward.decoder_attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTForConditionalGeneration.forward.decoder_attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>decoder_attention_mask</strong> (<code>torch.BoolTensor</code> of shape <code>(batch_size, target_sequence_length)</code>, <em>optional</em>) — Default behavior: generate a tensor that ignores pad tokens in <code>decoder_input_ids</code>. Causal mask will also be used by default.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTForConditionalGeneration.forward.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTForConditionalGeneration.forward.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>head_mask</strong> (<code>torch.Tensor</code> of shape <code>(encoder_layers, encoder_attention_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the attention modules in the encoder. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul><!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTForConditionalGeneration.forward.decoder_head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTForConditionalGeneration.forward.decoder_head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>decoder_head_mask</strong> (<code>torch.Tensor</code> of shape <code>(decoder_layers, decoder_attention_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul><!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FSMTForConditionalGeneration.forward.cross_attn_head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FSMTForConditionalGeneration.forward.cross_attn_head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>cross_attn_head_mask</strong> (<code>torch.Tensor</code> of shape <code>(decoder_layers, decoder_attention_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the cross-attention modules in the decoder. 
- **encoder_outputs** (`Tuple(torch.FloatTensor)`, *optional*) — Tuple consists of (`last_hidden_state`, *optional*: `hidden_states`, *optional*: `attentions`). `last_hidden_state` of shape `(batch_size, sequence_length, hidden_size)` is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
- **past_key_values** (`Tuple(torch.FloatTensor)` of length `config.n_layers` with each tuple having 4 tensors of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`) — Contains precomputed key and value hidden-states of the attention blocks. Can be used to speed up decoding. If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)`.
- **inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **decoder_inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, target_sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `decoder_input_ids` you can choose to directly pass an embedded representation. If `past_key_values` is used, optionally only the last `decoder_inputs_embeds` have to be input (see `past_key_values`). This is useful if you want more control over how to convert `decoder_input_ids` indices into associated vectors than the model's internal embedding lookup matrix. If `decoder_input_ids` and `decoder_inputs_embeds` are both unset, `decoder_inputs_embeds` takes the value of `inputs_embeds`.
- **use_cache** (`bool`, *optional*, defaults to `True`) — If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`).
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>labels</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Labels for computing the masked language modeling loss. Indices should either be in <code>[0, ..., config.vocab_size]</code> or -100 (see <code>input_ids</code> docstring). Tokens with indices set to <code>-100</code> are ignored (masked), the loss is only computed for the tokens with labels in <code>[0, ..., config.vocab_size]</code>.<!-- HTML_TAG_END --> </span></span> </li></ul> <div id="transformers.FSMTForConditionalGeneration.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <!-- HTML_TAG_START --> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.Seq2SeqLMOutput">transformers.modeling_outputs.Seq2SeqLMOutput</a> or <code>tuple(torch.FloatTensor)</code></p> <!-- HTML_TAG_END --> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"><!-- HTML_TAG_START --> </p><p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.Seq2SeqLMOutput">transformers.modeling_outputs.Seq2SeqLMOutput</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/fsmt#transformers.FSMTConfig">FSMTConfig</a>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<code>torch.FloatTensor</code> of shape <code>(1,)</code>, <em>optional</em>, returned when <code>labels</code> is provided) — Language modeling loss.</p> </li> <li> <p><strong>logits</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, config.vocab_size)</code>) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).</p> </li> <li> <p><strong>past_key_values</strong> (<code>tuple(tuple(torch.FloatTensor))</code>, <em>optional</em>, returned when <code>use_cache=True</code> is passed or when <code>config.use_cache=True</code>) — Tuple of <code>tuple(torch.FloatTensor)</code> of length <code>config.n_layers</code>, with each tuple having 2 tensors of shape <code>(batch_size, num_heads, sequence_length, embed_size_per_head)</code>) and 2 additional tensors of shape <code>(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)</code>.</p> <p>Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see <code>past_key_values</code> input) to speed up sequential decoding.</p> </li> <li> <p><strong>decoder_hidden_states</strong> 
(<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>decoder_attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> <li> <p><strong>cross_attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.</p> </li> <li> <p><strong>encoder_last_hidden_state</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>, <em>optional</em>) — Sequence of hidden-states at the output of the last layer of the encoder of the model.</p> </li> <li> <p><strong>encoder_hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>encoder_attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> <!-- HTML_TAG_END --><p></p> </div></div> <p data-svelte-h="svelte-ba26ip">The <a href="/docs/transformers/v4.34.0/en/model_doc/fsmt#transformers.FSMTForConditionalGeneration">FSMTForConditionalGeneration</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance 
Translation example:

```python
>>> from transformers import AutoTokenizer, FSMTForConditionalGeneration

>>> mname = "facebook/wmt19-ru-en"
>>> model = FSMTForConditionalGeneration.from_pretrained(mname)
>>> tokenizer = AutoTokenizer.from_pretrained(mname)

>>> src_text = "Машинное обучение - это здорово, не так ли?"
>>> input_ids = tokenizer(src_text, return_tensors="pt").input_ids
>>> outputs = model.generate(input_ids, num_beams=5, num_return_sequences=3)
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
"Machine learning is great, isn't it?"
```
OPT
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/opt
# OPT ## Overview The OPT model was proposed in [Open Pre-trained Transformer Language Models](https://arxiv.org/pdf/2205.01068) by Meta AI. OPT is a series of open-sourced large causal language models which perform similarly to GPT-3. The abstract from the paper is the following: _Large language models, which are often trained for hundreds of thousands of compute days, have shown remarkable capabilities for zero- and few-shot learning. Given their computational cost, these models are difficult to replicate without significant capital. For the few that are available through APIs, no access is granted to the full model weights, making them difficult to study. We present Open Pre-trained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M to 175B parameters, which we aim to fully and responsibly share with interested researchers. We show that OPT-175B is comparable to GPT-3, while requiring only 1/7th the carbon footprint to develop. We are also releasing our logbook detailing the infrastructure challenges we faced, along with code for experimenting with all of the released models._ Tips: - OPT has the same architecture as `BartDecoder`. - Contrary to GPT-2, OPT adds the EOS token `</s>` to the beginning of every prompt. This model was contributed by [Arthur Zucker](https://huggingface.co/ArthurZ), [Younes Belkada](https://huggingface.co/ybelkada), and [Patrick Von Platen](https://huggingface.co/patrickvonplaten). The original code can be found [here](https://github.com/facebookresearch/metaseq). ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with OPT. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it. The resource should ideally demonstrate something new instead of duplicating an existing resource. - A notebook on [fine-tuning OPT with PEFT, bitsandbytes, and Transformers](https://colab.research.google.com/drive/1jCkpikz0J2o20FBQmYmAGdiKmJGOMo-o?usp=sharing). 🌎 - A blog post on [decoding strategies with OPT](https://huggingface.co/blog/introducing-csearch#62-example-two---opt). - [Causal language modeling](https://huggingface.co/course/en/chapter7/6?fw=pt#training-a-causal-language-model-from-scratch) chapter of the 🤗 Hugging Face Course. - [OPTForCausalLM](/docs/transformers/v4.34.0/en/model_doc/opt#transformers.OPTForCausalLM) is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#gpt-2gpt-and-causal-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb). - [TFOPTForCausalLM](/docs/transformers/v4.34.0/en/model_doc/opt#transformers.TFOPTForCausalLM) is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_clmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb). - [FlaxOPTForCausalLM](/docs/transformers/v4.34.0/en/model_doc/opt#transformers.FlaxOPTForCausalLM) is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#causal-language-modeling).
- [Text classification task guide](sequence_classification.md) - [OPTForSequenceClassification](/docs/transformers/v4.34.0/en/model_doc/opt#transformers.OPTForSequenceClassification) is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb). - [OPTForQuestionAnswering](/docs/transformers/v4.34.0/en/model_doc/opt#transformers.OPTForQuestionAnswering) is supported by this [question answering example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb). - [Question answering](https://huggingface.co/course/chapter7/7?fw=pt) chapter of the 🤗 Hugging Face Course. ⚡️ Inference - A blog post on [How 🤗 Accelerate runs very large models thanks to PyTorch](https://huggingface.co/blog/accelerate-large-models) with OPT. ## OPTConfig ### class transformers.OPTConfig [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/opt/configuration_opt.py#L32) ( vocab\_size = 50272hidden\_size = 768num\_hidden\_layers = 12ffn\_dim = 3072max\_position\_embeddings = 2048do\_layer\_norm\_before = True\_remove\_final\_layer\_norm = Falseword\_embed\_proj\_dim = Nonedropout = 0.1attention\_dropout = 0.0num\_attention\_heads = 12activation\_function = 'relu'layerdrop = 0.0init\_std = 0.02use\_cache = Truepad\_token\_id = 1bos\_token\_id = 2eos\_token\_id = 2enable\_bias = Truelayer\_norm\_elementwise\_affine = True\*\*kwargs ) This is the configuration class to store the configuration of an [OPTModel](/docs/transformers/v4.34.0/en/model_doc/opt#transformers.OPTModel). It is used to instantiate an OPT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the OPT [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) architecture. Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information. Example: ``` >>> from transformers import OPTConfig, OPTModel >>> >>> configuration = OPTConfig() >>> >>> model = OPTModel(configuration) >>> >>> configuration = model.config ``` ## OPTModel ### class transformers.OPTModel [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/opt/modeling_opt.py#L752) ( config: OPTConfig ) Parameters - **config** ([OPTConfig](/docs/transformers/v4.34.0/en/model_doc/opt#transformers.OPTConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. The bare OPT Model outputting raw hidden-states without any specific head on top. This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel).
Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/opt/modeling_opt.py#L768) ( input\_ids: LongTensor = Noneattention\_mask: typing.Optional\[torch.Tensor\] = Nonehead\_mask: typing.Optional\[torch.Tensor\] = Nonepast\_key\_values: typing.Optional\[typing.List\[torch.FloatTensor\]\] = Noneinputs\_embeds: typing.Optional\[torch.FloatTensor\] = Noneuse\_cache: typing.Optional\[bool\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.BaseModelOutputWithPast](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutputWithPast) or `tuple(torch.FloatTensor)` The [OPTModel](/docs/transformers/v4.34.0/en/model_doc/opt#transformers.OPTModel) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: ``` >>> from transformers import AutoTokenizer, OPTModel >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m") >>> model = OPTModel.from_pretrained("facebook/opt-350m") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state ``` ## OPTForCausalLM ### class transformers.OPTForCausalLM [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/opt/modeling_opt.py#L818) ( config ) #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/opt/modeling_opt.py#L849) ( input\_ids: LongTensor = Noneattention\_mask: typing.Optional\[torch.Tensor\] = Nonehead\_mask: typing.Optional\[torch.Tensor\] = Nonepast\_key\_values: typing.Optional\[typing.List\[torch.FloatTensor\]\] = Noneinputs\_embeds: typing.Optional\[torch.FloatTensor\] = Nonelabels: typing.Optional\[torch.LongTensor\] = Noneuse\_cache: typing.Optional\[bool\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.CausalLMOutputWithPast](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.CausalLMOutputWithPast) or `tuple(torch.FloatTensor)` Example: ``` >>> from transformers import AutoTokenizer, OPTForCausalLM >>> model = OPTForCausalLM.from_pretrained("facebook/opt-350m") >>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m") >>> prompt = "Hey, are you conscious? Can you talk to me?" >>> inputs = tokenizer(prompt, return_tensors="pt") >>> >>> generate_ids = model.generate(inputs.input_ids, max_length=30) >>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0] "Hey, are you conscious? 
Can you talk to me?\nI'm not conscious. I'm just a little bit of a weirdo." ``` ## TFOPTModel ### class transformers.TFOPTModel [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/opt/modeling_tf_opt.py#L766) ( \*args\*\*kwargs ) Parameters - **config** ([OPTConfig](/docs/transformers/v4.34.0/en/model_doc/opt#transformers.OPTConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel.from_pretrained) method to load the model weights. The bare TF OPT Model outputting raw hidden-states without any specific head on top. This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in `transformers` accept two formats as input: - having all inputs as keyword arguments (like PyTorch models), or - having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like `model.fit()` things should “just work” for you - just pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: - a single Tensor with `input_ids` only and nothing else: `model(input_ids)` - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])` - a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_ids": input_ids, "token_type_ids": token_type_ids})` Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! 
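For illustration, here is a minimal sketch (not one of the official examples) of the three calling conventions described above, applied to `TFOPTModel`; all three calls are equivalent.

```
>>> from transformers import AutoTokenizer, TFOPTModel

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
>>> model = TFOPTModel.from_pretrained("facebook/opt-350m")
>>> encoded = tokenizer("Hello, my dog is cute", return_tensors="tf")

>>> # 1. all inputs passed as keyword arguments (PyTorch-style)
>>> outputs = model(input_ids=encoded["input_ids"], attention_mask=encoded["attention_mask"])

>>> # 2. a list in the first positional argument, in the order given in the docstring
>>> outputs = model([encoded["input_ids"], encoded["attention_mask"]])

>>> # 3. a dictionary keyed by input names in the first positional argument
>>> outputs = model({"input_ids": encoded["input_ids"], "attention_mask": encoded["attention_mask"]})
```

The dictionary form is usually the most convenient when feeding data through Keras utilities such as `tf.data`, since column names map directly onto input names.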
#### call [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/opt/modeling_tf_opt.py#L780) ( input\_ids: TFModelInputType | None = Noneattention\_mask: np.ndarray | tf.Tensor | None = Nonehead\_mask: np.ndarray | tf.Tensor | None = Nonepast\_key\_values: Optional\[Tuple\[Tuple\[Union\[np.ndarray, tf.Tensor\]\]\]\] = Noneinputs\_embeds: np.ndarray | tf.Tensor | None = Noneuse\_cache: Optional\[bool\] = Noneoutput\_attentions: Optional\[bool\] = Noneoutput\_hidden\_states: Optional\[bool\] = Nonereturn\_dict: Optional\[bool\] = Nonetraining: Optional\[bool\] = False\*\*kwargs ) → [transformers.modeling\_tf\_outputs.TFBaseModelOutputWithPast](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFBaseModelOutputWithPast) or `tuple(tf.Tensor)` The [TFOPTModel](/docs/transformers/v4.34.0/en/model_doc/opt#transformers.TFOPTModel) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: ``` >>> from transformers import AutoTokenizer, TFOPTModel >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m") >>> model = TFOPTModel.from_pretrained("facebook/opt-350m") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf") >>> outputs = model(inputs) >>> last_hidden_states = outputs.last_hidden_state ``` ## TFOPTForCausalLM ### class transformers.TFOPTForCausalLM [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/opt/modeling_tf_opt.py#L852) ( \*args\*\*kwargs ) Parameters - **config** ([OPTConfig](/docs/transformers/v4.34.0/en/model_doc/opt#transformers.OPTConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel.from_pretrained) method to load the model weights. The OPT Model transformer with a language modeling head on top. This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in `transformers` accept two formats as input: - having all inputs as keyword arguments (like PyTorch models), or - having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like `model.fit()` things should “just work” for you - just pass your inputs and labels in any format that `model.fit()` supports! 
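As a rough sketch of the `model.fit()` workflow mentioned above (assuming the model's built-in loss is used when a `labels` key is present, and using the input ids themselves as labels for causal language modeling):

```
>>> import tensorflow as tf
>>> from transformers import AutoTokenizer, TFOPTForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
>>> model = TFOPTForCausalLM.from_pretrained("facebook/opt-350m")

>>> texts = ["Hello, my dog is cute", "The quick brown fox jumps over the lazy dog"]
>>> enc = tokenizer(texts, padding=True, return_tensors="np")

>>> # for causal LM training the labels are the input ids (the shift happens inside the model);
>>> # in practice you would also set label ids at padding positions to -100 so the loss ignores them
>>> features = {"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"], "labels": enc["input_ids"]}
>>> dataset = tf.data.Dataset.from_tensor_slices(features).batch(2)

>>> # no explicit loss is passed: the model's internal loss computation is used
>>> model.compile(optimizer=tf.keras.optimizers.Adam(3e-5))
>>> model.fit(dataset, epochs=1)
```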
If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: - a single Tensor with `input_ids` only and nothing else: `model(input_ids)` - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])` - a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_ids": input_ids, "token_type_ids": token_type_ids})` Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! #### call [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/opt/modeling_tf_opt.py#L877) ( input\_ids: TFModelInputType | None = Nonepast\_key\_values: Optional\[Tuple\[Tuple\[Union\[np.ndarray, tf.Tensor\]\]\]\] = Noneattention\_mask: np.ndarray | tf.Tensor | None = Noneposition\_ids: np.ndarray | tf.Tensor | None = Nonehead\_mask: np.ndarray | tf.Tensor | None = Noneinputs\_embeds: np.ndarray | tf.Tensor | None = Nonelabels: np.ndarray | tf.Tensor | None = Noneuse\_cache: Optional\[bool\] = Noneoutput\_attentions: Optional\[bool\] = Noneoutput\_hidden\_states: Optional\[bool\] = Nonereturn\_dict: Optional\[bool\] = Nonetraining: Optional\[bool\] = False\*\*kwargs ) → [transformers.modeling\_tf\_outputs.TFCausalLMOutputWithPast](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFCausalLMOutputWithPast) or `tuple(tf.Tensor)` Example: ``` >>> from transformers import AutoTokenizer, TFOPTForCausalLM >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m") >>> model = TFOPTForCausalLM.from_pretrained("facebook/opt-350m") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf") >>> outputs = model(inputs) >>> logits = outputs.logits ``` ## OPTForSequenceClassification ### class transformers.OPTForSequenceClassification [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/opt/modeling_opt.py#L1027) ( config: OPTConfig ) Parameters - **config** ([OPTConfig](/docs/transformers/v4.34.0/en/model_doc/opt#transformers.OPTConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. The OPT Model transformer with a sequence classification head on top (linear layer). [OPTForSequenceClassification](/docs/transformers/v4.34.0/en/model_doc/opt#transformers.OPTForSequenceClassification) uses the last token in order to do the classification, as other causal models (e.g. GPT-2) do. Since it does classification on the last token, it needs to know the position of the last token. If a `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If no `pad_token_id` is defined, it simply takes the last value in each row of the batch.
Since it cannot guess the padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in each row of the batch). This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/opt/modeling_opt.py#L1037) ( input\_ids: typing.Optional\[torch.LongTensor\] = Noneattention\_mask: typing.Optional\[torch.FloatTensor\] = Nonehead\_mask: typing.Optional\[torch.FloatTensor\] = Nonepast\_key\_values: typing.Optional\[typing.Tuple\[typing.Tuple\[torch.Tensor\]\]\] = Noneinputs\_embeds: typing.Optional\[torch.FloatTensor\] = Nonelabels: typing.Optional\[torch.LongTensor\] = Noneuse\_cache: typing.Optional\[bool\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = None ) → `transformers.modeling_outputs.SequenceClassifierOutputWithPast` or `tuple(torch.FloatTensor)` The [OPTForSequenceClassification](/docs/transformers/v4.34.0/en/model_doc/opt#transformers.OPTForSequenceClassification) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example of single-label classification: ``` >>> import torch >>> from transformers import AutoTokenizer, OPTForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("ArthurZ/opt-350m-dummy-sc") >>> model = OPTForSequenceClassification.from_pretrained("ArthurZ/opt-350m-dummy-sc") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_class_id = logits.argmax().item() >>> model.config.id2label[predicted_class_id] 'LABEL_0' >>> >>> num_labels = len(model.config.id2label) >>> model = OPTForSequenceClassification.from_pretrained("ArthurZ/opt-350m-dummy-sc", num_labels=num_labels) >>> labels = torch.tensor([1]) >>> loss = model(**inputs, labels=labels).loss >>> round(loss.item(), 2) 1.71 ``` Example of multi-label classification: ``` >>> import torch >>> from transformers import AutoTokenizer, OPTForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("ArthurZ/opt-350m-dummy-sc") >>> model = OPTForSequenceClassification.from_pretrained("ArthurZ/opt-350m-dummy-sc", problem_type="multi_label_classification") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5] >>> >>> num_labels = len(model.config.id2label) >>> model = OPTForSequenceClassification.from_pretrained( ... "ArthurZ/opt-350m-dummy-sc", num_labels=num_labels, problem_type="multi_label_classification" ... ) >>> labels = torch.sum( ... 
torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1 ... ).to(torch.float) >>> loss = model(**inputs, labels=labels).loss ``` ## OPTForQuestionAnswering ### class transformers.OPTForQuestionAnswering [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/opt/modeling_opt.py#L1149) ( config: OPTConfig ) Parameters - **config** ([OPTConfig](/docs/transformers/v4.34.0/en/model_doc/opt#transformers.OPTConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. The OPT Model transformer with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute `span start logits` and `span end logits`). This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/opt/modeling_opt.py#L1158) ( input\_ids: typing.Optional\[torch.LongTensor\] = Noneattention\_mask: typing.Optional\[torch.FloatTensor\] = Nonehead\_mask: typing.Optional\[torch.FloatTensor\] = Nonepast\_key\_values: typing.Optional\[typing.Tuple\[typing.Tuple\[torch.Tensor\]\]\] = Noneinputs\_embeds: typing.Optional\[torch.FloatTensor\] = Nonestart\_positions: typing.Optional\[torch.LongTensor\] = Noneend\_positions: typing.Optional\[torch.LongTensor\] = Noneuse\_cache: typing.Optional\[bool\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.QuestionAnsweringModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.QuestionAnsweringModelOutput) or `tuple(torch.FloatTensor)` The [OPTForQuestionAnswering](/docs/transformers/v4.34.0/en/model_doc/opt#transformers.OPTForQuestionAnswering) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: ``` >>> from transformers import AutoTokenizer, OPTForQuestionAnswering >>> import torch >>> torch.manual_seed(4) >>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m") >>> >>> >>> model = OPTForQuestionAnswering.from_pretrained("facebook/opt-350m") >>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" >>> inputs = tokenizer(question, text, return_tensors="pt") >>> with torch.no_grad(): ... 
outputs = model(**inputs) >>> answer_start_index = outputs.start_logits.argmax() >>> answer_end_index = outputs.end_logits.argmax() >>> answer_offset = len(tokenizer(question)[0]) >>> predict_answer_tokens = inputs.input_ids[ ... 0, answer_offset + answer_start_index : answer_offset + answer_end_index + 1 ... ] >>> predicted = tokenizer.decode(predict_answer_tokens) >>> predicted ' a nice puppet' ``` ## FlaxOPTModel ### class transformers.FlaxOPTModel [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/opt/modeling_flax_opt.py#L690) ( config: OPTConfiginput\_shape: typing.Tuple\[int\] = (1, 1)seed: int = 0dtype: dtype = <class 'jax.numpy.float32'>\_do\_init: bool = True\*\*kwargs ) #### \_\_call\_\_ [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/opt/modeling_flax_opt.py#L583) ( input\_ids: Arrayattention\_mask: typing.Optional\[jax.Array\] = Noneposition\_ids: typing.Optional\[jax.Array\] = Noneparams: dict = Nonepast\_key\_values: dict = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = Nonedropout\_rng: PRNGKey = Nonedeterministic: bool = True ) → [transformers.modeling\_flax\_outputs.FlaxBaseModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxBaseModelOutput) or `tuple(torch.FloatTensor)` Example: ``` >>> from transformers import AutoTokenizer, FlaxOPTModel >>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m") >>> model = FlaxOPTModel.from_pretrained("facebook/opt-350m") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="jax") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state ``` ## FlaxOPTForCausalLM ### class transformers.FlaxOPTForCausalLM [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/opt/modeling_flax_opt.py#L763) ( config: OPTConfiginput\_shape: typing.Tuple\[int\] = (1, 1)seed: int = 0dtype: dtype = <class 'jax.numpy.float32'>\_do\_init: bool = True\*\*kwargs ) Parameters - **config** ([OPTConfig](/docs/transformers/v4.34.0/en/model_doc/opt#transformers.OPTConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained) method to load the model weights. - **dtype** (`jax.numpy.dtype`, _optional_, defaults to `jax.numpy.float32`) — The data type of the computation. Can be one of `jax.numpy.float32`, `jax.numpy.float16` (on GPUs) and `jax.numpy.bfloat16` (on TPUs). This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given `dtype`. **Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.** If you wish to change the dtype of the model parameters, see [to\_fp16()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_fp16) and [to\_bf16()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_bf16). OPT Model with a language modeling head on top (linear layer with weights tied to the input embeddings) e.g. for autoregressive tasks. 
This model inherits from [FlaxPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a Flax Linen [flax.nn.Module](https://flax.readthedocs.io/en/latest/_autosummary/flax.nn.module.html) subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matter related to general usage and behavior. Finally, this model supports inherent JAX features such as: - [Just-In-Time (JIT) compilation](https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit) - [Automatic Differentiation](https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation) - [Vectorization](https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap) - [Parallelization](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap) #### \_\_call\_\_ [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/opt/modeling_flax_opt.py#L583) ( input\_ids: Arrayattention\_mask: typing.Optional\[jax.Array\] = Noneposition\_ids: typing.Optional\[jax.Array\] = Noneparams: dict = Nonepast\_key\_values: dict = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = Nonedropout\_rng: PRNGKey = Nonedeterministic: bool = True ) → [transformers.modeling\_flax\_outputs.FlaxBaseModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxBaseModelOutput) or `tuple(torch.FloatTensor)` Example: ``` >>> from transformers import AutoTokenizer, FlaxOPTForCausalLM >>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m") >>> model = FlaxOPTForCausalLM.from_pretrained("facebook/opt-350m") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="np") >>> outputs = model(**inputs) >>> >>> next_token_logits = outputs.logits[:, -1] ```
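Building on the example above, here is a minimal sketch of a single greedy decoding step using `next_token_logits` (in practice, `generate()` handles this loop and the various decoding strategies for you):

```
>>> import jax.numpy as jnp

>>> # pick the highest-scoring token at the last position and decode it
>>> next_token_id = jnp.argmax(next_token_logits, axis=-1)
>>> tokenizer.decode(int(next_token_id[0]))

>>> # append it to the prompt to prepare the input for the next step
>>> input_ids = jnp.concatenate([inputs["input_ids"], next_token_id[:, None]], axis=-1)
```

As described in the `dtype` parameter above, the computation can also be run in half precision by loading the model with `dtype=jax.numpy.float16` (on GPUs) or `jax.numpy.bfloat16` (on TPUs), with `to_fp16()`/`to_bf16()` used if you also want to cast the model parameters themselves.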
Classes&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Agents and Tools&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/agent&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/agent&quot;},{&quot;title&quot;:&quot;Auto Classes&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/auto&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/auto&quot;},{&quot;title&quot;:&quot;Callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/callback&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/callback&quot;},{&quot;title&quot;:&quot;Configuration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/configuration&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/configuration&quot;},{&quot;title&quot;:&quot;Data Collator&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/data_collator&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/data_collator&quot;},{&quot;title&quot;:&quot;Keras callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/keras_callbacks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/keras_callbacks&quot;},{&quot;title&quot;:&quot;Logging&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/logging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/logging&quot;},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/model&quot;},{&quot;title&quot;:&quot;Text Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/text_generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/text_generation&quot;},{&quot;title&quot;:&quot;ONNX&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/onnx&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/onnx&quot;},{&quot;title&quot;:&quot;Optimization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/optimizer_schedules&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/optimizer_schedules&quot;},{&quot;title&quot;:&quot;Model outputs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/output&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/output&quot;},{&quot;title&quot;:&quot;Pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/pipelines&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/pipelines&quot;},{&quot;title&quot;:&quot;Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/processors&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/processors&quot;},{&quot;title&quot;:&quot;Quantization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/quantization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/quantization&quot;},{&quot;title&quot;:&quot;Tokenizer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/tokenizer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/tokenizer&quot;},{&quot;title&quot;:&quot;Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/trainer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/trainer&quot;},{&quot;title&quot;:&quot;DeepSpeed 
Integration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/deepspeed&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/deepspeed&quot;},{&quot;title&quot;:&quot;Feature Extractor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/feature_extractor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/feature_extractor&quot;},{&quot;title&quot;:&quot;Image Processor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/image_processor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/image_processor&quot;}]},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Text models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALBERT&quot;,&quot;id&quot;:&quot;model_doc/albert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/albert&quot;},{&quot;title&quot;:&quot;BART&quot;,&quot;id&quot;:&quot;model_doc/bart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bart&quot;},{&quot;title&quot;:&quot;BARThez&quot;,&quot;id&quot;:&quot;model_doc/barthez&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/barthez&quot;},{&quot;title&quot;:&quot;BARTpho&quot;,&quot;id&quot;:&quot;model_doc/bartpho&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bartpho&quot;},{&quot;title&quot;:&quot;BERT&quot;,&quot;id&quot;:&quot;model_doc/bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert&quot;},{&quot;title&quot;:&quot;BertGeneration&quot;,&quot;id&quot;:&quot;model_doc/bert-generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-generation&quot;},{&quot;title&quot;:&quot;BertJapanese&quot;,&quot;id&quot;:&quot;model_doc/bert-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-japanese&quot;},{&quot;title&quot;:&quot;Bertweet&quot;,&quot;id&quot;:&quot;model_doc/bertweet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bertweet&quot;},{&quot;title&quot;:&quot;BigBird&quot;,&quot;id&quot;:&quot;model_doc/big_bird&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/big_bird&quot;},{&quot;title&quot;:&quot;BigBirdPegasus&quot;,&quot;id&quot;:&quot;model_doc/bigbird_pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bigbird_pegasus&quot;},{&quot;title&quot;:&quot;BioGpt&quot;,&quot;id&quot;:&quot;model_doc/biogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/biogpt&quot;},{&quot;title&quot;:&quot;Blenderbot&quot;,&quot;id&quot;:&quot;model_doc/blenderbot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot&quot;},{&quot;title&quot;:&quot;Blenderbot 
Small&quot;,&quot;id&quot;:&quot;model_doc/blenderbot-small&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot-small&quot;},{&quot;title&quot;:&quot;BLOOM&quot;,&quot;id&quot;:&quot;model_doc/bloom&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bloom&quot;},{&quot;title&quot;:&quot;BORT&quot;,&quot;id&quot;:&quot;model_doc/bort&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bort&quot;},{&quot;title&quot;:&quot;ByT5&quot;,&quot;id&quot;:&quot;model_doc/byt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/byt5&quot;},{&quot;title&quot;:&quot;CamemBERT&quot;,&quot;id&quot;:&quot;model_doc/camembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/camembert&quot;},{&quot;title&quot;:&quot;CANINE&quot;,&quot;id&quot;:&quot;model_doc/canine&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/canine&quot;},{&quot;title&quot;:&quot;CodeGen&quot;,&quot;id&quot;:&quot;model_doc/codegen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/codegen&quot;},{&quot;title&quot;:&quot;CodeLlama&quot;,&quot;id&quot;:&quot;model_doc/code_llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/code_llama&quot;},{&quot;title&quot;:&quot;ConvBERT&quot;,&quot;id&quot;:&quot;model_doc/convbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convbert&quot;},{&quot;title&quot;:&quot;CPM&quot;,&quot;id&quot;:&quot;model_doc/cpm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpm&quot;},{&quot;title&quot;:&quot;CPMANT&quot;,&quot;id&quot;:&quot;model_doc/cpmant&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpmant&quot;},{&quot;title&quot;:&quot;CTRL&quot;,&quot;id&quot;:&quot;model_doc/ctrl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ctrl&quot;},{&quot;title&quot;:&quot;DeBERTa&quot;,&quot;id&quot;:&quot;model_doc/deberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta&quot;},{&quot;title&quot;:&quot;DeBERTa-v2&quot;,&quot;id&quot;:&quot;model_doc/deberta-v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta-v2&quot;},{&quot;title&quot;:&quot;DialoGPT&quot;,&quot;id&quot;:&quot;model_doc/dialogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dialogpt&quot;},{&quot;title&quot;:&quot;DistilBERT&quot;,&quot;id&quot;:&quot;model_doc/distilbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/distilbert&quot;},{&quot;title&quot;:&quot;DPR&quot;,&quot;id&quot;:&quot;model_doc/dpr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpr&quot;},{&quot;title&quot;:&quot;ELECTRA&quot;,&quot;id&quot;:&quot;model_doc/electra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/electra&quot;},{&quot;title&quot;:&quot;Encoder Decoder 
Models&quot;,&quot;id&quot;:&quot;model_doc/encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encoder-decoder&quot;},{&quot;title&quot;:&quot;ERNIE&quot;,&quot;id&quot;:&quot;model_doc/ernie&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie&quot;},{&quot;title&quot;:&quot;ErnieM&quot;,&quot;id&quot;:&quot;model_doc/ernie_m&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie_m&quot;},{&quot;title&quot;:&quot;ESM&quot;,&quot;id&quot;:&quot;model_doc/esm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/esm&quot;},{&quot;title&quot;:&quot;Falcon&quot;,&quot;id&quot;:&quot;model_doc/falcon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/falcon&quot;},{&quot;title&quot;:&quot;FLAN-T5&quot;,&quot;id&quot;:&quot;model_doc/flan-t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-t5&quot;},{&quot;title&quot;:&quot;FLAN-UL2&quot;,&quot;id&quot;:&quot;model_doc/flan-ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-ul2&quot;},{&quot;title&quot;:&quot;FlauBERT&quot;,&quot;id&quot;:&quot;model_doc/flaubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flaubert&quot;},{&quot;title&quot;:&quot;FNet&quot;,&quot;id&quot;:&quot;model_doc/fnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fnet&quot;},{&quot;title&quot;:&quot;FSMT&quot;,&quot;id&quot;:&quot;model_doc/fsmt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fsmt&quot;},{&quot;title&quot;:&quot;Funnel Transformer&quot;,&quot;id&quot;:&quot;model_doc/funnel&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/funnel&quot;},{&quot;title&quot;:&quot;GPT&quot;,&quot;id&quot;:&quot;model_doc/openai-gpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/openai-gpt&quot;},{&quot;title&quot;:&quot;GPT Neo&quot;,&quot;id&quot;:&quot;model_doc/gpt_neo&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neo&quot;},{&quot;title&quot;:&quot;GPT NeoX&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox&quot;},{&quot;title&quot;:&quot;GPT NeoX Japanese&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox_japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese&quot;},{&quot;title&quot;:&quot;GPT-J&quot;,&quot;id&quot;:&quot;model_doc/gptj&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptj&quot;},{&quot;title&quot;:&quot;GPT2&quot;,&quot;id&quot;:&quot;model_doc/gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt2&quot;},{&quot;title&quot;:&quot;GPTBigCode&quot;,&quot;id&quot;:&quot;model_doc/gpt_bigcode&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode&quot;},{&quot;title&quot;:&quot;GPTSAN 
Japanese&quot;,&quot;id&quot;:&quot;model_doc/gptsan-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese&quot;},{&quot;title&quot;:&quot;GPTSw3&quot;,&quot;id&quot;:&quot;model_doc/gpt-sw3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt-sw3&quot;},{&quot;title&quot;:&quot;HerBERT&quot;,&quot;id&quot;:&quot;model_doc/herbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/herbert&quot;},{&quot;title&quot;:&quot;I-BERT&quot;,&quot;id&quot;:&quot;model_doc/ibert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ibert&quot;},{&quot;title&quot;:&quot;Jukebox&quot;,&quot;id&quot;:&quot;model_doc/jukebox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/jukebox&quot;},{&quot;title&quot;:&quot;LED&quot;,&quot;id&quot;:&quot;model_doc/led&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/led&quot;},{&quot;title&quot;:&quot;LLaMA&quot;,&quot;id&quot;:&quot;model_doc/llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama&quot;},{&quot;title&quot;:&quot;Llama2&quot;,&quot;id&quot;:&quot;model_doc/llama2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama2&quot;},{&quot;title&quot;:&quot;Longformer&quot;,&quot;id&quot;:&quot;model_doc/longformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longformer&quot;},{&quot;title&quot;:&quot;LongT5&quot;,&quot;id&quot;:&quot;model_doc/longt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longt5&quot;},{&quot;title&quot;:&quot;LUKE&quot;,&quot;id&quot;:&quot;model_doc/luke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/luke&quot;},{&quot;title&quot;:&quot;M2M100&quot;,&quot;id&quot;:&quot;model_doc/m2m_100&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/m2m_100&quot;},{&quot;title&quot;:&quot;MarianMT&quot;,&quot;id&quot;:&quot;model_doc/marian&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/marian&quot;},{&quot;title&quot;:&quot;MarkupLM&quot;,&quot;id&quot;:&quot;model_doc/markuplm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/markuplm&quot;},{&quot;title&quot;:&quot;MBart and 
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot
;PLBart&quot;,&quot;id&quot;:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/
model_doc/wavlm&quot;},{&quot;title&quot;:&quot;Whisper&quot;,&quot;id&quot;:&quot;model_doc/whisper&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/whisper&quot;},{&quot;title&quot;:&quot;XLS-R&quot;,&quot;id&quot;:&quot;model_doc/xls_r&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xls_r&quot;},{&quot;title&quot;:&quot;XLSR-Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/xlsr_wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2&quot;}]},{&quot;title&quot;:&quot;Multimodal models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALIGN&quot;,&quot;id&quot;:&quot;model_doc/align&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/align&quot;},{&quot;title&quot;:&quot;AltCLIP&quot;,&quot;id&quot;:&quot;model_doc/altclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/altclip&quot;},{&quot;title&quot;:&quot;BLIP&quot;,&quot;id&quot;:&quot;model_doc/blip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip&quot;},{&quot;title&quot;:&quot;BLIP-2&quot;,&quot;id&quot;:&quot;model_doc/blip-2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip-2&quot;},{&quot;title&quot;:&quot;BridgeTower&quot;,&quot;id&quot;:&quot;model_doc/bridgetower&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bridgetower&quot;},{&quot;title&quot;:&quot;BROS&quot;,&quot;id&quot;:&quot;model_doc/bros&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bros&quot;},{&quot;title&quot;:&quot;Chinese-CLIP&quot;,&quot;id&quot;:&quot;model_doc/chinese_clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/chinese_clip&quot;},{&quot;title&quot;:&quot;CLIP&quot;,&quot;id&quot;:&quot;model_doc/clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clip&quot;},{&quot;title&quot;:&quot;CLIPSeg&quot;,&quot;id&quot;:&quot;model_doc/clipseg&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clipseg&quot;},{&quot;title&quot;:&quot;Data2Vec&quot;,&quot;id&quot;:&quot;model_doc/data2vec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/data2vec&quot;},{&quot;title&quot;:&quot;DePlot&quot;,&quot;id&quot;:&quot;model_doc/deplot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deplot&quot;},{&quot;title&quot;:&quot;Donut&quot;,&quot;id&quot;:&quot;model_doc/donut&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/donut&quot;},{&quot;title&quot;:&quot;FLAVA&quot;,&quot;id&quot;:&quot;model_doc/flava&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flava&quot;},{&quot;title&quot;:&quot;GIT&quot;,&quot;id&quot;:&quot;model_doc/git&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/git&quot;},{&quot;title&quot;:&quot;GroupViT&quot;,&quot;id&quot;:&quot;model_doc/groupvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/groupvit&quot;},{&quot;title&quot;:&quot;IDEFICS&quot;,&quot;id&quot;:&quot;model_doc/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/idefics&quot;},{&quot;title&quot;:&quot;InstructBLIP&quot;,&quot;id&quot;:&quot;model_doc/instructblip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/instructblip&quot;},{&quot;title&quot;:&quot;LayoutLM&quot;,&quot;id&quot;:&quot;model_doc/layoutlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlm&quot;},{&quot;title&quot;:&quot;LayoutLMV2&quot;,&quot;i
d&quot;:&quot;model_doc/layoutlmv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv2&quot;},{&quot;title&quot;:&quot;LayoutLMV3&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv3&quot;},{&quot;title&quot;:&quot;LayoutXLM&quot;,&quot;id&quot;:&quot;model_doc/layoutxlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutxlm&quot;},{&quot;title&quot;:&quot;LiLT&quot;,&quot;id&quot;:&quot;model_doc/lilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lilt&quot;},{&quot;title&quot;:&quot;LXMERT&quot;,&quot;id&quot;:&quot;model_doc/lxmert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lxmert&quot;},{&quot;title&quot;:&quot;MatCha&quot;,&quot;id&quot;:&quot;model_doc/matcha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/matcha&quot;},{&quot;title&quot;:&quot;MGP-STR&quot;,&quot;id&quot;:&quot;model_doc/mgp-str&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mgp-str&quot;},{&quot;title&quot;:&quot;Nougat&quot;,&quot;id&quot;:&quot;model_doc/nougat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nougat&quot;},{&quot;title&quot;:&quot;OneFormer&quot;,&quot;id&quot;:&quot;model_doc/oneformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/oneformer&quot;},{&quot;title&quot;:&quot;OWL-ViT&quot;,&quot;id&quot;:&quot;model_doc/owlvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/owlvit&quot;},{&quot;title&quot;:&quot;Perceiver&quot;,&quot;id&quot;:&quot;model_doc/perceiver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/perceiver&quot;},{&quot;title&quot;:&quot;Pix2Struct&quot;,&quot;id&quot;:&quot;model_doc/pix2struct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pix2struct&quot;},{&quot;title&quot;:&quot;Segment Anything&quot;,&quot;id&quot;:&quot;model_doc/sam&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sam&quot;},{&quot;title&quot;:&quot;Speech Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/speech-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder&quot;},{&quot;title&quot;:&quot;TAPAS&quot;,&quot;id&quot;:&quot;model_doc/tapas&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapas&quot;},{&quot;title&quot;:&quot;TrOCR&quot;,&quot;id&quot;:&quot;model_doc/trocr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trocr&quot;},{&quot;title&quot;:&quot;TVLT&quot;,&quot;id&quot;:&quot;model_doc/tvlt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tvlt&quot;},{&quot;title&quot;:&quot;ViLT&quot;,&quot;id&quot;:&quot;model_doc/vilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vilt&quot;},{&quot;title&quot;:&quot;Vision Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/vision-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder&quot;},{&quot;title&quot;:&quot;Vision Text Dual 
Encoder&quot;,&quot;id&quot;:&quot;model_doc/vision-text-dual-encoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder&quot;},{&quot;title&quot;:&quot;VisualBERT&quot;,&quot;id&quot;:&quot;model_doc/visual_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/visual_bert&quot;},{&quot;title&quot;:&quot;X-CLIP&quot;,&quot;id&quot;:&quot;model_doc/xclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xclip&quot;}]},{&quot;title&quot;:&quot;Reinforcement learning models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Decision Transformer&quot;,&quot;id&quot;:&quot;model_doc/decision_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/decision_transformer&quot;},{&quot;title&quot;:&quot;Trajectory Transformer&quot;,&quot;id&quot;:&quot;model_doc/trajectory_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer&quot;}]},{&quot;title&quot;:&quot;Time series models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Autoformer&quot;,&quot;id&quot;:&quot;model_doc/autoformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/autoformer&quot;},{&quot;title&quot;:&quot;Informer&quot;,&quot;id&quot;:&quot;model_doc/informer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/informer&quot;},{&quot;title&quot;:&quot;Time Series Transformer&quot;,&quot;id&quot;:&quot;model_doc/time_series_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/time_series_transformer&quot;}]},{&quot;title&quot;:&quot;Graph models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Graphormer&quot;,&quot;id&quot;:&quot;model_doc/graphormer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/graphormer&quot;}]}]},{&quot;title&quot;:&quot;Internal Helpers&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Custom Layers and Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/modeling_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/modeling_utils&quot;},{&quot;title&quot;:&quot;Utilities for pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/pipelines_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/pipelines_utils&quot;},{&quot;title&quot;:&quot;Utilities for Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/tokenization_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/tokenization_utils&quot;},{&quot;title&quot;:&quot;Utilities for Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/trainer_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/trainer_utils&quot;},{&quot;title&quot;:&quot;Utilities for Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/generation_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/generation_utils&quot;},{&quot;title&quot;:&quot;Utilities for Image Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/image_processing_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/image_processing_utils&quot;},{&quot;title&quot;:&quot;Utilities for Audio 
processing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/audio_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/audio_utils&quot;},{&quot;title&quot;:&quot;General Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/file_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/file_utils&quot;},{&quot;title&quot;:&quot;Utilities for Time Series&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/time_series_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/time_series_utils&quot;}]}]}],&quot;chapterId&quot;:&quot;model_doc/opt&quot;,&quot;docType&quot;:&quot;docs&quot;,&quot;isLoggedIn&quot;:false,&quot;lang&quot;:&quot;en&quot;,&quot;langs&quot;:[&quot;de&quot;,&quot;en&quot;,&quot;es&quot;,&quot;fr&quot;,&quot;it&quot;,&quot;ko&quot;,&quot;pt&quot;,&quot;zh&quot;],&quot;library&quot;:&quot;transformers&quot;,&quot;theme&quot;:&quot;light&quot;,&quot;version&quot;:&quot;v4.34.0&quot;,&quot;versions&quot;:[{&quot;version&quot;:&quot;main&quot;},{&quot;version&quot;:&quot;v4.34.0&quot;},{&quot;version&quot;:&quot;v4.33.3&quot;},{&quot;version&quot;:&quot;v4.33.2&quot;},{&quot;version&quot;:&quot;v4.33.0&quot;},{&quot;version&quot;:&quot;v4.32.1&quot;},{&quot;version&quot;:&quot;v4.32.0&quot;},{&quot;version&quot;:&quot;v4.31.0&quot;},{&quot;version&quot;:&quot;v4.30.0&quot;},{&quot;version&quot;:&quot;v4.29.1&quot;},{&quot;version&quot;:&quot;v4.29.0&quot;},{&quot;version&quot;:&quot;v4.28.1&quot;},{&quot;version&quot;:&quot;v4.28.0&quot;},{&quot;version&quot;:&quot;v4.27.2&quot;},{&quot;version&quot;:&quot;v4.27.1&quot;},{&quot;version&quot;:&quot;v4.27.0&quot;},{&quot;version&quot;:&quot;v4.26.1&quot;},{&quot;version&quot;:&quot;v4.26.0&quot;},{&quot;version&quot;:&quot;v4.25.1&quot;},{&quot;version&quot;:&quot;v4.24.0&quot;},{&quot;version&quot;:&quot;v4.23.1&quot;},{&quot;version&quot;:&quot;v4.23.0&quot;},{&quot;version&quot;:&quot;v4.22.2&quot;},{&quot;version&quot;:&quot;v4.22.1&quot;},{&quot;version&quot;:&quot;v4.22.0&quot;},{&quot;version&quot;:&quot;v4.21.3&quot;},{&quot;version&quot;:&quot;v4.21.2&quot;},{&quot;version&quot;:&quot;v4.21.1&quot;},{&quot;version&quot;:&quot;v4.21.0&quot;},{&quot;version&quot;:&quot;v4.20.1&quot;},{&quot;version&quot;:&quot;v4.20.0&quot;},{&quot;version&quot;:&quot;v4.19.4&quot;},{&quot;version&quot;:&quot;v4.19.3&quot;},{&quot;version&quot;:&quot;v4.19.2&quot;},{&quot;version&quot;:&quot;v4.19.0&quot;},{&quot;version&quot;:&quot;v4.18.0&quot;},{&quot;version&quot;:&quot;v4.17.0&quot;},{&quot;version&quot;:&quot;v4.16.2&quot;},{&quot;version&quot;:&quot;v4.16.1&quot;},{&quot;version&quot;:&quot;v4.16.0&quot;},{&quot;version&quot;:&quot;v4.15.0&quot;},{&quot;version&quot;:&quot;v4.14.1&quot;},{&quot;version&quot;:&quot;v4.13.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.5&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.4&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.10.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.10.
after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Graph models</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Internal Helpers</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/modeling_utils">Custom Layers and Utilities </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/pipelines_utils">Utilities for pipelines </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/tokenization_utils">Utilities for Tokenizers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/trainer_utils">Utilities for Trainer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/generation_utils">Utilities for Generation </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/image_processing_utils">Utilities for Image Processors </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/audio_utils">Utilities for Audio processing </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/file_utils">General Utilities </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/time_series_utils">Utilities for Time Series </a> </div> </div></nav></div></div></div> <div class="z-1 min-w-0 flex-1"> <div class="px-6 pt-6 md:px-12 md:pt-16 md:pb-16"><div class="max-w-4xl mx-auto mb-10"><div class="relative overflow-hidden rounded-xl bg-gradient-to-br from-orange-300/10 py-5 px-4 ring-1 ring-orange-100/70 md:px-6 md:py-8"><img alt="Hugging Face's logo" class="absolute -right-6 -bottom-6 w-28 -rotate-45 md:hidden" src="/front/assets/huggingface_logo-noborder.svg"> <div class="mb-2 text-2xl font-bold dark:text-gray-200 md:mb-0">Join the Hugging Face community</div> <p class="mb-4 text-lg text-gray-400 dark:text-gray-300 md:mb-8">and get access to the augmented documentation 
experience </p> <div class="mb-8 hidden space-y-4 md:block xl:flex xl:space-y-0 xl:space-x-6"><div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-indigo-100 to-indigo-100/20 dark:to-indigo-100"><svg class="text-indigo-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Collaborate on models, datasets and Spaces </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-orange-100 to-orange-100/20 dark:to-orange-50"><svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" class="text-xl text-yellow-400" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M11 15H6l7-14v8h5l-7 14v-8z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Faster examples with accelerated inference </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-gray-500/10 to-gray-500/5"><svg class="text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M14.9804 3C14.9217 3.0002 14.8631 3.00555 14.8054 3.016C11.622 3.58252 8.76073 5.30669 6.77248 7.85653C4.78422 10.4064 3.80955 13.6016 4.03612 16.8271C4.26268 20.0525 5.67447 23.0801 7.99967 25.327C10.3249 27.5738 13.3991 28.8811 16.6304 28.997C16.7944 29.003 16.9584 28.997 17.1204 28.997C19.2193 28.9984 21.2877 28.4943 23.1507 27.5274C25.0137 26.5605 26.6164 25.1592 27.8234 23.442C27.9212 23.294 27.9783 23.1229 27.9889 22.9458C27.9995 22.7687 27.9633 22.592 27.884 22.4333C27.8046 22.2747 27.6848 22.1397 27.5367 22.0421C27.3887 21.9444 27.2175 21.8875 27.0404 21.877C25.0426 21.7017 23.112 21.0693 21.3976 20.0288C19.6832 18.9884 18.231 17.5676 17.1533 15.8764C16.0756 14.1852 15.4011 12.2688 15.1822 10.2754C14.9632 8.28193 15.2055 6.26484 15.8904 4.38C15.9486 4.22913 15.97 4.06652 15.9527 3.90572C15.9354 3.74492 15.8799 3.59059 15.7909 3.45557C15.7019 3.32055 15.5819 3.20877 15.4409 3.12952C15.2999 3.05028 15.142 3.00587 14.9804 3Z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Switch between documentation themes </div></div></div> <div class="flex items-center space-x-2.5"><a href="/join"><button class="rounded-lg bg-white 
# OPT

## Overview

The OPT model was proposed in [Open Pre-trained Transformer Language Models](https://arxiv.org/pdf/2205.01068) by Meta AI.
OPT is a series of open-sourced large causal language models which perform similarly in performance to GPT-3.

The abstract from the paper is the following:

_Large language models, which are often trained for hundreds of thousands of compute days, have shown remarkable capabilities for zero- and few-shot learning. Given their computational cost, these models are difficult to replicate without significant capital. For the few that are available through APIs, no access is granted to the full model weights, making them difficult to study. We present Open Pre-trained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M to 175B parameters, which we aim to fully and responsibly share with interested researchers. We show that OPT-175B is comparable to GPT-3, while requiring only 1/7th the carbon footprint to develop. We are also releasing our logbook detailing the infrastructure challenges we faced, along with code for experimenting with all of the released models._

Tips:

- OPT has the same architecture as `BartDecoder`.
- Contrary to GPT2, OPT adds the EOS token `</s>` to the beginning of every prompt (see the example below).

This model was contributed by [Arthur Zucker](https://huggingface.co/ArthurZ), [Younes Belkada](https://huggingface.co/ybelkada), and [Patrick Von Platen](https://huggingface.co/patrickvonplaten).
The original code can be found [here](https://github.com/facebookresearch/metaseq).
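The tip above about the prepended `</s>` token is easy to verify with the tokenizer. Below is a minimal, illustrative sketch of running text generation with an OPT checkpoint; the `facebook/opt-125m` checkpoint and the generation settings are only examples, not a recommendation:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Example checkpoint only; any OPT checkpoint can be substituted here.
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
# The first input id should correspond to the EOS token </s> that OPT
# prepends to every prompt (id 2 in the standard OPT vocabulary).
print(inputs["input_ids"][0][:3])

generated_ids = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0])
```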
## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with OPT. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it.
The resource should ideally demonstrate something new instead of duplicating an existing resource.

Text Generation

- A notebook on [fine-tuning OPT with PEFT, bitsandbytes, and Transformers](https://colab.research.google.com/drive/1jCkpikz0J2o20FBQmYmAGdiKmJGOMo-o?usp=sharing). 🌎
- A blog post on [decoding strategies with OPT](https://huggingface.co/blog/introducing-csearch#62-example-two---opt).
- [Causal language modeling](https://huggingface.co/course/en/chapter7/6?fw=pt#training-a-causal-language-model-from-scratch) chapter of the 🤗 Hugging Face Course.
- [OPTForCausalLM](/docs/transformers/v4.34.0/en/model_doc/opt#transformers.OPTForCausalLM) is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#gpt-2gpt-and-causal-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb).
- [TFOPTForCausalLM](/docs/transformers/v4.34.0/en/model_doc/opt#transformers.TFOPTForCausalLM) is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_clmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb).
- [FlaxOPTForCausalLM](/docs/transformers/v4.34.0/en/model_doc/opt#transformers.FlaxOPTForCausalLM) is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#causal-language-modeling).
style="transform: rotate(360deg);"><circle cx="10" cy="20" r="2" fill="currentColor"></circle><circle cx="10" cy="28" r="2" fill="currentColor"></circle><circle cx="10" cy="14" r="2" fill="currentColor"></circle><circle cx="28" cy="4" r="2" fill="currentColor"></circle><circle cx="22" cy="6" r="2" fill="currentColor"></circle><circle cx="28" cy="10" r="2" fill="currentColor"></circle><circle cx="20" cy="12" r="2" fill="currentColor"></circle><circle cx="28" cy="22" r="2" fill="currentColor"></circle><circle cx="26" cy="28" r="2" fill="currentColor"></circle><circle cx="20" cy="26" r="2" fill="currentColor"></circle><circle cx="22" cy="20" r="2" fill="currentColor"></circle><circle cx="16" cy="4" r="2" fill="currentColor"></circle><circle cx="4" cy="24" r="2" fill="currentColor"></circle><circle cx="4" cy="16" r="2" fill="currentColor"></circle></svg> <span>Text Classification</span></div> <ul data-svelte-h="svelte-1tv6r81"><li><a href="sequence_classification.md">Text classification task guide</a></li> <li><a href="/docs/transformers/v4.34.0/en/model_doc/opt#transformers.OPTForSequenceClassification">OPTForSequenceClassification</a> is supported by this <a href="https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification" rel="nofollow">example script</a> and <a href="https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb" rel="nofollow">notebook</a>.</li></ul> <div class="inline-flex items-center border pr-1 rounded-xl "><svg class="mr-1 tag-ico tag-ico-blue" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M2 9h9V2H2zm2-5h5v3H4z" fill="currentColor"></path><path d="M2 19h9v-7H2zm2-5h5v3H4z" fill="currentColor"></path><path d="M2 29h9v-7H2zm2-5h5v3H4z" fill="currentColor"></path><path d="M27 9h-9l3.41-3.59L20 4l-6 6l6 6l1.41-1.41L18 11h9a1 1 0 0 1 1 1v12a1 1 0 0 1-1 1H15v2h12a3 3 0 0 0 3-3V12a3 3 0 0 0-3-3z" fill="currentColor"></path></svg> <span>Question Answering</span></div> <ul data-svelte-h="svelte-mrnu5u"><li><a href="/docs/transformers/v4.34.0/en/model_doc/opt#transformers.OPTForQuestionAnswering">OPTForQuestionAnswering</a> is supported by this <a href="https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering" rel="nofollow">question answering example script</a> and <a href="https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb" rel="nofollow">notebook</a>.</li> <li><a href="https://huggingface.co/course/chapter7/7?fw=pt" rel="nofollow">Question answering</a> chapter of the 🤗 Hugging Face Course.</li></ul> <p data-svelte-h="svelte-1wntqpp">⚡️ Inference</p> <ul data-svelte-h="svelte-jbh4b4"><li>A blog post on <a href="https://huggingface.co/blog/accelerate-large-models" rel="nofollow">How 🤗 Accelerate runs very large models thanks to PyTorch</a> with OPT.</li></ul> <h2 class="relative group"><a id="transformers.OPTConfig" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.OPTConfig"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 
## OPTConfig

### class transformers.OPTConfig

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/opt/configuration_opt.py#L32)

( vocab_size = 50272, hidden_size = 768, num_hidden_layers = 12, ffn_dim = 3072, max_position_embeddings = 2048, do_layer_norm_before = True, _remove_final_layer_norm = False, word_embed_proj_dim = None, dropout = 0.1, attention_dropout = 0.0, num_attention_heads = 12, activation_function = 'relu', layerdrop = 0.0, init_std = 0.02, use_cache = True, pad_token_id = 1, bos_token_id = 2, eos_token_id = 2, enable_bias = True, layer_norm_elementwise_affine = True, **kwargs )

Parameters

- **vocab_size** (`int`, *optional*, defaults to 50272) — Vocabulary size of the OPT model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [OPTModel](/docs/transformers/v4.34.0/en/model_doc/opt#transformers.OPTModel).
- **hidden_size** (`int`, *optional*, defaults to 768) — Dimensionality of the layers and the pooler layer.
- **num_hidden_layers** (`int`, *optional*, defaults to 12) — Number of decoder layers.
- **ffn_dim** (`int`, *optional*, defaults to 3072) — Dimensionality of the "intermediate" (often named feed-forward) layer in the decoder.
- **num_attention_heads** (`int`, *optional*, defaults to 12) — Number of attention heads for each attention layer in the Transformer decoder.
- **activation_function** (`str` or `function`, *optional*, defaults to `"relu"`) — The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"silu"` and `"gelu_new"` are supported.
- **max_position_embeddings** (`int`, *optional*, defaults to 2048) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
- **do_layer_norm_before** (`bool`, *optional*, defaults to `True`) — Whether to perform layer normalization before the attention block.
- **word_embed_proj_dim** (`int`, *optional*) — `word_embed_proj_dim` can be set to down-project word embeddings, *e.g.* `opt-350m`. Defaults to `hidden_size`.
- **dropout** (`float`, *optional*, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
- **attention_dropout** (`float`, *optional*, defaults to 0.0) — The dropout ratio for the attention probabilities.
- **layerdrop** (`float`, *optional*, defaults to 0.0) — The LayerDrop probability. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more details.
- **init_std** (`float`, *optional*, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- **use_cache** (`bool`, *optional*, defaults to `True`) — Whether or not the model should return the last key/values attentions (not used by all models).
- **enable_bias** (`bool`, *optional*, defaults to `True`) — Whether the linear layers in the attention blocks should use the bias term.
- **layer_norm_elementwise_affine** (`bool`, *optional*, defaults to `True`) — Whether the layer norms should have learnable parameters.

This is the configuration class to store the configuration of an [OPTModel](/docs/transformers/v4.34.0/en/model_doc/opt#transformers.OPTModel). It is used to instantiate an OPT model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the OPT [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) architecture.

Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information.

Example:

```python
>>> from transformers import OPTConfig, OPTModel

>>> # Initializing an OPT facebook/opt-large style configuration
>>> configuration = OPTConfig()

>>> # Initializing a model (with random weights) from the facebook/opt-large style configuration
>>> model = OPTModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># Accessing the model configuration</span> <span class="hljs-meta">&gt;&gt;&gt; </span>configuration = model.config</pre></div></div></div> <h2 class="relative group"><a id="transformers.OPTModel" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.OPTModel"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-l97xr">OPTModel</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.OPTModel"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">OPTModel</span></span></h3> <a id="transformers.OPTModel" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.OPTModel"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" 
fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/opt/modeling_opt.py#L752" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60">: OPTConfig</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.OPTModel.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.OPTModel.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/opt#transformers.OPTConfig">OPTConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1hti96v">The bare OPT Model outputting raw hidden-states without any specific head on top. This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel">PreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)</p> <p data-svelte-h="svelte-hswkmf">This model is also a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> subclass. 
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.OPTModel.forward"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>forward</span></h4> <a id="transformers.OPTModel.forward" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.OPTModel.forward"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/opt/modeling_opt.py#L768" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60">: LongTensor = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: 
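As noted above, initializing from a configuration gives random weights, while `from_pretrained()` loads a pretrained checkpoint. A minimal sketch contrasting the two (using the `facebook/opt-350m` checkpoint referenced on this page):

```python
>>> from transformers import OPTConfig, OPTModel

>>> # Randomly initialized model, built only from a configuration
>>> random_model = OPTModel(OPTConfig())

>>> # Model with pretrained weights loaded from the Hub
>>> pretrained_model = OPTModel.from_pretrained("facebook/opt-350m")
```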
#### forward

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/opt/modeling_opt.py#L768)

`( input_ids: LongTensor = None, attention_mask: typing.Optional[torch.Tensor] = None, head_mask: typing.Optional[torch.Tensor] = None, past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None, inputs_embeds: typing.Optional[torch.FloatTensor] = None, use_cache: typing.Optional[bool] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None )` → [transformers.modeling_outputs.BaseModelOutputWithPast](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutputWithPast) or `tuple(torch.FloatTensor)`

**Parameters**

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and `PreTrainedTokenizer.__call__()` for details. [What are input IDs?](../glossary#input-ids)
- **attention_mask** (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and `PreTrainedTokenizer.__call__()` for details. If `past_key_values` is used, optionally only the last `decoder_input_ids` have to be input (see `past_key_values`). If you want to change padding behavior, you should read `modeling_opt._prepare_decoder_attention_mask` and modify it to your needs. See diagram 1 in [the paper](https://arxiv.org/abs/1910.13461) for more information on the default strategy.
- **head_mask** (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in `[0, 1]`: 1 indicates the head is **not masked**, 0 indicates the head is **masked**.
- **past_key_values** (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`) — Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)` and 2 additional tensors of shape `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`. Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding. If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)`.
- **inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **use_cache** (`bool`, *optional*) — If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`).
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.

**Returns**: [transformers.modeling_outputs.BaseModelOutputWithPast](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutputWithPast) or `tuple(torch.FloatTensor)`

A [transformers.modeling_outputs.BaseModelOutputWithPast](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutputWithPast) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([OPTConfig](/docs/transformers/v4.34.0/en/model_doc/opt#transformers.OPTConfig)) and inputs.

- **last_hidden_state** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`) — Sequence of hidden-states at the output of the last layer of the model. If `past_key_values` is used, only the last hidden-state of the sequences of shape `(batch_size, 1, hidden_size)` is output.
- **past_key_values** (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`) — Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)` and optionally, if `config.is_encoder_decoder=True`, 2 additional tensors of shape `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`. Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally, if `config.is_encoder_decoder=True`, in the cross-attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The [OPTModel](/docs/transformers/v4.34.0/en/model_doc/opt#transformers.OPTModel) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
rounded-md"><a id="transformers.OPTModel.forward.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.OPTModel.forward.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, OPTModel <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"facebook/opt-350m"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = OPTModel.from_pretrained(<span class="hljs-string">"facebook/opt-350m"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(<span class="hljs-string">"Hello, my dog is cute"</span>, return_tensors=<span class="hljs-string">"pt"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model(**inputs) <span class="hljs-meta">&gt;&gt;&gt; </span>last_hidden_states = outputs.last_hidden_state</pre></div></div></div></div> <h2 class="relative group"><a id="transformers.OPTForCausalLM" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.OPTForCausalLM"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" 
aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-bdgkpr">OPTForCausalLM</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.OPTForCausalLM"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">OPTForCausalLM</span></span></h3> <a id="transformers.OPTForCausalLM" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.OPTForCausalLM"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/opt/modeling_opt.py#L818" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> 
<span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> </div></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.OPTForCausalLM.forward"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>forward</span></h4> <a id="transformers.OPTForCausalLM.forward" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.OPTForCausalLM.forward"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/opt/modeling_opt.py#L849" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60">: LongTensor = None</span></span></span><span 
class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">past_key_values<span class="opacity-60">: typing.Optional[typing.List[torch.FloatTensor]] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">inputs_embeds<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">labels<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">use_cache<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.CausalLMOutputWithPast">transformers.modeling_outputs.CausalLMOutputWithPast</a> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 10 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.OPTForCausalLM.forward.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.OPTForCausalLM.forward.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" 
aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.<p></p> <p>Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p> <p><a href="../glossary#input-ids">What are input IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.OPTForCausalLM.forward.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.OPTForCausalLM.forward.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>torch.Tensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. 
Mask values selected in `[0, 1]`:

  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.

  [What are attention masks?](../glossary#attention-mask)
- **head_mask** (`torch.Tensor` of shape `(num_hidden_layers, num_attention_heads)`, *optional*) — Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`:

  - 1 indicates the head is **not masked**,
  - 0 indicates the head is **masked**.
- **past_key_values** (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`) — Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)` and 2 additional tensors of shape `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`. The two additional tensors are only required when the model is used as a decoder in a Sequence to Sequence model.

  Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see the `past_key_values` input) to speed up sequential decoding.

  If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)` (see the caching sketch after the example below).
- **inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids`, you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **labels** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Labels for computing the masked language modeling loss. Indices should either be in `[0, ..., config.vocab_size]` or `-100` (see the `input_ids` docstring). Tokens with indices set to `-100` are ignored (masked); the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]` (a short sketch follows the returns below).
- **use_cache** (`bool`, *optional*) — If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`).
href="#transformers.OPTForCausalLM.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. See <code>attentions</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.OPTForCausalLM.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.OPTForCausalLM.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. 
See <code>hidden_states</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.OPTForCausalLM.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.OPTForCausalLM.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li></ul> <div id="transformers.OPTForCausalLM.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.CausalLMOutputWithPast">transformers.modeling_outputs.CausalLMOutputWithPast</a> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.CausalLMOutputWithPast">transformers.modeling_outputs.CausalLMOutputWithPast</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/opt#transformers.OPTConfig">OPTConfig</a>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<code>torch.FloatTensor</code> of shape <code>(1,)</code>, <em>optional</em>, returned when <code>labels</code> is provided) — Language modeling loss (for next-token prediction).</p> </li> <li> <p><strong>logits</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, config.vocab_size)</code>) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).</p> </li> <li> <p><strong>past_key_values</strong> (<code>tuple(tuple(torch.FloatTensor))</code>, <em>optional</em>, returned when <code>use_cache=True</code> is passed or when <code>config.use_cache=True</code>) — Tuple of <code>tuple(torch.FloatTensor)</code> of length <code>config.n_layers</code>, with each tuple having 2 tensors of shape <code>(batch_size, num_heads, sequence_length, embed_size_per_head)</code>)</p> <p>Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see <code>past_key_values</code> input) to speed up sequential 
decoding.</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <div class="relative group rounded-md"><a id="transformers.OPTForCausalLM.forward.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.OPTForCausalLM.forward.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, 
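As a quick illustration of how the `labels` argument and the returned `loss`/`logits` fit together, here is a minimal sketch (it reuses the `facebook/opt-350m` checkpoint from the example below; with a single unpadded sequence the `-100` masking line has no effect, it only shows the convention for ignoring padding positions):

```python
from transformers import AutoTokenizer, OPTForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = OPTForCausalLM.from_pretrained("facebook/opt-350m")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
labels = inputs.input_ids.clone()
labels[inputs.attention_mask == 0] = -100  # positions set to -100 are ignored by the loss

outputs = model(**inputs, labels=labels)
print(outputs.loss)          # language modeling loss (next-token prediction)
print(outputs.logits.shape)  # (batch_size, sequence_length, config.vocab_size)
```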
Example:

```python
>>> from transformers import AutoTokenizer, OPTForCausalLM

>>> model = OPTForCausalLM.from_pretrained("facebook/opt-350m")
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")

>>> prompt = "Hey, are you conscious? Can you talk to me?"
>>> inputs = tokenizer(prompt, return_tensors="pt")

>>> # Generate
>>> generate_ids = model.generate(inputs.input_ids, max_length=30)
>>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
"Hey, are you conscious? Can you talk to me?\nI'm not conscious. I'm just a little bit of a weirdo."
```
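The caching behaviour described for `past_key_values` can also be driven by hand. The following is a minimal greedy-decoding sketch (not the library's `generate()` implementation): after the first step it feeds back only the newly chosen token together with the cache.

```python
import torch
from transformers import AutoTokenizer, OPTForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = OPTForCausalLM.from_pretrained("facebook/opt-350m")

inputs = tokenizer("Hey, are you conscious?", return_tensors="pt")
generated = inputs.input_ids        # running transcript of all token ids
input_ids = inputs.input_ids        # the first step sees the full prompt
past_key_values = None

for _ in range(20):
    with torch.no_grad():
        outputs = model(input_ids=input_ids, past_key_values=past_key_values, use_cache=True)
    next_token = outputs.logits[:, -1, :].argmax(dim=-1, keepdim=True)  # greedy choice
    generated = torch.cat([generated, next_token], dim=-1)
    past_key_values = outputs.past_key_values  # reuse the cache
    input_ids = next_token                     # only the new token is fed on the next step

print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```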
class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">TFOPTModel</span></span></h3> <a id="transformers.TFOPTModel" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.TFOPTModel"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/opt/modeling_tf_opt.py#L766" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">*args<span class="opacity-60"></span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFOPTModel.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFOPTModel.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/opt#transformers.OPTConfig">OPTConfig</a>) — Model configuration class with all the parameters of the model. 
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-127neq1">The bare TF OPT Model outputting raw hidden-states without any specific head on top. This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel">TFPreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)</p> <p data-svelte-h="svelte-1ivrf8m">This model is also a <a href="https://www.tensorflow.org/api_docs/python/tf/keras/Model" rel="nofollow">tf.keras.Model</a> subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-1ajbfxg">TensorFlow models and layers in <code>transformers</code> accept two formats as input:</p> <ul data-svelte-h="svelte-qm1t26"><li>having all inputs as keyword arguments (like PyTorch models), or</li> <li>having all inputs as a list, tuple or dict in the first positional argument.</li></ul> <p data-svelte-h="svelte-1v9qsc5">The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like <code>model.fit()</code> things should “just work” for you - just pass your inputs and labels in any format that <code>model.fit()</code> supports! 
If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

- a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
- a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
- a dictionary with one or several input Tensors associated to the input names given in the docstring: `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`

Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) you don't need to worry about any of this, as you can just pass inputs like you would to any other Python function!
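For example, with `TFOPTModel` the three conventions could look like this (a minimal sketch; OPT does not take `token_type_ids`, so the dict form carries `attention_mask` instead):

```python
from transformers import AutoTokenizer, TFOPTModel

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = TFOPTModel.from_pretrained("facebook/opt-350m")
enc = tokenizer("Hello, my dog is cute", return_tensors="tf")

# 1. keyword arguments, as with PyTorch models
out1 = model(input_ids=enc["input_ids"], attention_mask=enc["attention_mask"])

# 2. a single positional tensor holding only input_ids
out2 = model(enc["input_ids"])

# 3. a dict (or list) gathered in the first positional argument
out3 = model({"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"]})
```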
#### call

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/opt/modeling_tf_opt.py#L780)

( input_ids: TFModelInputType | None = None, attention_mask: np.ndarray | tf.Tensor | None = None, head_mask: np.ndarray | tf.Tensor | None = None, past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None, inputs_embeds: np.ndarray | tf.Tensor | None = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, training: Optional[bool] = False, **kwargs ) → [transformers.modeling_tf_outputs.TFBaseModelOutputWithPast](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFBaseModelOutputWithPast) or `tuple(tf.Tensor)`
overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 9 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFOPTModel.call.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFOPTModel.call.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_ids</strong> (<code>tf.Tensor</code> of shape <code>({0})</code>) — Indices of input sequence tokens in the vocabulary.<p></p> <p>Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. 
See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p> <p><a href="../glossary#input-ids">What are input IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFOPTModel.call.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFOPTModel.call.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>tf.Tensor</code> of shape <code>({0})</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFOPTModel.call.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFOPTModel.call.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>head_mask</strong> (<code>tf.Tensor</code> of shape <code>(encoder_layers, encoder_attention_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the attention modules in the encoder. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFOPTModel.call.past_key_values" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFOPTModel.call.past_key_values"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>past_key_values</strong> (<code>Tuple[Tuple[tf.Tensor]]</code> of length <code>config.n_layers</code>) — contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If <code>past_key_values</code> are used, the user can optionally input only the last <code>decoder_input_ids</code> (those that don’t have their past key value states given to this model) of shape <code>(batch_size, 1)</code> instead of all <code>decoder_input_ids</code> of shape <code>(batch_size, sequence_length)</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFOPTModel.call.use_cache" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFOPTModel.call.use_cache"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>use_cache</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — If set to <code>True</code>, <code>past_key_values</code> key value states are returned and can be used to speed up decoding (see <code>past_key_values</code>). 
Set to <code>False</code> during training, <code>True</code> during generation</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFOPTModel.call.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFOPTModel.call.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. See <code>attentions</code> under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFOPTModel.call.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFOPTModel.call.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. See <code>hidden_states</code> under returned tensors for more detail. 
This argument can be used only in eager mode, in graph mode the value in the config will be used instead.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFOPTModel.call.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFOPTModel.call.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFOPTModel.call.training" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFOPTModel.call.training"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>training</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).</span></span> </li></ul> <div id="transformers.TFOPTModel.call.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFBaseModelOutputWithPast">transformers.modeling_tf_outputs.TFBaseModelOutputWithPast</a> or <code>tuple(tf.Tensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a 
href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFBaseModelOutputWithPast">transformers.modeling_tf_outputs.TFBaseModelOutputWithPast</a> or a tuple of <code>tf.Tensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/opt#transformers.OPTConfig">OPTConfig</a>) and inputs.</p> <ul> <li> <p><strong>last_hidden_state</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>) — Sequence of hidden-states at the output of the last layer of the model.</p> <p>If <code>past_key_values</code> is used only the last hidden-state of the sequences of shape <code>(batch_size, 1, hidden_size)</code> is output.</p> </li> <li> <p><strong>past_key_values</strong> (<code>List[tf.Tensor]</code>, <em>optional</em>, returned when <code>use_cache=True</code> is passed or when <code>config.use_cache=True</code>) — List of <code>tf.Tensor</code> of length <code>config.n_layers</code>, with each tensor of shape <code>(2, batch_size, num_heads, sequence_length, embed_size_per_head)</code>).</p> <p>Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see <code>past_key_values</code> input) to speed up sequential decoding.</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>tf.Tensor</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>tf.Tensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-1na9y8q">The <a href="/docs/transformers/v4.34.0/en/model_doc/opt#transformers.TFOPTModel">TFOPTModel</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.TFOPTModel.call.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFOPTModel.call.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" 
role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, TFOPTModel <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> tensorflow <span class="hljs-keyword">as</span> tf <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"facebook/opt-350m"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = TFOPTModel.from_pretrained(<span class="hljs-string">"facebook/opt-350m"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(<span class="hljs-string">"Hello, my dog is cute"</span>, return_tensors=<span class="hljs-string">"tf"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model(inputs) <span class="hljs-meta">&gt;&gt;&gt; </span>last_hidden_states = outputs.last_hidden_state</pre></div></div></div></div> <h2 class="relative group"><a id="transformers.TFOPTForCausalLM" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFOPTForCausalLM"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 
## TFOPTForCausalLM

### class transformers.TFOPTForCausalLM

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/opt/modeling_tf_opt.py#L852)

( *args, **kwargs )

Parameters

- **config** ([OPTConfig](/docs/transformers/v4.34.0/en/model_doc/opt#transformers.OPTConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel.from_pretrained) method to load the model weights.

The OPT Model transformer with a language modeling head on top.

This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

TensorFlow models and layers in `transformers` accept two formats as input:

- having all inputs as keyword arguments (like PyTorch models), or
- having all inputs as a list, tuple or dict in the first positional argument.

The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers.
Because of this support, when using methods like <code>model.fit()</code> things should “just work” for you - just pass your inputs and labels in any format that <code>model.fit()</code> supports! If, however, you want to use the second format outside of Keras methods like <code>fit()</code> and <code>predict()</code>, such as when creating your own layers or models with the Keras <code>Functional</code> API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:</p> <ul data-svelte-h="svelte-15scerc"><li>a single Tensor with <code>input_ids</code> only and nothing else: <code>model(input_ids)</code></li> <li>a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: <code>model([input_ids, attention_mask])</code> or <code>model([input_ids, attention_mask, token_type_ids])</code></li> <li>a dictionary with one or several input Tensors associated to the input names given in the docstring: <code>model({"input_ids": input_ids, "token_type_ids": token_type_ids})</code></li></ul> <p data-svelte-h="svelte-1an3odd">Note that when creating models and layers with <a href="https://keras.io/guides/making_new_layers_and_models_via_subclassing/" rel="nofollow">subclassing</a> then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function!</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.TFOPTForCausalLM.call"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>call</span></h4> <a id="transformers.TFOPTForCausalLM.call" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.TFOPTForCausalLM.call"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 
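As a minimal sketch of the two most common input formats (reusing the `facebook/opt-350m` checkpoint and example sentence from the usage example further below; the checkpoint choice is only illustrative), the same encoded inputs can be passed either as keyword arguments or as a dictionary in the first positional argument:

```python
>>> from transformers import AutoTokenizer, TFOPTForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
>>> model = TFOPTForCausalLM.from_pretrained("facebook/opt-350m")
>>> encoded = tokenizer("Hello, my dog is cute", return_tensors="tf")

>>> # 1. all inputs as keyword arguments (PyTorch-style)
>>> outputs = model(input_ids=encoded["input_ids"], attention_mask=encoded["attention_mask"])

>>> # 2. all inputs as a dict in the first positional argument (Keras-style)
>>> outputs = model({"input_ids": encoded["input_ids"], "attention_mask": encoded["attention_mask"]})
```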
#### call

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/opt/modeling_tf_opt.py#L877)

`( input_ids: TFModelInputType | None = None, past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None, attention_mask: np.ndarray | tf.Tensor | None = None, position_ids: np.ndarray | tf.Tensor | None = None, head_mask: np.ndarray | tf.Tensor | None = None, inputs_embeds: np.ndarray | tf.Tensor | None = None, labels: np.ndarray | tf.Tensor | None = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, training: Optional[bool] = False, **kwargs )` → [transformers.modeling_tf_outputs.TFCausalLMOutputWithPast](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFCausalLMOutputWithPast) or `tuple(tf.Tensor)`

**Parameters**

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See `PreTrainedTokenizer.encode()` and `PreTrainedTokenizer.__call__()` for details. [What are input IDs?](../glossary#input-ids)
- **attention_mask** (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask)
- **head_mask** (`torch.Tensor` of shape `(num_hidden_layers, num_attention_heads)`, *optional*) — Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: 1 indicates the head is **not masked**, 0 indicates the head is **masked**.
- **past_key_values** (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`) — Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)` and 2 additional tensors of shape `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`. The two additional tensors are only required when the model is used as a decoder in a Sequence to Sequence model. Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding. If `past_key_values` are used, the user can optionally input only the last `input_ids` (those that don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)`.
- **inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **labels** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Labels for computing the masked language modeling loss. Indices should either be in `[0, ..., config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
- **use_cache** (`bool`, *optional*) — If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`).
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.

**Returns**

[transformers.modeling_tf_outputs.TFCausalLMOutputWithPast](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFCausalLMOutputWithPast) or `tuple(tf.Tensor)`

A [transformers.modeling_tf_outputs.TFCausalLMOutputWithPast](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFCausalLMOutputWithPast) or a tuple of `tf.Tensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([OPTConfig](/docs/transformers/v4.34.0/en/model_doc/opt#transformers.OPTConfig)) and inputs.

- **loss** (`tf.Tensor` of shape `(n,)`, *optional*, where n is the number of non-masked labels, returned when `labels` is provided) — Language modeling loss (for next-token prediction).
- **logits** (`tf.Tensor` of shape `(batch_size, sequence_length, config.vocab_size)`) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
- **past_key_values** (`List[tf.Tensor]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`) — List of `tf.Tensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size, num_heads, sequence_length, embed_size_per_head)`. Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
- **hidden_states** (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- **attentions** (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Example:

```python
>>> from transformers import AutoTokenizer, TFOPTForCausalLM
>>> import tensorflow as tf

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
>>> model = TFOPTForCausalLM.from_pretrained("facebook/opt-350m")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> outputs = model(inputs)
>>> logits = outputs.logits
```
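The cached-decoding behaviour described under `past_key_values` and `use_cache` above can be exercised directly. The following is a rough sketch (not part of the original reference; it reuses the `facebook/opt-350m` checkpoint and picks the next token greedily) of feeding only the newest token back in together with the cache instead of re-running the full sequence:

```python
>>> import tensorflow as tf
>>> from transformers import AutoTokenizer, TFOPTForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
>>> model = TFOPTForCausalLM.from_pretrained("facebook/opt-350m")

>>> inputs = tokenizer("Hello, my dog is", return_tensors="tf")
>>> outputs = model(inputs, use_cache=True)

>>> # greedily pick the next token from the logits of the last position (illustrative only)
>>> next_token = tf.argmax(outputs.logits[:, -1, :], axis=-1)[:, tf.newaxis]

>>> # extend the attention mask by one position to cover the new token
>>> attention_mask = tf.concat(
...     [inputs["attention_mask"], tf.ones_like(next_token, dtype=inputs["attention_mask"].dtype)],
...     axis=-1,
... )

>>> # pass only the new token plus the cached key/value states
>>> next_outputs = model(
...     input_ids=next_token,
...     attention_mask=attention_mask,
...     past_key_values=outputs.past_key_values,
...     use_cache=True,
... )
```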
## OPTForSequenceClassification

### class transformers.OPTForSequenceClassification

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/opt/modeling_opt.py#L1027)

`( config: OPTConfig )`

**Parameters**

- **config** ([OPTConfig](/docs/transformers/v4.34.0/en/model_doc/opt#transformers.OPTConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

The OPT Model transformer with a sequence classification head on top (linear layer).

[OPTForSequenceClassification](/docs/transformers/v4.34.0/en/model_doc/opt#transformers.OPTForSequenceClassification) uses the last token in order to do the classification, as other causal models (e.g. GPT-2) do.

Since it does classification on the last token, it requires knowing the position of the last token. If a `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (takes the last value in each row of the batch).

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.).

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
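As a rough sketch of how this last-token classification is used in practice (assuming the `facebook/opt-350m` checkpoint and `num_labels=2`; that checkpoint has no trained classification head, so the head below is randomly initialized and the prediction is only illustrative):

```python
>>> import torch
>>> from transformers import AutoTokenizer, OPTForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
>>> # num_labels controls the size of the (here randomly initialized) classification head
>>> model = OPTForSequenceClassification.from_pretrained("facebook/opt-350m", num_labels=2)

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> # one row of logits per sequence, pooled from the hidden state of the last non-padding token
>>> predicted_class_id = logits.argmax(dim=-1).item()
```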
href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/opt/modeling_opt.py#L1037" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_mask<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">past_key_values<span class="opacity-60">: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">inputs_embeds<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">labels<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">use_cache<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><code>transformers.modeling_outputs.SequenceClassifierOutputWithPast</code> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 10 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" 
data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.OPTForSequenceClassification.forward.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.OPTForSequenceClassification.forward.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.<p></p> <p>Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p> <p><a href="../glossary#input-ids">What are input IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.OPTForSequenceClassification.forward.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.OPTForSequenceClassification.forward.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>torch.Tensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a></p> <p>Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p> <p>If <code>past_key_values</code> is used, optionally only the last <code>decoder_input_ids</code> have to be input (see <code>past_key_values</code>).</p> <p>If you want to change padding behavior, you should read <code>modeling_opt._prepare_decoder_attention_mask</code> and modify to your needs. See diagram 1 in <a href="https://arxiv.org/abs/1910.13461" rel="nofollow">the paper</a> for more information on the default strategy.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.OPTForSequenceClassification.forward.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.OPTForSequenceClassification.forward.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>head_mask</strong> (<code>torch.Tensor</code> of shape <code>(encoder_layers, encoder_attention_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the attention modules in the encoder. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.OPTForSequenceClassification.forward.past_key_values" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.OPTForSequenceClassification.forward.past_key_values"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>past_key_values</strong> (<code>tuple(tuple(torch.FloatTensor))</code>, <em>optional</em>, returned when <code>use_cache=True</code> is passed or when <code>config.use_cache=True</code>) — Tuple of <code>tuple(torch.FloatTensor)</code> of length <code>config.n_layers</code>, with each tuple having 2 tensors of shape <code>(batch_size, num_heads, sequence_length, embed_size_per_head)</code>) and 2 additional tensors of shape <code>(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)</code>.<p></p> <p>Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see <code>past_key_values</code> input) to speed up sequential decoding.</p> <p>If <code>past_key_values</code> are used, the user can optionally input only the last <code>decoder_input_ids</code> (those that don’t have their past key value states given to this model) of shape <code>(batch_size, 1)</code> instead of all <code>decoder_input_ids</code> of shape <code>(batch_size, sequence_length)</code>.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.OPTForSequenceClassification.forward.inputs_embeds" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.OPTForSequenceClassification.forward.inputs_embeds"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 
28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>inputs_embeds</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>, <em>optional</em>) — Optionally, instead of passing <code>input_ids</code> you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert <code>input_ids</code> indices into associated vectors than the model’s internal embedding lookup matrix.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.OPTForSequenceClassification.forward.use_cache" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.OPTForSequenceClassification.forward.use_cache"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>use_cache</strong> (<code>bool</code>, <em>optional</em>) — If set to <code>True</code>, <code>past_key_values</code> key value states are returned and can be used to speed up decoding (see <code>past_key_values</code>).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.OPTForSequenceClassification.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.OPTForSequenceClassification.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. 
See <code>attentions</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.OPTForSequenceClassification.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.OPTForSequenceClassification.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. See <code>hidden_states</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.OPTForSequenceClassification.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.OPTForSequenceClassification.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.OPTForSequenceClassification.forward.labels" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.OPTForSequenceClassification.forward.labels"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 
256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>labels</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size,)</code>, <em>optional</em>) — Labels for computing the sequence classification/regression loss. Indices should be in <code>[0, ..., config.num_labels - 1]</code>. If <code>config.num_labels == 1</code> a regression loss is computed (Mean-Square loss), If <code>config.num_labels &gt; 1</code> a classification loss is computed (Cross-Entropy).</span></span> </li></ul> <div id="transformers.OPTForSequenceClassification.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><code>transformers.modeling_outputs.SequenceClassifierOutputWithPast</code> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <code>transformers.modeling_outputs.SequenceClassifierOutputWithPast</code> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/opt#transformers.OPTConfig">OPTConfig</a>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<code>torch.FloatTensor</code> of shape <code>(1,)</code>, <em>optional</em>, returned when <code>labels</code> is provided) — Classification (or regression if config.num_labels==1) loss.</p> </li> <li> <p><strong>logits</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, config.num_labels)</code>) — Classification (or regression if config.num_labels==1) scores (before SoftMax).</p> </li> <li> <p><strong>past_key_values</strong> (<code>tuple(tuple(torch.FloatTensor))</code>, <em>optional</em>, returned when <code>use_cache=True</code> is passed or when <code>config.use_cache=True</code>) — Tuple of <code>tuple(torch.FloatTensor)</code> of length <code>config.n_layers</code>, with each tuple having 2 tensors of shape <code>(batch_size, num_heads, sequence_length, embed_size_per_head)</code>)</p> <p>Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see <code>past_key_values</code> input) to speed up sequential decoding.</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when 
The [OPTForSequenceClassification](/docs/transformers/v4.34.0/en/model_doc/opt#transformers.OPTForSequenceClassification) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example of single-label classification:

```python
>>> import torch
>>> from transformers import AutoTokenizer, OPTForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("ArthurZ/opt-350m-dummy-sc")
>>> model = OPTForSequenceClassification.from_pretrained("ArthurZ/opt-350m-dummy-sc")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_class_id = logits.argmax().item()
>>> model.config.id2label[predicted_class_id]
'LABEL_0'

>>> # To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
>>> num_labels = len(model.config.id2label)
>>> model = OPTForSequenceClassification.from_pretrained("ArthurZ/opt-350m-dummy-sc", num_labels=num_labels)

>>> labels = torch.tensor([1])
>>> loss = model(**inputs, labels=labels).loss
>>> round(loss.item(), 2)
1.71
```
aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, OPTForSequenceClassification <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"ArthurZ/opt-350m-dummy-sc"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = OPTForSequenceClassification.from_pretrained(<span class="hljs-string">"ArthurZ/opt-350m-dummy-sc"</span>, problem_type=<span class="hljs-string">"multi_label_classification"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(<span class="hljs-string">"Hello, my dog is cute"</span>, return_tensors=<span class="hljs-string">"pt"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">with</span> torch.no_grad(): <span class="hljs-meta">... </span> logits = model(**inputs).logits <span class="hljs-meta">&gt;&gt;&gt; </span>predicted_class_ids = torch.arange(<span class="hljs-number">0</span>, logits.shape[-<span class="hljs-number">1</span>])[torch.sigmoid(logits).squeeze(dim=<span class="hljs-number">0</span>) &gt; <span class="hljs-number">0.5</span>] <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`</span> <span class="hljs-meta">&gt;&gt;&gt; </span>num_labels = <span class="hljs-built_in">len</span>(model.config.id2label) <span class="hljs-meta">&gt;&gt;&gt; </span>model = OPTForSequenceClassification.from_pretrained( <span class="hljs-meta">... </span> <span class="hljs-string">"ArthurZ/opt-350m-dummy-sc"</span>, num_labels=num_labels, problem_type=<span class="hljs-string">"multi_label_classification"</span> <span class="hljs-meta">... </span>) <span class="hljs-meta">&gt;&gt;&gt; </span>labels = torch.<span class="hljs-built_in">sum</span>( <span class="hljs-meta">... </span> torch.nn.functional.one_hot(predicted_class_ids[<span class="hljs-literal">None</span>, :].clone(), num_classes=num_labels), dim=<span class="hljs-number">1</span> <span class="hljs-meta">... 
</span>).to(torch.<span class="hljs-built_in">float</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>loss = model(**inputs, labels=labels).loss</pre></div></div></div></div> <h2 class="relative group"><a id="transformers.OPTForQuestionAnswering" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.OPTForQuestionAnswering"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-fxu3zp">OPTForQuestionAnswering</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.OPTForQuestionAnswering"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">OPTForQuestionAnswering</span></span></h3> <a id="transformers.OPTForQuestionAnswering" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.OPTForQuestionAnswering"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 
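As mentioned in the `inputs_embeds` parameter description above, the forward pass also accepts pre-computed embeddings in place of `input_ids`. Here is a minimal sketch of that usage, assuming the same dummy checkpoint as the examples above; it simply reuses the model's own input embedding layer, but any compatible embedding tensor could be supplied instead.

```python
>>> import torch
>>> from transformers import AutoTokenizer, OPTForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("ArthurZ/opt-350m-dummy-sc")
>>> model = OPTForSequenceClassification.from_pretrained("ArthurZ/opt-350m-dummy-sc")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> # look up the embeddings ourselves instead of letting the model do it from `input_ids`
>>> embeds = model.get_input_embeddings()(inputs.input_ids)

>>> with torch.no_grad():
...     logits = model(inputs_embeds=embeds, attention_mask=inputs.attention_mask).logits
```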
## OPTForQuestionAnswering

### class transformers.OPTForQuestionAnswering

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/opt/modeling_opt.py#L1149)

( config: OPTConfig )

Parameters

- **config** ([OPTConfig](/docs/transformers/v4.34.0/en/model_doc/opt#transformers.OPTConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

The OPT Model transformer with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute `span start logits` and `span end logits`).

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

#### forward

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/opt/modeling_opt.py#L1158)

( input_ids: typing.Optional[torch.LongTensor] = None, attention_mask: typing.Optional[torch.FloatTensor] = None, head_mask: typing.Optional[torch.FloatTensor] = None, past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None, inputs_embeds: typing.Optional[torch.FloatTensor] = None, start_positions: typing.Optional[torch.LongTensor] = None, end_positions: typing.Optional[torch.LongTensor] = None, use_cache: typing.Optional[bool] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → [transformers.modeling_outputs.QuestionAnsweringModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.QuestionAnsweringModelOutput) or `tuple(torch.FloatTensor)`

Parameters
- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.

  Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.__call__()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details.

  [What are input IDs?](../glossary#input-ids)

- **attention_mask** (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:

  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.

  [What are attention masks?](../glossary#attention-mask)

  Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.__call__()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details.

  If `past_key_values` is used, optionally only the last `decoder_input_ids` have to be input (see `past_key_values`).

  If you want to change padding behavior, you should read `modeling_opt._prepare_decoder_attention_mask` and modify it to your needs. See diagram 1 in [the paper](https://arxiv.org/abs/1910.13461) for more information on the default strategy.

- **head_mask** (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in `[0, 1]`:

  - 1 indicates the head is **not masked**,
  - 0 indicates the head is **masked**.

- **past_key_values** (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`) — Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)` and 2 additional tensors of shape `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`.

  Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.

  If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)`.

- **inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **use_cache** (`bool`, *optional*) — If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`).
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.
- **start_positions** (`torch.LongTensor` of shape `(batch_size,)`, *optional*) — Labels for the position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (`sequence_length`). Positions outside of the sequence are not taken into account for computing the loss.
- **end_positions** (`torch.LongTensor` of shape `(batch_size,)`, *optional*) — Labels for the position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (`sequence_length`). Positions outside of the sequence are not taken into account for computing the loss.

Returns

[transformers.modeling_outputs.QuestionAnsweringModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.QuestionAnsweringModelOutput) or `tuple(torch.FloatTensor)`

A [transformers.modeling_outputs.QuestionAnsweringModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.QuestionAnsweringModelOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([OPTConfig](/docs/transformers/v4.34.0/en/model_doc/opt#transformers.OPTConfig)) and inputs.

- **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
- **start_logits** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`) — Span-start scores (before SoftMax).
- **end_logits** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`) — Span-end scores (before SoftMax).
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.

  Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`.

  Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The [OPTForQuestionAnswering](/docs/transformers/v4.34.0/en/model_doc/opt#transformers.OPTForQuestionAnswering) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example:

```python
>>> from transformers import AutoTokenizer, OPTForQuestionAnswering
>>> import torch

>>> torch.manual_seed(4)
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")

>>> # note: we are loading an OPTForQuestionAnswering from the hub here,
>>> # so the head will be randomly initialized, hence the predictions will be random
>>> model = OPTForQuestionAnswering.from_pretrained("facebook/opt-350m")

>>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"

>>> inputs = tokenizer(question, text, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> answer_start_index = outputs.start_logits.argmax()
>>> answer_end_index = outputs.end_logits.argmax()

>>> answer_offset = len(tokenizer(question)[0])

>>> predict_answer_tokens = inputs.input_ids[
...     0, answer_offset + answer_start_index : answer_offset + answer_end_index + 1
... ]
>>> predicted = tokenizer.decode(predict_answer_tokens)
>>> predicted
' a nice puppet'
```
class="font-medium">transformers.</span><span class="font-semibold">FlaxOPTModel</span></span></h3> <a id="transformers.FlaxOPTModel" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.FlaxOPTModel"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/opt/modeling_flax_opt.py#L690" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60">: OPTConfig</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_shape<span class="opacity-60">: typing.Tuple[int] = (1, 1)</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">seed<span class="opacity-60">: int = 0</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">dtype<span class="opacity-60">: dtype = &lt;class 'jax.numpy.float32'&gt;</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">_do_init<span class="opacity-60">: bool = True</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> </div></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.FlaxOPTModel.__call__"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 
15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>__call__</span></h4> <a id="transformers.FlaxOPTModel.__call__" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.FlaxOPTModel.__call__"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/opt/modeling_flax_opt.py#L583" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60">: Array</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[jax.Array] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">position_ids<span class="opacity-60">: typing.Optional[jax.Array] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">params<span class="opacity-60">: dict = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">past_key_values<span class="opacity-60">: dict = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: 
typing.Optional[bool] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">dropout_rng<span class="opacity-60">: PRNGKey = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">deterministic<span class="opacity-60">: bool = True</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxBaseModelOutput">transformers.modeling_flax_outputs.FlaxBaseModelOutput</a> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand undefined parameters</button></div> <div id="transformers.FlaxOPTModel.__call__.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxBaseModelOutput">transformers.modeling_flax_outputs.FlaxBaseModelOutput</a> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxBaseModelOutput">transformers.modeling_flax_outputs.FlaxBaseModelOutput</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/opt#transformers.OPTConfig">OPTConfig</a>) and inputs.</p> <ul> <li> <p><strong>last_hidden_state</strong> (<code>jnp.ndarray</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>) — Sequence of hidden-states at the output of the last layer of the model.</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(jnp.ndarray)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>jnp.ndarray</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(jnp.ndarray)</code>, <em>optional</em>, returned when 
<code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>jnp.ndarray</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <div class="relative group rounded-md"><a id="transformers.FlaxOPTModel.__call__.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxOPTModel.__call__.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, FlaxOPTModel <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"facebook/opt-350m"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = FlaxOPTModel.from_pretrained(<span class="hljs-string">"facebook/opt-350m"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(<span class="hljs-string">"Hello, my dog is cute"</span>, return_tensors=<span class="hljs-string">"jax"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model(**inputs) <span class="hljs-meta">&gt;&gt;&gt; </span>last_hidden_states = outputs.last_hidden_state</pre></div></div></div></div> <h2 class="relative group"><a 
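The `hidden_states` and `attentions` tuples described above are only populated when explicitly requested. A minimal sketch, reusing the `tokenizer`, `model`, and `inputs` from the example above, shows how to ask for them:

```
>>> # request the optional outputs on top of the last hidden state
>>> outputs = model(**inputs, output_hidden_states=True, output_attentions=True)

>>> # one hidden-state tensor for the embeddings plus one per layer,
>>> # and one attention tensor per layer
>>> num_hidden_states = len(outputs.hidden_states)
>>> num_attention_maps = len(outputs.attentions)
```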
id="transformers.FlaxOPTForCausalLM" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxOPTForCausalLM"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1ca28m6">FlaxOPTForCausalLM</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.FlaxOPTForCausalLM"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">FlaxOPTForCausalLM</span></span></h3> <a id="transformers.FlaxOPTForCausalLM" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.FlaxOPTForCausalLM"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" 
href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/opt/modeling_flax_opt.py#L763" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60">: OPTConfig</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_shape<span class="opacity-60">: typing.Tuple[int] = (1, 1)</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">seed<span class="opacity-60">: int = 0</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">dtype<span class="opacity-60">: dtype = &lt;class 'jax.numpy.float32'&gt;</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">_do_init<span class="opacity-60">: bool = True</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxOPTForCausalLM.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxOPTForCausalLM.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/opt#transformers.OPTConfig">OPTConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. 
Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxOPTForCausalLM.dtype" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxOPTForCausalLM.dtype"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>dtype</strong> (<code>jax.numpy.dtype</code>, <em>optional</em>, defaults to <code>jax.numpy.float32</code>) — The data type of the computation. Can be one of <code>jax.numpy.float32</code>, <code>jax.numpy.float16</code> (on GPUs) and <code>jax.numpy.bfloat16</code> (on TPUs).<p></p> <p>This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given <code>dtype</code>.</p> <p><strong>Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.</strong></p> <p>If you wish to change the dtype of the model parameters, see <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_fp16">to_fp16()</a> and <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_bf16">to_bf16()</a>.</p></span></span> </li></ul> </div></div> <p data-svelte-h="svelte-lbc411">OPT Model with a language modeling head on top (linear layer with weights tied to the input embeddings) e.g for autoregressive tasks.</p> <p data-svelte-h="svelte-1b68hcc">This model inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel">FlaxPreTrainedModel</a>. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)</p> <p data-svelte-h="svelte-idybz1">This model is also a Flax Linen <a href="https://flax.readthedocs.io/en/latest/_autosummary/flax.nn.module.html" rel="nofollow">flax.nn.Module</a> subclass. 
Use it as a regular Flax Module and refer to the Flax documentation for all matter related to general usage and behavior.</p> <p data-svelte-h="svelte-1pplc4a">Finally, this model supports inherent JAX features such as:</p> <ul data-svelte-h="svelte-1w7z84m"><li><a href="https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit" rel="nofollow">Just-In-Time (JIT) compilation</a></li> <li><a href="https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation" rel="nofollow">Automatic Differentiation</a></li> <li><a href="https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap" rel="nofollow">Vectorization</a></li> <li><a href="https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap" rel="nofollow">Parallelization</a></li></ul> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.FlaxOPTForCausalLM.__call__"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>__call__</span></h4> <a id="transformers.FlaxOPTForCausalLM.__call__" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.FlaxOPTForCausalLM.__call__"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/opt/modeling_flax_opt.py#L583" target="_blank"><span 
data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60">: Array</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[jax.Array] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">position_ids<span class="opacity-60">: typing.Optional[jax.Array] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">params<span class="opacity-60">: dict = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">past_key_values<span class="opacity-60">: dict = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">dropout_rng<span class="opacity-60">: PRNGKey = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">deterministic<span class="opacity-60">: bool = True</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxBaseModelOutput">transformers.modeling_flax_outputs.FlaxBaseModelOutput</a> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand undefined parameters</button></div> <div id="transformers.FlaxOPTForCausalLM.__call__.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a 
href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxBaseModelOutput">transformers.modeling_flax_outputs.FlaxBaseModelOutput</a> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxBaseModelOutput">transformers.modeling_flax_outputs.FlaxBaseModelOutput</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/opt#transformers.OPTConfig">OPTConfig</a>) and inputs.</p> <ul> <li> <p><strong>last_hidden_state</strong> (<code>jnp.ndarray</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>) — Sequence of hidden-states at the output of the last layer of the model.</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(jnp.ndarray)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>jnp.ndarray</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(jnp.ndarray)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>jnp.ndarray</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <div class="relative group rounded-md"><a id="transformers.FlaxOPTForCausalLM.__call__.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxOPTForCausalLM.__call__.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" 
focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, FlaxOPTForCausalLM <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"facebook/opt-350m"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = FlaxOPTForCausalLM.from_pretrained(<span class="hljs-string">"facebook/opt-350m"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(<span class="hljs-string">"Hello, my dog is cute"</span>, return_tensors=<span class="hljs-string">"np"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model(**inputs) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># retrieve logts for next token</span> <span class="hljs-meta">&gt;&gt;&gt; </span>next_token_logits = outputs.logits[:, -<span class="hljs-number">1</span>]</pre></div></div></div></div> <p></p> <div id="svelte-announcer" aria-live="assertive" aria-atomic="true" style="position: absolute; left: 0px; top: 0px; clip: rect(0px, 0px, 0px, 0px); clip-path: inset(50%); overflow: hidden; white-space: nowrap; width: 1px; height: 1px;"></div></div> <div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/open-llama" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>Open-Llama</a> <a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/pegasus" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">Pegasus<span class="ml-2 translate-y-px">→</span></a></div></div></div> <div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" 
data-props="{&quot;chapter&quot;:{&quot;title&quot;:&quot;OPT&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;opt&quot;,&quot;url&quot;:&quot;#opt&quot;,&quot;sections&quot;:[{&quot;title&quot;:&quot;Overview&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;overview&quot;,&quot;url&quot;:&quot;#overview&quot;},{&quot;title&quot;:&quot;Resources&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;resources&quot;,&quot;url&quot;:&quot;#resources&quot;},{&quot;title&quot;:&quot;OPTConfig&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.OPTConfig&quot;,&quot;url&quot;:&quot;#transformers.OPTConfig&quot;},{&quot;title&quot;:&quot;OPTModel&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.OPTModel&quot;,&quot;url&quot;:&quot;#transformers.OPTModel&quot;},{&quot;title&quot;:&quot;OPTForCausalLM&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.OPTForCausalLM&quot;,&quot;url&quot;:&quot;#transformers.OPTForCausalLM&quot;},{&quot;title&quot;:&quot;TFOPTModel&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.TFOPTModel&quot;,&quot;url&quot;:&quot;#transformers.TFOPTModel&quot;},{&quot;title&quot;:&quot;TFOPTForCausalLM&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.TFOPTForCausalLM&quot;,&quot;url&quot;:&quot;#transformers.TFOPTForCausalLM&quot;},{&quot;title&quot;:&quot;OPTForSequenceClassification&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.OPTForSequenceClassification&quot;,&quot;url&quot;:&quot;#transformers.OPTForSequenceClassification&quot;},{&quot;title&quot;:&quot;OPTForQuestionAnswering&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.OPTForQuestionAnswering&quot;,&quot;url&quot;:&quot;#transformers.OPTForQuestionAnswering&quot;},{&quot;title&quot;:&quot;FlaxOPTModel&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.FlaxOPTModel&quot;,&quot;url&quot;:&quot;#transformers.FlaxOPTModel&quot;},{&quot;title&quot;:&quot;FlaxOPTForCausalLM&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.FlaxOPTForCausalLM&quot;,&quot;url&quot;:&quot;#transformers.FlaxOPTForCausalLM&quot;}]}}" data-target="SubSideMenu"><nav class="hidden h-screen w-[270px] flex-none flex-col space-y-3 overflow-y-auto break-words border-l pt-24 pl-6 pr-10 pb-16 text-sm lg:flex 2xl:w-[305px]"><a href="#opt" class=" text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-opt">OPT</a> <a href="#overview" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-overview"><wbr>Overview</a> <a href="#resources" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-resources"><wbr>Resources</a> <a href="#transformers.OPTConfig" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.OPTConfig">OPT<wbr>Config</a> <a href="#transformers.OPTModel" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.OPTModel">OPT<wbr>Model</a> <a href="#transformers.OPTForCausalLM" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.OPTForCausalLM">OPT<wbr>For<wbr>CausalLM</a> <a href="#transformers.TFOPTModel" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" 
id="nav-transformers.TFOPTModel">TFOPT<wbr>Model</a> <a href="#transformers.TFOPTForCausalLM" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.TFOPTForCausalLM">TFOPT<wbr>For<wbr>CausalLM</a> <a href="#transformers.OPTForSequenceClassification" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.OPTForSequenceClassification">OPT<wbr>For<wbr>Sequence<wbr>Classification</a> <a href="#transformers.OPTForQuestionAnswering" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.OPTForQuestionAnswering">OPT<wbr>For<wbr>Question<wbr>Answering</a> <a href="#transformers.FlaxOPTModel" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.FlaxOPTModel"><wbr>FlaxOPT<wbr>Model</a> <a href="#transformers.FlaxOPTForCausalLM" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.FlaxOPTForCausalLM"><wbr>FlaxOPT<wbr>For<wbr>CausalLM</a> </nav></div></div></div> <div id="doc-footer"></div></main> </div> <script> import("/front/build/kube-5e23f38/index.js"); window.moonSha = "kube-5e23f38/"; window.hubConfig = JSON.parse(`{"features":{"signupDisabled":false},"sshGitUrl":"git@hf.co","moonHttpUrl":"https://huggingface.co","captchaApiKey":"bd5f2066-93dc-4bdd-a64b-a24646ca3859","stripePublicKey":"pk_live_x2tdjFXBCvXo2FFmMybezpeM00J6gPCAAc","environment":"production","userAgent":"HuggingFace (production)"}`); </script> <!-- Stripe --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://js.stripe.com/v3/"; script.async = true; document.head.appendChild(script); } </script> <!-- Google analytics v4 --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL"; script.async = true; document.head.appendChild(script); window.dataLayer = window.dataLayer || []; function gtag() { if (window.dataLayer !== undefined) { window.dataLayer.push(arguments); } } gtag("js", new Date()); gtag("config", "G-8Q63TH4CSL", { page_path: "/docs/transformers/v4.34.0/en/model_doc/opt" }); /// ^ See https://developers.google.com/analytics/devguides/collection/gtagjs/pages gtag("consent", "default", { ad_storage: "denied", analytics_storage: "denied" }); /// ^ See https://developers.google.com/tag-platform/gtagjs/reference#consent /// TODO: ask the user for their consent and update this with gtag('consent', 'update') } </script> <!-- Google Analytics v3 (deprecated) --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { (function (i, s, o, g, r, a, m) { i["GoogleAnalyticsObject"] = r; (i[r] = i[r] || function () { (i[r].q = i[r].q || []).push(arguments); }), (i[r].l = 1 * new Date()); (a = s.createElement(o)), (m = s.getElementsByTagName(o)[0]); a.async = 1; a.src = g; m.parentNode.insertBefore(a, m); })(window, document, "script", "https://www.google-analytics.com/analytics.js", "ganalytics"); ganalytics("create", "UA-83738774-2", "auto"); ganalytics("send", "pageview", "/docs/transformers/v4.34.0/en/model_doc/opt"); } </script> <iframe name="__privateStripeMetricsController2650" frameborder="0" allowtransparency="true" 
scrolling="no" role="presentation" allow="payment *" src="https://js.stripe.com/v3/m-outer-27c67c0d52761104439bb051c7856ab1.html#url=https%3A%2F%2Fhuggingface.co%2Fdocs%2Ftransformers%2Fv4.34.0%2Fen%2Fmodel_doc%2Fopt&amp;title=OPT&amp;referrer=&amp;muid=NA&amp;sid=NA&amp;version=6&amp;preview=false" aria-hidden="true" tabindex="-1" style="border: none !important; margin: 0px !important; padding: 0px !important; width: 1px !important; min-width: 100% !important; overflow: hidden !important; display: block !important; visibility: hidden !important; position: fixed !important; height: 1px !important; pointer-events: none !important; user-select: none !important;"></iframe></body></html>
2023-10-05T13:33:49.008Z
REALM
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmConfig
# REALM

## Overview

The REALM model was proposed in [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang. It is a retrieval-augmented language model that first retrieves documents from a textual knowledge corpus and then uses the retrieved documents to answer questions.

The abstract from the paper is the following:

_Language model pre-training has been shown to capture a surprising amount of world knowledge, crucial for NLP tasks such as question answering. However, this knowledge is stored implicitly in the parameters of a neural network, requiring ever-larger networks to cover more facts. To capture knowledge in a more modular and interpretable way, we augment language model pre-training with a latent knowledge retriever, which allows the model to retrieve and attend over documents from a large corpus such as Wikipedia, used during pre-training, fine-tuning and inference. For the first time, we show how to pre-train such a knowledge retriever in an unsupervised manner, using masked language modeling as the learning signal and backpropagating through a retrieval step that considers millions of documents. We demonstrate the effectiveness of Retrieval-Augmented Language Model pre-training (REALM) by fine-tuning on the challenging task of Open-domain Question Answering (Open-QA). We compare against state-of-the-art models for both explicit and implicit knowledge storage on three popular Open-QA benchmarks, and find that we outperform all previous methods by a significant margin (4-16% absolute accuracy), while also providing qualitative benefits such as interpretability and modularity._

This model was contributed by [qqaatw](https://huggingface.co/qqaatw). The original code can be found [here](https://github.com/google-research/language/tree/master/language/realm).

## RealmConfig

### class transformers.RealmConfig

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/realm/configuration_realm.py#L44)

( vocab\_size = 30522 hidden\_size = 768 retriever\_proj\_size = 128 num\_hidden\_layers = 12 num\_attention\_heads = 12 num\_candidates = 8 intermediate\_size = 3072 hidden\_act = 'gelu\_new' hidden\_dropout\_prob = 0.1 attention\_probs\_dropout\_prob = 0.1 max\_position\_embeddings = 512 type\_vocab\_size = 2 initializer\_range = 0.02 layer\_norm\_eps = 1e-12 span\_hidden\_size = 256 max\_span\_width = 10 reader\_layer\_norm\_eps = 0.001 reader\_beam\_size = 5 reader\_seq\_len = 320 num\_block\_records = 13353718 searcher\_beam\_size = 5000 pad\_token\_id = 1 bos\_token\_id = 0 eos\_token\_id = 2 \*\*kwargs )

Parameters

- **vocab\_size** (`int`, _optional_, defaults to 30522) — Vocabulary size of the REALM model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [RealmEmbedder](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmEmbedder), [RealmScorer](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmScorer), [RealmKnowledgeAugEncoder](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmKnowledgeAugEncoder), or [RealmReader](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmReader).
- **hidden\_size** (`int`, _optional_, defaults to 768) — Dimension of the encoder layers and the pooler layer.
- **retriever\_proj\_size** (`int`, _optional_, defaults to 128) — Dimension of the retriever (embedder) projection.
- **num\_hidden\_layers** (`int`, _optional_, defaults to 12) — Number of hidden layers in the Transformer encoder.
- **num\_attention\_heads** (`int`, _optional_, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder.
- **num\_candidates** (`int`, _optional_, defaults to 8) — Number of candidates inputted to the RealmScorer or RealmKnowledgeAugEncoder.
- **intermediate\_size** (`int`, _optional_, defaults to 3072) — Dimension of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
- **hidden\_act** (`str` or `function`, _optional_, defaults to `"gelu_new"`) — The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported.
- **hidden\_dropout\_prob** (`float`, _optional_, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
- **attention\_probs\_dropout\_prob** (`float`, _optional_, defaults to 0.1) — The dropout ratio for the attention probabilities.
- **max\_position\_embeddings** (`int`, _optional_, defaults to 512) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
- **type\_vocab\_size** (`int`, _optional_, defaults to 2) — The vocabulary size of the `token_type_ids` passed when calling [RealmEmbedder](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmEmbedder), [RealmScorer](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmScorer), [RealmKnowledgeAugEncoder](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmKnowledgeAugEncoder), or [RealmReader](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmReader).
- **initializer\_range** (`float`, _optional_, defaults to 0.02) — The standard deviation of the truncated\_normal\_initializer for initializing all weight matrices.
- **layer\_norm\_eps** (`float`, _optional_, defaults to 1e-12) — The epsilon used by the layer normalization layers.
- **span\_hidden\_size** (`int`, _optional_, defaults to 256) — Dimension of the reader’s spans.
- **max\_span\_width** (`int`, _optional_, defaults to 10) — Max span width of the reader.
- **reader\_layer\_norm\_eps** (`float`, _optional_, defaults to 1e-3) — The epsilon used by the reader’s layer normalization layers.
- **reader\_beam\_size** (`int`, _optional_, defaults to 5) — Beam size of the reader.
- **reader\_seq\_len** (`int`, _optional_, defaults to 320 (288 + 32)) — Maximum sequence length of the reader.
- **num\_block\_records** (`int`, _optional_, defaults to 13353718) — Number of block records.
- **searcher\_beam\_size** (`int`, _optional_, defaults to 5000) — Beam size of the searcher. Note that when eval mode is enabled, _searcher\_beam\_size_ will be the same as _reader\_beam\_size_.

This is the configuration class to store the configuration of

1. [RealmEmbedder](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmEmbedder)
2. [RealmScorer](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmScorer)
3. [RealmKnowledgeAugEncoder](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmKnowledgeAugEncoder)
4. [RealmRetriever](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmRetriever)
5. [RealmReader](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmReader)
6. [RealmForOpenQA](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmForOpenQA)

It is used to instantiate a REALM model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the REALM [google/realm-cc-news-pretrained-embedder](https://huggingface.co/google/realm-cc-news-pretrained-embedder) architecture.

Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information.

Example:

```
>>> from transformers import RealmConfig, RealmEmbedder

>>> # Initializing a REALM configuration
>>> configuration = RealmConfig()

>>> # Initializing a model (with random weights) from the configuration
>>> model = RealmEmbedder(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```

## RealmTokenizer

### class transformers.RealmTokenizer

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/realm/tokenization_realm.py#L95)

( vocab\_file do\_lower\_case = True do\_basic\_tokenize = True never\_split = None unk\_token = '\[UNK\]' sep\_token = '\[SEP\]' pad\_token = '\[PAD\]' cls\_token = '\[CLS\]' mask\_token = '\[MASK\]' tokenize\_chinese\_chars = True strip\_accents = None \*\*kwargs )

Parameters

- **vocab\_file** (`str`) — File containing the vocabulary.
- **do\_lower\_case** (`bool`, _optional_, defaults to `True`) — Whether or not to lowercase the input when tokenizing.
- **do\_basic\_tokenize** (`bool`, _optional_, defaults to `True`) — Whether or not to do basic tokenization before WordPiece.
- **never\_split** (`Iterable`, _optional_) — Collection of tokens which will never be split during tokenization. Only has an effect when `do_basic_tokenize=True`.
- **unk\_token** (`str`, _optional_, defaults to `"[UNK]"`) — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
- **sep\_token** (`str`, _optional_, defaults to `"[SEP]"`) — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.
- **pad\_token** (`str`, _optional_, defaults to `"[PAD]"`) — The token used for padding, for example when batching sequences of different lengths.
- **cls\_token** (`str`, _optional_, defaults to `"[CLS]"`) — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.
- **mask\_token** (`str`, _optional_, defaults to `"[MASK]"`) — The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.
- **tokenize\_chinese\_chars** (`bool`, _optional_, defaults to `True`) — Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see this [issue](https://github.com/huggingface/transformers/issues/328)).
- **strip\_accents** (`bool`, _optional_) — Whether or not to strip all accents. If this option is not specified, then it will be determined by the value for `lowercase` (as in the original BERT).

Construct a REALM tokenizer.
[RealmTokenizer](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmTokenizer) is identical to [BertTokenizer](/docs/transformers/v4.34.0/en/model_doc/bert#transformers.BertTokenizer) and runs end-to-end tokenization: punctuation splitting and wordpiece. This tokenizer inherits from [PreTrainedTokenizer](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer) which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. #### build\_inputs\_with\_special\_tokens [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/realm/tokenization_realm.py#L300) ( token\_ids\_0: typing.List\[int\] token\_ids\_1: typing.Optional\[typing.List\[int\]\] = None ) → `List[int]` Parameters - **token\_ids\_0** (`List[int]`) — List of IDs to which the special tokens will be added. - **token\_ids\_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs. List of [input IDs](../glossary#input-ids) with the appropriate special tokens. Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. A REALM sequence has the following format: - single sequence: `[CLS] X [SEP]` - pair of sequences: `[CLS] A [SEP] B [SEP]` #### get\_special\_tokens\_mask [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/realm/tokenization_realm.py#L325) ( token\_ids\_0: typing.List\[int\] token\_ids\_1: typing.Optional\[typing.List\[int\]\] = None already\_has\_special\_tokens: bool = False ) → `List[int]` Parameters - **token\_ids\_0** (`List[int]`) — List of IDs. - **token\_ids\_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs. - **already\_has\_special\_tokens** (`bool`, _optional_, defaults to `False`) — Whether or not the token list is already formatted with special tokens for the model. A list of integers in the range \[0, 1\]: 1 for a special token, 0 for a sequence token. Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer `prepare_for_model` method. #### create\_token\_type\_ids\_from\_sequences [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/realm/tokenization_realm.py#L353) ( token\_ids\_0: typing.List\[int\] token\_ids\_1: typing.Optional\[typing.List\[int\]\] = None ) → `List[int]` Parameters - **token\_ids\_0** (`List[int]`) — List of IDs. - **token\_ids\_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs. List of [token type IDs](../glossary#token-type-ids) according to the given sequence(s). Create a mask from the two sequences passed to be used in a sequence-pair classification task. A REALM sequence pair mask has the following format: ``` 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 | first sequence | second sequence | ``` If `token_ids_1` is `None`, this method only returns the first portion of the mask (0s). 
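
As a concrete illustration of the helper methods above, the following sketch builds the `[CLS] A [SEP] B [SEP]` layout and the matching token type mask directly from two tokenized sequences. The checkpoint name follows the other examples on this page and the sentences are arbitrary:

```
>>> from transformers import RealmTokenizer

>>> tokenizer = RealmTokenizer.from_pretrained("google/realm-cc-news-pretrained-encoder")

>>> # Tokenize two sequences without special tokens
>>> ids_a = tokenizer("Who wrote Hamlet?", add_special_tokens=False)["input_ids"]
>>> ids_b = tokenizer("Hamlet is a tragedy by William Shakespeare.", add_special_tokens=False)["input_ids"]

>>> # [CLS] A [SEP] B [SEP]
>>> pair_ids = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)

>>> # 0s for the first segment (including its special tokens), 1s for the second
>>> token_type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)
```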
#### save\_vocabulary [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/realm/tokenization_realm.py#L382) ( save\_directory: str filename\_prefix: typing.Optional\[str\] = None ) #### batch\_encode\_candidates [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/realm/tokenization_realm.py#L227) ( text \*\*kwargs ) → [BatchEncoding](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.BatchEncoding) Parameters - **text** (`List[List[str]]`) — The batch of sequences to be encoded. Each sequence must be in this format: (batch\_size, num\_candidates, text). - **text\_pair** (`List[List[str]]`, _optional_) — The batch of sequences to be encoded. Each sequence must be in this format: (batch\_size, num\_candidates, text). \*\*kwargs — Keyword arguments of the **call** method. Encoded text or text pair. Encode a batch of text or text pair. This method is similar to regular **call** method but has the following differences: 1. Handle additional num\_candidate axis. (batch\_size, num\_candidates, text) 2. Always pad the sequences to _max\_length_. 3. Must specify _max\_length_ in order to stack packs of candidates into a batch. - single sequence: `[CLS] X [SEP]` - pair of sequences: `[CLS] A [SEP] B [SEP]` Example: ``` >>> from transformers import RealmTokenizer >>> >>> text = [["Hello world!", "Nice to meet you!"], ["The cute cat.", "The adorable dog."]] >>> tokenizer = RealmTokenizer.from_pretrained("google/realm-cc-news-pretrained-encoder") >>> tokenized_text = tokenizer.batch_encode_candidates(text, max_length=10, return_tensors="pt") ``` ## RealmTokenizerFast ### class transformers.RealmTokenizerFast [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/realm/tokenization_realm_fast.py#L102) ( vocab\_file = None tokenizer\_file = None do\_lower\_case = True unk\_token = '\[UNK\]' sep\_token = '\[SEP\]' pad\_token = '\[PAD\]' cls\_token = '\[CLS\]' mask\_token = '\[MASK\]' tokenize\_chinese\_chars = True strip\_accents = None \*\*kwargs ) Parameters - **vocab\_file** (`str`) — File containing the vocabulary. - **do\_lower\_case** (`bool`, _optional_, defaults to `True`) — Whether or not to lowercase the input when tokenizing. - **unk\_token** (`str`, _optional_, defaults to `"[UNK]"`) — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. - **sep\_token** (`str`, _optional_, defaults to `"[SEP]"`) — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. - **pad\_token** (`str`, _optional_, defaults to `"[PAD]"`) — The token used for padding, for example when batching sequences of different lengths. - **cls\_token** (`str`, _optional_, defaults to `"[CLS]"`) — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. - **mask\_token** (`str`, _optional_, defaults to `"[MASK]"`) — The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict. 
- **clean\_text** (`bool`, _optional_, defaults to `True`) — Whether or not to clean the text before tokenization by removing any control characters and replacing all whitespaces by the classic one.
- **tokenize\_chinese\_chars** (`bool`, _optional_, defaults to `True`) — Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see [this issue](https://github.com/huggingface/transformers/issues/328)).
- **strip\_accents** (`bool`, _optional_) — Whether or not to strip all accents. If this option is not specified, then it will be determined by the value for `lowercase` (as in the original BERT).
- **wordpieces\_prefix** (`str`, _optional_, defaults to `"##"`) — The prefix for subwords.

Construct a “fast” REALM tokenizer (backed by HuggingFace’s _tokenizers_ library). Based on WordPiece.

[RealmTokenizerFast](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmTokenizerFast) is identical to [BertTokenizerFast](/docs/transformers/v4.34.0/en/model_doc/bert#transformers.BertTokenizerFast) and runs end-to-end tokenization: punctuation splitting and wordpiece.

This tokenizer inherits from [PreTrainedTokenizerFast](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast) which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.

#### batch\_encode\_candidates

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/realm/tokenization_realm_fast.py#L193)

( text \*\*kwargs ) → [BatchEncoding](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.BatchEncoding)

Parameters

- **text** (`List[List[str]]`) — The batch of sequences to be encoded. Each sequence must be in this format: (batch\_size, num\_candidates, text).
- **text\_pair** (`List[List[str]]`, _optional_) — The batch of sequences to be encoded. Each sequence must be in this format: (batch\_size, num\_candidates, text).
- \*\*kwargs — Keyword arguments of the `__call__` method.

Encoded text or text pair.

Encode a batch of text or text pairs. This method is similar to the regular `__call__` method but has the following differences:

1. It handles an additional num\_candidates axis: (batch\_size, num\_candidates, text).
2. It always pads the sequences to _max\_length_.
3. _max\_length_ must be specified in order to stack packs of candidates into a batch.

- single sequence: `[CLS] X [SEP]`
- pair of sequences: `[CLS] A [SEP] B [SEP]`

Example:

```
>>> from transformers import RealmTokenizerFast

>>> # Each inner list holds the candidate texts for one example in the batch
>>> text = [["Hello world!", "Nice to meet you!"], ["The cute cat.", "The adorable dog."]]
>>> tokenizer = RealmTokenizerFast.from_pretrained("google/realm-cc-news-pretrained-encoder")
>>> tokenized_text = tokenizer.batch_encode_candidates(text, max_length=10, return_tensors="pt")
```

## RealmRetriever

### class transformers.RealmRetriever

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/realm/retrieval_realm.py#L72)

( block\_records tokenizer )

Parameters

- **block\_records** (`np.ndarray`) — A numpy array which contains evidence texts.
- **tokenizer** ([RealmTokenizer](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmTokenizer)) — The tokenizer to encode retrieved texts.

The retriever of REALM, outputting the retrieved evidence block, whether the block has answers, and the answer positions.

#### block\_has\_answer

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/realm/retrieval_realm.py#L129)

( concat\_inputs answer\_ids )

Check whether the retrieved blocks have answers.

## RealmEmbedder

### class transformers.RealmEmbedder

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/realm/modeling_realm.py#L1151)

( config )

Parameters

- **config** ([RealmConfig](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

The embedder of REALM, outputting the projected score that is used to calculate the relevance score.

This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

#### forward

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/realm/modeling_realm.py#L1167)

( input\_ids: typing.Optional\[torch.LongTensor\] = None attention\_mask: typing.Optional\[torch.FloatTensor\] = None token\_type\_ids: typing.Optional\[torch.LongTensor\] = None position\_ids: typing.Optional\[torch.LongTensor\] = None head\_mask: typing.Optional\[torch.FloatTensor\] = None inputs\_embeds: typing.Optional\[torch.FloatTensor\] = None output\_attentions: typing.Optional\[bool\] = None output\_hidden\_states: typing.Optional\[bool\] = None return\_dict: typing.Optional\[bool\] = None ) → `transformers.models.realm.modeling_realm.RealmEmbedderOutput` or `tuple(torch.FloatTensor)`

Parameters

- **input\_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.\_\_call\_\_()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details. [What are input IDs?](../glossary#input-ids)
- **attention\_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, _optional_) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask)
- **token\_type\_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, _optional_) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: 0 corresponds to a _sentence A_ token, 1 corresponds to a _sentence B_ token. [What are token type IDs?](../glossary#token-type-ids)
- **position\_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, _optional_) — Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`.
[What are position IDs?](../glossary#position-ids) - **head\_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, _optional_) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. - **inputs\_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, _optional_) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert _input\_ids_ indices into associated vectors than the model’s internal embedding lookup matrix. - **output\_attentions** (`bool`, _optional_) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. - **output\_hidden\_states** (`bool`, _optional_) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. - **return\_dict** (`bool`, _optional_) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple. Returns `transformers.models.realm.modeling_realm.RealmEmbedderOutput` or `tuple(torch.FloatTensor)` A `transformers.models.realm.modeling_realm.RealmEmbedderOutput` or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([RealmConfig](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmConfig)) and inputs. - **projected\_score** (`torch.FloatTensor` of shape `(batch_size, config.retriever_proj_size)`) — Projected score. - **hidden\_states** (`tuple(torch.FloatTensor)`, _optional_, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs. - **attentions** (`tuple(torch.FloatTensor)`, _optional_, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The [RealmEmbedder](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmEmbedder) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example: ``` >>> from transformers import AutoTokenizer, RealmEmbedder >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("google/realm-cc-news-pretrained-embedder") >>> model = RealmEmbedder.from_pretrained("google/realm-cc-news-pretrained-embedder") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> projected_score = outputs.projected_score ``` ## RealmScorer ### class transformers.RealmScorer [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/realm/modeling_realm.py#L1233) ( config query\_embedder = None ) Parameters - **config** ([RealmConfig](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. - **query\_embedder** ([RealmEmbedder](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmEmbedder)) — Embedder for input sequences. If not specified, it will use the same embedder as candidate sequences. The scorer of REALM outputting relevance scores representing the score of document candidates (before softmax). This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/realm/modeling_realm.py#L1249) ( input\_ids: typing.Optional\[torch.LongTensor\] = None attention\_mask: typing.Optional\[torch.FloatTensor\] = None token\_type\_ids: typing.Optional\[torch.LongTensor\] = None position\_ids: typing.Optional\[torch.LongTensor\] = None candidate\_input\_ids: typing.Optional\[torch.LongTensor\] = None candidate\_attention\_mask: typing.Optional\[torch.FloatTensor\] = None candidate\_token\_type\_ids: typing.Optional\[torch.LongTensor\] = None candidate\_inputs\_embeds: typing.Optional\[torch.FloatTensor\] = None head\_mask: typing.Optional\[torch.FloatTensor\] = None inputs\_embeds: typing.Optional\[torch.FloatTensor\] = None output\_attentions: typing.Optional\[bool\] = None output\_hidden\_states: typing.Optional\[bool\] = None return\_dict: typing.Optional\[bool\] = None ) → `transformers.models.realm.modeling_realm.RealmScorerOutput` or `tuple(torch.FloatTensor)` Parameters - **input\_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.**call**()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details. [What are input IDs?](../glossary#input-ids) - **attention\_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, _optional_) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. 
[What are attention masks?](../glossary#attention-mask) - **token\_type\_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, _optional_) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: - 0 corresponds to a _sentence A_ token, - 1 corresponds to a _sentence B_ token. [What are token type IDs?](../glossary#token-type-ids) - **position\_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, _optional_) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids) - **head\_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, _optional_) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. - **inputs\_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, _optional_) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert _input\_ids_ indices into associated vectors than the model’s internal embedding lookup matrix. - **output\_attentions** (`bool`, _optional_) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. - **output\_hidden\_states** (`bool`, _optional_) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. - **return\_dict** (`bool`, _optional_) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple. - **candidate\_input\_ids** (`torch.LongTensor` of shape `(batch_size, num_candidates, sequence_length)`) — Indices of candidate input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.**call**()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details. [What are input IDs?](../glossary#input-ids) - **candidate\_attention\_mask** (`torch.FloatTensor` of shape `(batch_size, num_candidates, sequence_length)`, _optional_) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - **candidate\_token\_type\_ids** (`torch.LongTensor` of shape `(batch_size, num_candidates, sequence_length)`, _optional_) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: - 0 corresponds to a _sentence A_ token, - 1 corresponds to a _sentence B_ token. [What are token type IDs?](../glossary#token-type-ids) - **candidate\_inputs\_embeds** (`torch.FloatTensor` of shape `(batch_size * num_candidates, sequence_length, hidden_size)`, _optional_) — Optionally, instead of passing `candidate_input_ids` you can choose to directly pass an embedded representation. 
This is useful if you want more control over how to convert _candidate\_input\_ids_ indices into associated vectors than the model’s internal embedding lookup matrix. Returns `transformers.models.realm.modeling_realm.RealmScorerOutput` or `tuple(torch.FloatTensor)` A `transformers.models.realm.modeling_realm.RealmScorerOutput` or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([RealmConfig](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmConfig)) and inputs. - **relevance\_score** (`torch.FloatTensor` of shape `(batch_size, config.num_candidates)`) — The relevance score of document candidates (before softmax). - **query\_score** (`torch.FloatTensor` of shape `(batch_size, config.retriever_proj_size)`) — Query score derived from the query embedder. - **candidate\_score** (`torch.FloatTensor` of shape `(batch_size, config.num_candidates, config.retriever_proj_size)`) — Candidate score derived from the embedder. The [RealmScorer](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmScorer) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: ``` >>> import torch >>> from transformers import AutoTokenizer, RealmScorer >>> tokenizer = AutoTokenizer.from_pretrained("google/realm-cc-news-pretrained-scorer") >>> model = RealmScorer.from_pretrained("google/realm-cc-news-pretrained-scorer", num_candidates=2) >>> >>> input_texts = ["How are you?", "What is the item in the picture?"] >>> candidates_texts = [["Hello world!", "Nice to meet you!"], ["A cute cat.", "An adorable dog."]] >>> inputs = tokenizer(input_texts, return_tensors="pt") >>> candidates_inputs = tokenizer.batch_encode_candidates(candidates_texts, max_length=10, return_tensors="pt") >>> outputs = model( ... **inputs, ... candidate_input_ids=candidates_inputs.input_ids, ... candidate_attention_mask=candidates_inputs.attention_mask, ... candidate_token_type_ids=candidates_inputs.token_type_ids, ... ) >>> relevance_score = outputs.relevance_score ``` ## RealmKnowledgeAugEncoder ### class transformers.RealmKnowledgeAugEncoder [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/realm/modeling_realm.py#L1381) ( config ) Parameters - **config** ([RealmConfig](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. The knowledge-augmented encoder of REALM outputting masked language model logits and marginal log-likelihood loss. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
#### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/realm/modeling_realm.py#L1402) ( input\_ids: typing.Optional\[torch.LongTensor\] = None attention\_mask: typing.Optional\[torch.FloatTensor\] = None token\_type\_ids: typing.Optional\[torch.LongTensor\] = None position\_ids: typing.Optional\[torch.LongTensor\] = None head\_mask: typing.Optional\[torch.FloatTensor\] = None inputs\_embeds: typing.Optional\[torch.FloatTensor\] = None relevance\_score: typing.Optional\[torch.FloatTensor\] = None labels: typing.Optional\[torch.LongTensor\] = None mlm\_mask: typing.Optional\[torch.LongTensor\] = None output\_attentions: typing.Optional\[bool\] = None output\_hidden\_states: typing.Optional\[bool\] = None return\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.MaskedLMOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.MaskedLMOutput) or `tuple(torch.FloatTensor)` Parameters - **input\_ids** (`torch.LongTensor` of shape `(batch_size, num_candidates, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.**call**()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details. [What are input IDs?](../glossary#input-ids) - **attention\_mask** (`torch.FloatTensor` of shape `(batch_size, num_candidates, sequence_length)`, _optional_) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - **token\_type\_ids** (`torch.LongTensor` of shape `(batch_size, num_candidates, sequence_length)`, _optional_) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: - 0 corresponds to a _sentence A_ token, - 1 corresponds to a _sentence B_ token. [What are token type IDs?](../glossary#token-type-ids) - **position\_ids** (`torch.LongTensor` of shape `(batch_size, num_candidates, sequence_length)`, _optional_) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids) - **head\_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, _optional_) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. - **inputs\_embeds** (`torch.FloatTensor` of shape `(batch_size, num_candidates, sequence_length, hidden_size)`, _optional_) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert _input\_ids_ indices into associated vectors than the model’s internal embedding lookup matrix. - **output\_attentions** (`bool`, _optional_) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. - **output\_hidden\_states** (`bool`, _optional_) — Whether or not to return the hidden states of all layers. 
See `hidden_states` under returned tensors for more detail. - **return\_dict** (`bool`, _optional_) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple. - **relevance\_score** (`torch.FloatTensor` of shape `(batch_size, num_candidates)`, _optional_) — Relevance score derived from RealmScorer, must be specified if you want to compute the masked language modeling loss. - **labels** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, _optional_) — Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ..., config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are ignored (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]` - **mlm\_mask** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, _optional_) — Mask to avoid calculating joint loss on certain positions. If not specified, the loss will not be masked. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. A [transformers.modeling\_outputs.MaskedLMOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.MaskedLMOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([RealmConfig](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmConfig)) and inputs. - **loss** (`torch.FloatTensor` of shape `(1,)`, _optional_, returned when `labels` is provided) — Masked language modeling (MLM) loss. - **logits** (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). - **hidden\_states** (`tuple(torch.FloatTensor)`, _optional_, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. - **attentions** (`tuple(torch.FloatTensor)`, _optional_, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The [RealmKnowledgeAugEncoder](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmKnowledgeAugEncoder) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: ``` >>> import torch >>> from transformers import AutoTokenizer, RealmKnowledgeAugEncoder >>> tokenizer = AutoTokenizer.from_pretrained("google/realm-cc-news-pretrained-encoder") >>> model = RealmKnowledgeAugEncoder.from_pretrained( ... "google/realm-cc-news-pretrained-encoder", num_candidates=2 ... 
) >>> >>> text = [["Hello world!", "Nice to meet you!"], ["The cute cat.", "The adorable dog."]] >>> inputs = tokenizer.batch_encode_candidates(text, max_length=10, return_tensors="pt") >>> outputs = model(**inputs) >>> logits = outputs.logits ``` ## RealmReader ### class transformers.RealmReader [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/realm/modeling_realm.py#L1531) ( config ) Parameters - **config** ([RealmConfig](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. The reader of REALM. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/realm/modeling_realm.py#L1542) ( input\_ids: typing.Optional\[torch.LongTensor\] = None attention\_mask: typing.Optional\[torch.FloatTensor\] = None token\_type\_ids: typing.Optional\[torch.LongTensor\] = None position\_ids: typing.Optional\[torch.LongTensor\] = None head\_mask: typing.Optional\[torch.FloatTensor\] = None inputs\_embeds: typing.Optional\[torch.FloatTensor\] = None relevance\_score: typing.Optional\[torch.FloatTensor\] = None block\_mask: typing.Optional\[torch.BoolTensor\] = None start\_positions: typing.Optional\[torch.LongTensor\] = None end\_positions: typing.Optional\[torch.LongTensor\] = None has\_answers: typing.Optional\[torch.BoolTensor\] = None output\_attentions: typing.Optional\[bool\] = None output\_hidden\_states: typing.Optional\[bool\] = None return\_dict: typing.Optional\[bool\] = None ) → `transformers.models.realm.modeling_realm.RealmReaderOutput` or `tuple(torch.FloatTensor)` Parameters - **input\_ids** (`torch.LongTensor` of shape `(reader_beam_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.**call**()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details. [What are input IDs?](../glossary#input-ids) - **attention\_mask** (`torch.FloatTensor` of shape `(reader_beam_size, sequence_length)`, _optional_) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - **token\_type\_ids** (`torch.LongTensor` of shape `(reader_beam_size, sequence_length)`, _optional_) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: - 0 corresponds to a _sentence A_ token, - 1 corresponds to a _sentence B_ token. 
[What are token type IDs?](../glossary#token-type-ids) - **position\_ids** (`torch.LongTensor` of shape `(reader_beam_size, sequence_length)`, _optional_) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids) - **head\_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, _optional_) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. - **inputs\_embeds** (`torch.FloatTensor` of shape `(reader_beam_size, sequence_length, hidden_size)`, _optional_) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert _input\_ids_ indices into associated vectors than the model’s internal embedding lookup matrix. - **output\_attentions** (`bool`, _optional_) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. - **output\_hidden\_states** (`bool`, _optional_) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. - **return\_dict** (`bool`, _optional_) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple. - **relevance\_score** (`torch.FloatTensor` of shape `(searcher_beam_size,)`, _optional_) — Relevance score, which must be specified if you want to compute the logits and marginal log loss. - **block\_mask** (`torch.BoolTensor` of shape `(searcher_beam_size, sequence_length)`, _optional_) — The mask of the evidence block, which must be specified if you want to compute the logits and marginal log loss. - **start\_positions** (`torch.LongTensor` of shape `(searcher_beam_size,)`, _optional_) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (`sequence_length`). Position outside of the sequence are not taken into account for computing the loss. - **end\_positions** (`torch.LongTensor` of shape `(searcher_beam_size,)`, _optional_) — Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (`sequence_length`). Position outside of the sequence are not taken into account for computing the loss. - **has\_answers** (`torch.BoolTensor` of shape `(searcher_beam_size,)`, _optional_) — Whether or not the evidence block has answer(s). Returns `transformers.models.realm.modeling_realm.RealmReaderOutput` or `tuple(torch.FloatTensor)` A `transformers.models.realm.modeling_realm.RealmReaderOutput` or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([RealmConfig](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmConfig)) and inputs. - **loss** (`torch.FloatTensor` of shape `(1,)`, _optional_, returned when `start_positions`, `end_positions`, `has_answers` are provided) — Total loss. - **retriever\_loss** (`torch.FloatTensor` of shape `(1,)`, _optional_, returned when `start_positions`, `end_positions`, `has_answers` are provided) — Retriever loss. 
- **reader\_loss** (`torch.FloatTensor` of shape `(1,)`, _optional_, returned when `start_positions`, `end_positions`, `has_answers` are provided) — Reader loss. - **retriever\_correct** (`torch.BoolTensor` of shape `(config.searcher_beam_size,)`, _optional_) — Whether or not an evidence block contains answer. - **reader\_correct** (`torch.BoolTensor` of shape `(config.reader_beam_size, num_candidates)`, _optional_) — Whether or not a span candidate contains answer. - **block\_idx** (`torch.LongTensor` of shape `()`) — The index of the retrieved evidence block in which the predicted answer is most likely. - **candidate** (`torch.LongTensor` of shape `()`) — The index of the retrieved span candidates in which the predicted answer is most likely. - **start\_pos** (`torch.IntTensor` of shape `()`) — Predicted answer starting position in _RealmReader_’s inputs. - **end\_pos** (`torch.IntTensor` of shape `()`) — Predicted answer ending position in _RealmReader_’s inputs. - **hidden\_states** (`tuple(torch.FloatTensor)`, _optional_, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs. - **attentions** (`tuple(torch.FloatTensor)`, _optional_, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The [RealmReader](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmReader) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. ## RealmForOpenQA ### class transformers.RealmForOpenQA [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/realm/modeling_realm.py#L1735) ( config retriever = None ) Parameters - **config** ([RealmConfig](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. `RealmForOpenQA` for end-to-end open domain question answering. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. #### block\_embedding\_to [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/realm/modeling_realm.py#L1758) ( device ) Parameters - **device** (`str` or `torch.device`) — The device to which `self.block_emb` will be sent. Send `self.block_emb` to a specific device. 
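
This is useful when the evidence-block embedding buffer (`self.block_emb`) is too large to keep on the accelerator alongside the rest of the model. A minimal sketch, re-using the `google/realm-orqa-nq-openqa` checkpoint from the example below:

```
>>> from transformers import RealmForOpenQA, RealmRetriever

>>> retriever = RealmRetriever.from_pretrained("google/realm-orqa-nq-openqa")
>>> model = RealmForOpenQA.from_pretrained("google/realm-orqa-nq-openqa", retriever=retriever)

>>> # Keep the large evidence-block embedding buffer on the CPU, while the rest of
>>> # the model can be moved to an accelerator as usual with `model.to(...)`
>>> model.block_embedding_to("cpu")
```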
#### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/realm/modeling_realm.py#L1768) ( input\_ids: typing.Optional\[torch.LongTensor\] attention\_mask: typing.Optional\[torch.FloatTensor\] = None token\_type\_ids: typing.Optional\[torch.LongTensor\] = None answer\_ids: typing.Optional\[torch.LongTensor\] = None return\_dict: typing.Optional\[bool\] = None ) → `transformers.models.realm.modeling_realm.RealmForOpenQAOutput` or `tuple(torch.FloatTensor)` Parameters - **input\_ids** (`torch.LongTensor` of shape `(1, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.**call**()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details. [What are input IDs?](../glossary#input-ids) - **attention\_mask** (`torch.FloatTensor` of shape `(1, sequence_length)`, _optional_) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - **token\_type\_ids** (`torch.LongTensor` of shape `(1, sequence_length)`, _optional_) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: - 0 corresponds to a _sentence A_ token, - 1 corresponds to a _sentence B_ token (should not be used in this model by design). [What are token type IDs?](../glossary#token-type-ids) - **answer\_ids** (`list` of shape `(num_answers, answer_length)`, _optional_) — Answer ids for computing the marginal log-likelihood loss. Indices should be in `[-1, 0, ..., config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-1` are ignored (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]` - **return\_dict** (`bool`, _optional_) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple. Returns `transformers.models.realm.modeling_realm.RealmForOpenQAOutput` or `tuple(torch.FloatTensor)` A `transformers.models.realm.modeling_realm.RealmForOpenQAOutput` or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([RealmConfig](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmConfig)) and inputs. - **reader\_output** (`dict`) — Reader output. - **predicted\_answer\_ids** (`torch.LongTensor` of shape `(answer_sequence_length)`) — Predicted answer ids. The [RealmForOpenQA](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmForOpenQA) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example: ``` >>> import torch >>> from transformers import RealmForOpenQA, RealmRetriever, AutoTokenizer >>> retriever = RealmRetriever.from_pretrained("google/realm-orqa-nq-openqa") >>> tokenizer = AutoTokenizer.from_pretrained("google/realm-orqa-nq-openqa") >>> model = RealmForOpenQA.from_pretrained("google/realm-orqa-nq-openqa", retriever=retriever) >>> question = "Who is the pioneer in modern computer science?" >>> question_ids = tokenizer([question], return_tensors="pt") >>> answer_ids = tokenizer( ... ["alan mathison turing"], ... add_special_tokens=False, ... return_token_type_ids=False, ... return_attention_mask=False, ... ).input_ids >>> reader_output, predicted_answer_ids = model(**question_ids, answer_ids=answer_ids, return_dict=False) >>> predicted_answer = tokenizer.decode(predicted_answer_ids) >>> loss = reader_output.loss ```
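
When `return_dict` is left at its default (enabled for most configurations), the same call returns a `RealmForOpenQAOutput`, and the fields documented above are available as attributes. A short sketch continuing the example above:

```
>>> outputs = model(**question_ids, answer_ids=answer_ids)

>>> # Decode the predicted answer ids back to text
>>> predicted_answer = tokenizer.decode(outputs.predicted_answer_ids)
```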
classification&quot;,&quot;id&quot;:&quot;tasks/audio_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/audio_classification&quot;},{&quot;title&quot;:&quot;Automatic speech recognition&quot;,&quot;id&quot;:&quot;tasks/asr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/asr&quot;}]},{&quot;title&quot;:&quot;Computer Vision&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Image classification&quot;,&quot;id&quot;:&quot;tasks/image_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/image_classification&quot;},{&quot;title&quot;:&quot;Semantic segmentation&quot;,&quot;id&quot;:&quot;tasks/semantic_segmentation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/semantic_segmentation&quot;},{&quot;title&quot;:&quot;Video classification&quot;,&quot;id&quot;:&quot;tasks/video_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/video_classification&quot;},{&quot;title&quot;:&quot;Object detection&quot;,&quot;id&quot;:&quot;tasks/object_detection&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/object_detection&quot;},{&quot;title&quot;:&quot;Zero-shot object detection&quot;,&quot;id&quot;:&quot;tasks/zero_shot_object_detection&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/zero_shot_object_detection&quot;},{&quot;title&quot;:&quot;Zero-shot image classification&quot;,&quot;id&quot;:&quot;tasks/zero_shot_image_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/zero_shot_image_classification&quot;},{&quot;title&quot;:&quot;Depth estimation&quot;,&quot;id&quot;:&quot;tasks/monocular_depth_estimation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/monocular_depth_estimation&quot;}]},{&quot;title&quot;:&quot;Multimodal&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Image captioning&quot;,&quot;id&quot;:&quot;tasks/image_captioning&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/image_captioning&quot;},{&quot;title&quot;:&quot;Document Question Answering&quot;,&quot;id&quot;:&quot;tasks/document_question_answering&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/document_question_answering&quot;},{&quot;title&quot;:&quot;Visual Question Answering&quot;,&quot;id&quot;:&quot;tasks/visual_question_answering&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/visual_question_answering&quot;},{&quot;title&quot;:&quot;Text to speech&quot;,&quot;id&quot;:&quot;tasks/text-to-speech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/text-to-speech&quot;}]},{&quot;title&quot;:&quot;Generation&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Customize the generation strategy&quot;,&quot;id&quot;:&quot;generation_strategies&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/generation_strategies&quot;}]},{&quot;title&quot;:&quot;Prompting&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Image tasks with IDEFICS&quot;,&quot;id&quot;:&quot;tasks/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/idefics&quot;}]}]},{&quot;title&quot;:&quot;Developer guides&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Use fast tokenizers from 🤗 
Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;fast_tokenizers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/fast_tokenizers&quot;},{&quot;title&quot;:&quot;Run inference with multilingual models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;multilingual&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/multilingual&quot;},{&quot;title&quot;:&quot;Use model-specific APIs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;create_a_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/create_a_model&quot;},{&quot;title&quot;:&quot;Share a custom model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;custom_models&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/custom_models&quot;},{&quot;title&quot;:&quot;Templates for chat models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;chat_templating&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/chat_templating&quot;},{&quot;title&quot;:&quot;Run training on Amazon SageMaker&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;sagemaker&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/sagemaker&quot;},{&quot;title&quot;:&quot;Export to ONNX&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;serialization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/serialization&quot;},{&quot;title&quot;:&quot;Export to TFLite&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tflite&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tflite&quot;},{&quot;title&quot;:&quot;Export to TorchScript&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;torchscript&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/torchscript&quot;},{&quot;title&quot;:&quot;Benchmarks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;benchmarks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/benchmarks&quot;},{&quot;title&quot;:&quot;Notebooks with examples&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;notebooks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/notebooks&quot;},{&quot;title&quot;:&quot;Community resources&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;community&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/community&quot;},{&quot;title&quot;:&quot;Custom Tools and Prompts&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;custom_tools&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/custom_tools&quot;},{&quot;title&quot;:&quot;Troubleshoot&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;troubleshooting&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/troubleshooting&quot;}]},{&quot;title&quot;:&quot;Performance and scalability&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Overview&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;performance&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/performance&quot;},{&quot;title&quot;:&quot;Efficient training techniques&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Methods and tools for efficient training on a single GPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_gpu_one&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_gpu_one&quot;},{&quot;title&quot;:&quot;Multiple GPUs and parallelism&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_gpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_gpu_many&quot;},{&quot;title&quot;:&quot;Efficient 
training on CPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_cpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_cpu&quot;},{&quot;title&quot;:&quot;Distributed CPU training&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_cpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_cpu_many&quot;},{&quot;title&quot;:&quot;Training on TPUs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_tpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_tpu&quot;},{&quot;title&quot;:&quot;Training on TPU with TensorFlow&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_tpu_tf&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_tpu_tf&quot;},{&quot;title&quot;:&quot;Training on Specialized Hardware&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_special&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_special&quot;},{&quot;title&quot;:&quot;Custom hardware for training&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_hardware&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_hardware&quot;},{&quot;title&quot;:&quot;Hyperparameter Search using Trainer API&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;hpo_train&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/hpo_train&quot;}]},{&quot;title&quot;:&quot;Optimizing inference&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Inference on CPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_cpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_cpu&quot;},{&quot;title&quot;:&quot;Inference on one GPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_gpu_one&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_gpu_one&quot;},{&quot;title&quot;:&quot;Inference on many GPUs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_gpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_gpu_many&quot;},{&quot;title&quot;:&quot;Inference on Specialized Hardware&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_special&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_special&quot;}]},{&quot;title&quot;:&quot;Instantiating a big model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;big_models&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/big_models&quot;},{&quot;title&quot;:&quot;Troubleshooting&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;debugging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/debugging&quot;},{&quot;title&quot;:&quot;XLA Integration for TensorFlow Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tf_xla&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tf_xla&quot;},{&quot;title&quot;:&quot;Optimize inference using `torch.compile()`&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_torch_compile&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_torch_compile&quot;}]},{&quot;title&quot;:&quot;Contribute&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;How to contribute to transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;contributing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/contributing&quot;},{&quot;title&quot;:&quot;How to add a model to 🤗 
Transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_new_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_new_model&quot;},{&quot;title&quot;:&quot;How to convert a 🤗 Transformers model to TensorFlow?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_tensorflow_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_tensorflow_model&quot;},{&quot;title&quot;:&quot;How to add a pipeline to 🤗 Transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_new_pipeline&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_new_pipeline&quot;},{&quot;title&quot;:&quot;Testing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;testing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/testing&quot;},{&quot;title&quot;:&quot;Checks on a Pull Request&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pr_checks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pr_checks&quot;}]},{&quot;title&quot;:&quot;Conceptual guides&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Philosophy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;philosophy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/philosophy&quot;},{&quot;title&quot;:&quot;Glossary&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;glossary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/glossary&quot;},{&quot;title&quot;:&quot;What 🤗 Transformers can do&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;task_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/task_summary&quot;},{&quot;title&quot;:&quot;How 🤗 Transformers solve tasks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tasks_explained&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks_explained&quot;},{&quot;title&quot;:&quot;The Transformer model family&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_summary&quot;},{&quot;title&quot;:&quot;Summary of the tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tokenizer_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tokenizer_summary&quot;},{&quot;title&quot;:&quot;Attention mechanisms&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;attention&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/attention&quot;},{&quot;title&quot;:&quot;Padding and truncation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pad_truncation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pad_truncation&quot;},{&quot;title&quot;:&quot;BERTology&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;bertology&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/bertology&quot;},{&quot;title&quot;:&quot;Perplexity of fixed-length models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perplexity&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perplexity&quot;},{&quot;title&quot;:&quot;Pipelines for webserver inference&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pipeline_webserver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pipeline_webserver&quot;},{&quot;title&quot;:&quot;Model training anatomy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_memory_anatomy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_memory_anatomy&quot;}]},{&quot;title&quot;:&quot;API&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Main 
Classes&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Agents and Tools&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/agent&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/agent&quot;},{&quot;title&quot;:&quot;Auto Classes&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/auto&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/auto&quot;},{&quot;title&quot;:&quot;Callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/callback&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/callback&quot;},{&quot;title&quot;:&quot;Configuration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/configuration&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/configuration&quot;},{&quot;title&quot;:&quot;Data Collator&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/data_collator&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/data_collator&quot;},{&quot;title&quot;:&quot;Keras callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/keras_callbacks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/keras_callbacks&quot;},{&quot;title&quot;:&quot;Logging&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/logging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/logging&quot;},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/model&quot;},{&quot;title&quot;:&quot;Text Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/text_generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/text_generation&quot;},{&quot;title&quot;:&quot;ONNX&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/onnx&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/onnx&quot;},{&quot;title&quot;:&quot;Optimization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/optimizer_schedules&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/optimizer_schedules&quot;},{&quot;title&quot;:&quot;Model outputs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/output&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/output&quot;},{&quot;title&quot;:&quot;Pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/pipelines&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/pipelines&quot;},{&quot;title&quot;:&quot;Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/processors&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/processors&quot;},{&quot;title&quot;:&quot;Quantization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/quantization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/quantization&quot;},{&quot;title&quot;:&quot;Tokenizer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/tokenizer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/tokenizer&quot;},{&quot;title&quot;:&quot;Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/trainer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/trainer&quot;},{&quot;title&quot;:&quot;DeepSpeed 
Integration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/deepspeed&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/deepspeed&quot;},{&quot;title&quot;:&quot;Feature Extractor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/feature_extractor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/feature_extractor&quot;},{&quot;title&quot;:&quot;Image Processor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/image_processor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/image_processor&quot;}]},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Text models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALBERT&quot;,&quot;id&quot;:&quot;model_doc/albert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/albert&quot;},{&quot;title&quot;:&quot;BART&quot;,&quot;id&quot;:&quot;model_doc/bart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bart&quot;},{&quot;title&quot;:&quot;BARThez&quot;,&quot;id&quot;:&quot;model_doc/barthez&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/barthez&quot;},{&quot;title&quot;:&quot;BARTpho&quot;,&quot;id&quot;:&quot;model_doc/bartpho&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bartpho&quot;},{&quot;title&quot;:&quot;BERT&quot;,&quot;id&quot;:&quot;model_doc/bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert&quot;},{&quot;title&quot;:&quot;BertGeneration&quot;,&quot;id&quot;:&quot;model_doc/bert-generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-generation&quot;},{&quot;title&quot;:&quot;BertJapanese&quot;,&quot;id&quot;:&quot;model_doc/bert-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-japanese&quot;},{&quot;title&quot;:&quot;Bertweet&quot;,&quot;id&quot;:&quot;model_doc/bertweet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bertweet&quot;},{&quot;title&quot;:&quot;BigBird&quot;,&quot;id&quot;:&quot;model_doc/big_bird&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/big_bird&quot;},{&quot;title&quot;:&quot;BigBirdPegasus&quot;,&quot;id&quot;:&quot;model_doc/bigbird_pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bigbird_pegasus&quot;},{&quot;title&quot;:&quot;BioGpt&quot;,&quot;id&quot;:&quot;model_doc/biogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/biogpt&quot;},{&quot;title&quot;:&quot;Blenderbot&quot;,&quot;id&quot;:&quot;model_doc/blenderbot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot&quot;},{&quot;title&quot;:&quot;Blenderbot 
Small&quot;,&quot;id&quot;:&quot;model_doc/blenderbot-small&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot-small&quot;},{&quot;title&quot;:&quot;BLOOM&quot;,&quot;id&quot;:&quot;model_doc/bloom&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bloom&quot;},{&quot;title&quot;:&quot;BORT&quot;,&quot;id&quot;:&quot;model_doc/bort&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bort&quot;},{&quot;title&quot;:&quot;ByT5&quot;,&quot;id&quot;:&quot;model_doc/byt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/byt5&quot;},{&quot;title&quot;:&quot;CamemBERT&quot;,&quot;id&quot;:&quot;model_doc/camembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/camembert&quot;},{&quot;title&quot;:&quot;CANINE&quot;,&quot;id&quot;:&quot;model_doc/canine&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/canine&quot;},{&quot;title&quot;:&quot;CodeGen&quot;,&quot;id&quot;:&quot;model_doc/codegen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/codegen&quot;},{&quot;title&quot;:&quot;CodeLlama&quot;,&quot;id&quot;:&quot;model_doc/code_llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/code_llama&quot;},{&quot;title&quot;:&quot;ConvBERT&quot;,&quot;id&quot;:&quot;model_doc/convbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convbert&quot;},{&quot;title&quot;:&quot;CPM&quot;,&quot;id&quot;:&quot;model_doc/cpm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpm&quot;},{&quot;title&quot;:&quot;CPMANT&quot;,&quot;id&quot;:&quot;model_doc/cpmant&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpmant&quot;},{&quot;title&quot;:&quot;CTRL&quot;,&quot;id&quot;:&quot;model_doc/ctrl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ctrl&quot;},{&quot;title&quot;:&quot;DeBERTa&quot;,&quot;id&quot;:&quot;model_doc/deberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta&quot;},{&quot;title&quot;:&quot;DeBERTa-v2&quot;,&quot;id&quot;:&quot;model_doc/deberta-v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta-v2&quot;},{&quot;title&quot;:&quot;DialoGPT&quot;,&quot;id&quot;:&quot;model_doc/dialogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dialogpt&quot;},{&quot;title&quot;:&quot;DistilBERT&quot;,&quot;id&quot;:&quot;model_doc/distilbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/distilbert&quot;},{&quot;title&quot;:&quot;DPR&quot;,&quot;id&quot;:&quot;model_doc/dpr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpr&quot;},{&quot;title&quot;:&quot;ELECTRA&quot;,&quot;id&quot;:&quot;model_doc/electra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/electra&quot;},{&quot;title&quot;:&quot;Encoder Decoder 
Models&quot;,&quot;id&quot;:&quot;model_doc/encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encoder-decoder&quot;},{&quot;title&quot;:&quot;ERNIE&quot;,&quot;id&quot;:&quot;model_doc/ernie&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie&quot;},{&quot;title&quot;:&quot;ErnieM&quot;,&quot;id&quot;:&quot;model_doc/ernie_m&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie_m&quot;},{&quot;title&quot;:&quot;ESM&quot;,&quot;id&quot;:&quot;model_doc/esm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/esm&quot;},{&quot;title&quot;:&quot;Falcon&quot;,&quot;id&quot;:&quot;model_doc/falcon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/falcon&quot;},{&quot;title&quot;:&quot;FLAN-T5&quot;,&quot;id&quot;:&quot;model_doc/flan-t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-t5&quot;},{&quot;title&quot;:&quot;FLAN-UL2&quot;,&quot;id&quot;:&quot;model_doc/flan-ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-ul2&quot;},{&quot;title&quot;:&quot;FlauBERT&quot;,&quot;id&quot;:&quot;model_doc/flaubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flaubert&quot;},{&quot;title&quot;:&quot;FNet&quot;,&quot;id&quot;:&quot;model_doc/fnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fnet&quot;},{&quot;title&quot;:&quot;FSMT&quot;,&quot;id&quot;:&quot;model_doc/fsmt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fsmt&quot;},{&quot;title&quot;:&quot;Funnel Transformer&quot;,&quot;id&quot;:&quot;model_doc/funnel&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/funnel&quot;},{&quot;title&quot;:&quot;GPT&quot;,&quot;id&quot;:&quot;model_doc/openai-gpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/openai-gpt&quot;},{&quot;title&quot;:&quot;GPT Neo&quot;,&quot;id&quot;:&quot;model_doc/gpt_neo&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neo&quot;},{&quot;title&quot;:&quot;GPT NeoX&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox&quot;},{&quot;title&quot;:&quot;GPT NeoX Japanese&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox_japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese&quot;},{&quot;title&quot;:&quot;GPT-J&quot;,&quot;id&quot;:&quot;model_doc/gptj&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptj&quot;},{&quot;title&quot;:&quot;GPT2&quot;,&quot;id&quot;:&quot;model_doc/gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt2&quot;},{&quot;title&quot;:&quot;GPTBigCode&quot;,&quot;id&quot;:&quot;model_doc/gpt_bigcode&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode&quot;},{&quot;title&quot;:&quot;GPTSAN 
Japanese&quot;,&quot;id&quot;:&quot;model_doc/gptsan-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese&quot;},{&quot;title&quot;:&quot;GPTSw3&quot;,&quot;id&quot;:&quot;model_doc/gpt-sw3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt-sw3&quot;},{&quot;title&quot;:&quot;HerBERT&quot;,&quot;id&quot;:&quot;model_doc/herbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/herbert&quot;},{&quot;title&quot;:&quot;I-BERT&quot;,&quot;id&quot;:&quot;model_doc/ibert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ibert&quot;},{&quot;title&quot;:&quot;Jukebox&quot;,&quot;id&quot;:&quot;model_doc/jukebox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/jukebox&quot;},{&quot;title&quot;:&quot;LED&quot;,&quot;id&quot;:&quot;model_doc/led&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/led&quot;},{&quot;title&quot;:&quot;LLaMA&quot;,&quot;id&quot;:&quot;model_doc/llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama&quot;},{&quot;title&quot;:&quot;Llama2&quot;,&quot;id&quot;:&quot;model_doc/llama2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama2&quot;},{&quot;title&quot;:&quot;Longformer&quot;,&quot;id&quot;:&quot;model_doc/longformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longformer&quot;},{&quot;title&quot;:&quot;LongT5&quot;,&quot;id&quot;:&quot;model_doc/longt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longt5&quot;},{&quot;title&quot;:&quot;LUKE&quot;,&quot;id&quot;:&quot;model_doc/luke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/luke&quot;},{&quot;title&quot;:&quot;M2M100&quot;,&quot;id&quot;:&quot;model_doc/m2m_100&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/m2m_100&quot;},{&quot;title&quot;:&quot;MarianMT&quot;,&quot;id&quot;:&quot;model_doc/marian&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/marian&quot;},{&quot;title&quot;:&quot;MarkupLM&quot;,&quot;id&quot;:&quot;model_doc/markuplm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/markuplm&quot;},{&quot;title&quot;:&quot;MBart and 
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot;PLBart&quot;,&quot;id&quot;
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/
href="/docs/transformers/v4.34.0/en/model_doc/mbart">MBart and MBart-50 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mega">MEGA </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/megatron-bert">MegatronBERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2">MegatronGPT2 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mistral">Mistral </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mluke">mLUKE </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mobilebert">MobileBERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mpnet">MPNet </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mpt">MPT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mra">MRA </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mt5">MT5 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mvp">MVP </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/nezha">NEZHA </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/nllb">NLLB </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/nllb-moe">NLLB-MoE </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/nystromformer">Nyströmformer </a><a data-sveltekit-reload="" 
class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/open-llama">Open-Llama </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/opt">OPT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/pegasus">Pegasus </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/pegasus_x">PEGASUS-X </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/persimmon">Persimmon </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/phobert">PhoBERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/plbart">PLBart </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/prophetnet">ProphetNet </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/qdqbert">QDQBert </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/rag">RAG </a><a data-sveltekit-reload="" class="rounded-xl bg-gradient-to-br from-black to-gray-900 py-1 pr-2 pl-2 text-white first:mt-1 last:mb-4 dark:from-gray-800 dark:to-gray-900 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/realm">REALM </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/reformer">Reformer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/rembert">RemBERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/retribert">RetriBERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/roberta">RoBERTa </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 
hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm">RoBERTa-PreLayerNorm </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/roc_bert">RoCBert </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/roformer">RoFormer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/rwkv">RWKV </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/splinter">Splinter </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/squeezebert">SqueezeBERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/switch_transformers">SwitchTransformers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/t5">T5 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/t5v1.1">T5v1.1 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/tapex">TAPEX </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/transfo-xl">Transformer XL </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/ul2">UL2 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/umt5">UMT5 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xmod">X-MOD </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xglm">XGLM </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 
ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm">XLM </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet">XLM-ProphetNet </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta">XLM-RoBERTa </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl">XLM-RoBERTa-XL </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm-v">XLM-V </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlnet">XLNet </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/yoso">YOSO </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Vision models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Reinforcement learning models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Time series models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 
after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Graph models</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Internal Helpers</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/modeling_utils">Custom Layers and Utilities </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/pipelines_utils">Utilities for pipelines </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/tokenization_utils">Utilities for Tokenizers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/trainer_utils">Utilities for Trainer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/generation_utils">Utilities for Generation </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/image_processing_utils">Utilities for Image Processors </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/audio_utils">Utilities for Audio processing </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/file_utils">General Utilities </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/time_series_utils">Utilities for Time Series </a> </div> </div></nav></div></div></div> <div class="z-1 min-w-0 flex-1"> <div class="px-6 pt-6 md:px-12 md:pt-16 md:pb-16"><div class="max-w-4xl mx-auto mb-10"><div class="relative overflow-hidden rounded-xl bg-gradient-to-br from-orange-300/10 py-5 px-4 ring-1 ring-orange-100/70 md:px-6 md:py-8"><img alt="Hugging Face's logo" class="absolute -right-6 -bottom-6 w-28 -rotate-45 md:hidden" src="/front/assets/huggingface_logo-noborder.svg"> <div class="mb-2 text-2xl font-bold dark:text-gray-200 md:mb-0">Join the Hugging Face community</div> <p class="mb-4 text-lg text-gray-400 dark:text-gray-300 md:mb-8">and get access to the augmented documentation 
experience </p> <div class="mb-8 hidden space-y-4 md:block xl:flex xl:space-y-0 xl:space-x-6"><div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-indigo-100 to-indigo-100/20 dark:to-indigo-100"><svg class="text-indigo-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Collaborate on models, datasets and Spaces </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-orange-100 to-orange-100/20 dark:to-orange-50"><svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" class="text-xl text-yellow-400" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M11 15H6l7-14v8h5l-7 14v-8z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Faster examples with accelerated inference </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-gray-500/10 to-gray-500/5"><svg class="text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M14.9804 3C14.9217 3.0002 14.8631 3.00555 14.8054 3.016C11.622 3.58252 8.76073 5.30669 6.77248 7.85653C4.78422 10.4064 3.80955 13.6016 4.03612 16.8271C4.26268 20.0525 5.67447 23.0801 7.99967 25.327C10.3249 27.5738 13.3991 28.8811 16.6304 28.997C16.7944 29.003 16.9584 28.997 17.1204 28.997C19.2193 28.9984 21.2877 28.4943 23.1507 27.5274C25.0137 26.5605 26.6164 25.1592 27.8234 23.442C27.9212 23.294 27.9783 23.1229 27.9889 22.9458C27.9995 22.7687 27.9633 22.592 27.884 22.4333C27.8046 22.2747 27.6848 22.1397 27.5367 22.0421C27.3887 21.9444 27.2175 21.8875 27.0404 21.877C25.0426 21.7017 23.112 21.0693 21.3976 20.0288C19.6832 18.9884 18.231 17.5676 17.1533 15.8764C16.0756 14.1852 15.4011 12.2688 15.1822 10.2754C14.9632 8.28193 15.2055 6.26484 15.8904 4.38C15.9486 4.22913 15.97 4.06652 15.9527 3.90572C15.9354 3.74492 15.8799 3.59059 15.7909 3.45557C15.7019 3.32055 15.5819 3.20877 15.4409 3.12952C15.2999 3.05028 15.142 3.00587 14.9804 3Z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Switch between documentation themes </div></div></div> <div class="flex items-center space-x-2.5"><a href="/join"><button class="rounded-lg bg-white 
bg-gradient-to-br from-gray-100/20 to-gray-200/60 py-1.5 px-5 font-semibold text-gray-700 shadow-sm ring-1 ring-gray-300/60 hover:to-gray-100/70 hover:ring-gray-300/30 active:shadow-inner">Sign Up</button></a> <p class="text-gray-500 dark:text-gray-300">to get started</p></div></div></div> <div class="prose-doc prose relative mx-auto max-w-4xl break-words"><!-- HTML_TAG_START --> <link href="/docs/transformers/v4.34.0/en/_app/immutable/assets/0.e3b0c442.css" rel="modulepreload"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/entry/start.c2db227a.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/scheduler.9bc65507.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/singletons.e3057404.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/index.3b203c72.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/paths.e7de6301.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/entry/app.879d9b87.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/index.78c82d43.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/nodes/0.242aaaff.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/each.e59479a4.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/nodes/214.d2c9e18c.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/Tip.87d55b76.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/Docstring.4e7352e2.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/globals.7f7f1b26.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/IconCopyLink.bedaa44d.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/CodeBlock.73e038be.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/ExampleCodeBlock.872b014d.js"><!-- HEAD_svelte-1phssyn_START --><meta name="hf:doc:metadata" content="{&quot;local&quot;:&quot;realm&quot;,&quot;sections&quot;:[{&quot;local&quot;:&quot;overview&quot;,&quot;title&quot;:&quot;Overview&quot;},{&quot;local&quot;:&quot;transformers.RealmConfig&quot;,&quot;title&quot;:&quot;RealmConfig&quot;},{&quot;local&quot;:&quot;transformers.RealmTokenizer&quot;,&quot;title&quot;:&quot;RealmTokenizer&quot;},{&quot;local&quot;:&quot;transformers.RealmTokenizerFast&quot;,&quot;title&quot;:&quot;RealmTokenizerFast&quot;},{&quot;local&quot;:&quot;transformers.RealmRetriever&quot;,&quot;title&quot;:&quot;RealmRetriever&quot;},{&quot;local&quot;:&quot;transformers.RealmEmbedder&quot;,&quot;title&quot;:&quot;RealmEmbedder&quot;},{&quot;local&quot;:&quot;transformers.RealmScorer&quot;,&quot;title&quot;:&quot;RealmScorer&quot;},{&quot;local&quot;:&quot;transformers.RealmKnowledgeAugEncoder&quot;,&quot;title&quot;:&quot;RealmKnowledgeAugEncoder&quot;},{&quot;local&quot;:&quot;transformers.RealmReader&quot;,&quot;title&quot;:&quot;RealmReader&quot;},{&quot;local&quot;:&quot;transformers.RealmForOpenQA&quot;,&quot;title&quot;:&quot;RealmForOpenQA&quot;}],&quot;title&quot;:&quot;REALM&quot;}"><!-- HEAD_svelte-1phssyn_END --> <p></p> <h1 class="relative group"><a id="realm" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 
# REALM

## Overview

The REALM model was proposed in [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang. It is a retrieval-augmented language model that first retrieves documents from a textual knowledge corpus and then uses the retrieved documents to answer questions.

The abstract from the paper is the following:

_Language model pre-training has been shown to capture a surprising amount of world knowledge, crucial for NLP tasks such as question answering. However, this knowledge is stored implicitly in the parameters of a neural network, requiring ever-larger networks to cover more facts. To capture knowledge in a more modular and interpretable way, we augment language model pre-training with a latent knowledge retriever, which allows the model to retrieve and attend over documents from a large corpus such as Wikipedia, used during pre-training, fine-tuning and inference. For the first time, we show how to pre-train such a knowledge retriever in an unsupervised manner, using masked language modeling as the learning signal and backpropagating through a retrieval step that considers millions of documents. We demonstrate the effectiveness of Retrieval-Augmented Language Model pre-training (REALM) by fine-tuning on the challenging task of Open-domain Question Answering (Open-QA). We compare against state-of-the-art models for both explicit and implicit knowledge storage on three popular Open-QA benchmarks, and find that we outperform all previous methods by a significant margin (4-16% absolute accuracy), while also providing qualitative benefits such as interpretability and modularity._

This model was contributed by [qqaatw](https://huggingface.co/qqaatw). The original code can be found [here](https://github.com/google-research/language/tree/master/language/realm).
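As a rough sketch of the retrieve-then-read workflow described above, the snippet below wires the retriever, tokenizer, and open-domain QA head together. The checkpoint name `google/realm-orqa-nq-openqa` and the exact call pattern are illustrative assumptions rather than a verified recipe; the individual classes are documented further down this page.

```python
from transformers import RealmForOpenQA, RealmRetriever, RealmTokenizer

# Assumed checkpoint name, used purely for illustration.
checkpoint = "google/realm-orqa-nq-openqa"

retriever = RealmRetriever.from_pretrained(checkpoint)
tokenizer = RealmTokenizer.from_pretrained(checkpoint)
model = RealmForOpenQA.from_pretrained(checkpoint, retriever=retriever)

question = "Who is the pioneer in modern computer science?"
question_ids = tokenizer([question], return_tensors="pt")

# Reference answer tokens let the reader head also report a loss.
answer_ids = tokenizer(
    ["alan mathison turing"],
    add_special_tokens=False,
    return_token_type_ids=False,
    return_attention_mask=False,
).input_ids

reader_output, predicted_answer_ids = model(**question_ids, answer_ids=answer_ids, return_dict=False)
predicted_answer = tokenizer.decode(predicted_answer_ids)
print(predicted_answer, reader_output.loss)
```

Roughly, the retrieval happens inside the forward pass: the model scores the question against its pre-computed block embeddings, re-encodes the top-scoring blocks together with the question, and reads an answer span from the best candidate.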
1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/realm/configuration_realm.py#L44" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">vocab_size<span class="opacity-60"> = 30522</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">hidden_size<span class="opacity-60"> = 768</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">retriever_proj_size<span class="opacity-60"> = 128</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">num_hidden_layers<span class="opacity-60"> = 12</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">num_attention_heads<span class="opacity-60"> = 12</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">num_candidates<span class="opacity-60"> = 8</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">intermediate_size<span class="opacity-60"> = 3072</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">hidden_act<span class="opacity-60"> = 'gelu_new'</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">hidden_dropout_prob<span class="opacity-60"> = 0.1</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_probs_dropout_prob<span class="opacity-60"> = 0.1</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">max_position_embeddings<span class="opacity-60"> = 512</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">type_vocab_size<span class="opacity-60"> = 2</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white 
dark:hover:text-black">initializer_range<span class="opacity-60"> = 0.02</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">layer_norm_eps<span class="opacity-60"> = 1e-12</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">span_hidden_size<span class="opacity-60"> = 256</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">max_span_width<span class="opacity-60"> = 10</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">reader_layer_norm_eps<span class="opacity-60"> = 0.001</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">reader_beam_size<span class="opacity-60"> = 5</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">reader_seq_len<span class="opacity-60"> = 320</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">num_block_records<span class="opacity-60"> = 13353718</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">searcher_beam_size<span class="opacity-60"> = 5000</span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">pad_token_id<span class="opacity-60"> = 1</span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">bos_token_id<span class="opacity-60"> = 0</span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">eos_token_id<span class="opacity-60"> = 2</span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmConfig.vocab_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmConfig.vocab_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 
1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>vocab_size</strong> (<code>int</code>, <em>optional</em>, defaults to 30522) — Vocabulary size of the REALM model. Defines the number of different tokens that can be represented by the <code>inputs_ids</code> passed when calling <a href="/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmEmbedder">RealmEmbedder</a>, <a href="/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmScorer">RealmScorer</a>, <a href="/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmKnowledgeAugEncoder">RealmKnowledgeAugEncoder</a>, or <a href="/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmReader">RealmReader</a>.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmConfig.hidden_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmConfig.hidden_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>hidden_size</strong> (<code>int</code>, <em>optional</em>, defaults to 768) — Dimension of the encoder layers and the pooler layer.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmConfig.retriever_proj_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmConfig.retriever_proj_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 
0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>retriever_proj_size</strong> (<code>int</code>, <em>optional</em>, defaults to 128) — Dimension of the retriever(embedder) projection.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmConfig.num_hidden_layers" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmConfig.num_hidden_layers"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>num_hidden_layers</strong> (<code>int</code>, <em>optional</em>, defaults to 12) — Number of hidden layers in the Transformer encoder.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmConfig.num_attention_heads" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmConfig.num_attention_heads"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>num_attention_heads</strong> (<code>int</code>, <em>optional</em>, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmConfig.num_candidates" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmConfig.num_candidates"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" 
preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>num_candidates</strong> (<code>int</code>, <em>optional</em>, defaults to 8) — Number of candidates inputted to the RealmScorer or RealmKnowledgeAugEncoder.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmConfig.intermediate_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmConfig.intermediate_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>intermediate_size</strong> (<code>int</code>, <em>optional</em>, defaults to 3072) — Dimension of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmConfig.hidden_act" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmConfig.hidden_act"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>hidden_act</strong> (<code>str</code> or <code>function</code>, <em>optional</em>, defaults to <code>"gelu_new"</code>) — The non-linear activation 
function (function or string) in the encoder and pooler. If string, <code>"gelu"</code>, <code>"relu"</code>, <code>"selu"</code> and <code>"gelu_new"</code> are supported.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmConfig.hidden_dropout_prob" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmConfig.hidden_dropout_prob"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>hidden_dropout_prob</strong> (<code>float</code>, <em>optional</em>, defaults to 0.1) — The dropout probabilitiy for all fully connected layers in the embeddings, encoder, and pooler.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmConfig.attention_probs_dropout_prob" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmConfig.attention_probs_dropout_prob"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>attention_probs_dropout_prob</strong> (<code>float</code>, <em>optional</em>, defaults to 0.1) — The dropout ratio for the attention probabilities.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmConfig.max_position_embeddings" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmConfig.max_position_embeddings"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" 
- **max_position_embeddings** (`int`, *optional*, defaults to 512) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
- **type_vocab_size** (`int`, *optional*, defaults to 2) — The vocabulary size of the `token_type_ids` passed when calling [RealmEmbedder](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmEmbedder), [RealmScorer](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmScorer), [RealmKnowledgeAugEncoder](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmKnowledgeAugEncoder), or [RealmReader](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmReader).
- **initializer_range** (`float`, *optional*, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- **layer_norm_eps** (`float`, *optional*, defaults to 1e-12) — The epsilon used by the layer normalization layers.
- **span_hidden_size** (`int`, *optional*, defaults to 256) — Dimension of the reader's spans.
- **max_span_width** (`int`, *optional*, defaults to 10) — Max span width of the reader.
- **reader_layer_norm_eps** (`float`, *optional*, defaults to 1e-3) — The epsilon used by the reader's layer normalization layers.
- **reader_beam_size** (`int`, *optional*, defaults to 5) — Beam size of the reader.
- **reader_seq_len** (`int`, *optional*, defaults to 288+32) — Maximum sequence length of the reader.
- **num_block_records** (`int`, *optional*, defaults to 13353718) — Number of block records.
- **searcher_beam_size** (`int`, *optional*, defaults to 5000) — Beam size of the searcher. Note that when eval mode is enabled, *searcher_beam_size* will be the same as *reader_beam_size*.

This is the configuration class to store the configuration of

1. [RealmEmbedder](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmEmbedder)
2. [RealmScorer](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmScorer)
3. [RealmKnowledgeAugEncoder](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmKnowledgeAugEncoder)
4. [RealmRetriever](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmRetriever)
5. [RealmReader](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmReader)
6. [RealmForOpenQA](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmForOpenQA)

It is used to instantiate a REALM model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the REALM [google/realm-cc-news-pretrained-embedder](https://huggingface.co/google/realm-cc-news-pretrained-embedder) architecture.

Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs.
Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information.

Example:

```python
>>> from transformers import RealmConfig, RealmEmbedder

>>> # Initializing a REALM realm-cc-news-pretrained-* style configuration
>>> configuration = RealmConfig()

>>> # Initializing a model (with random weights) from the google/realm-cc-news-pretrained-embedder style configuration
>>> model = RealmEmbedder(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
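The defaults can also be overridden by passing any of the parameters documented above to the constructor. Below is a minimal sketch of building a smaller, non-default configuration; the specific values are illustrative only, not recommended settings:

```python
>>> from transformers import RealmConfig

>>> # A sketch of a non-default configuration (illustrative values)
>>> custom_configuration = RealmConfig(
...     num_candidates=4,        # fewer candidates for the scorer / knowledge-augmented encoder
...     reader_beam_size=3,      # smaller reader beam
...     searcher_beam_size=100,  # smaller searcher beam (tied to reader_beam_size in eval mode)
... )
>>> custom_configuration.num_candidates
4
```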
## RealmTokenizer

### class transformers.RealmTokenizer

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/realm/tokenization_realm.py#L95)

`( vocab_file, do_lower_case = True, do_basic_tokenize = True, never_split = None, unk_token = '[UNK]', sep_token = '[SEP]', pad_token = '[PAD]', cls_token = '[CLS]', mask_token = '[MASK]', tokenize_chinese_chars = True, strip_accents = None, **kwargs )`

**Parameters**
- **vocab_file** (`str`) — File containing the vocabulary.
- **do_lower_case** (`bool`, *optional*, defaults to `True`) — Whether or not to lowercase the input when tokenizing.
- **do_basic_tokenize** (`bool`, *optional*, defaults to `True`) — Whether or not to do basic tokenization before WordPiece.
- **never_split** (`Iterable`, *optional*) — Collection of tokens which will never be split during tokenization. Only has an effect when `do_basic_tokenize=True`.
- **unk_token** (`str`, *optional*, defaults to `"[UNK]"`) — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
- **sep_token** (`str`, *optional*, defaults to `"[SEP]"`) — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.
- **pad_token** (`str`, *optional*, defaults to `"[PAD]"`) — The token used for padding, for example when batching sequences of different lengths.
- **cls_token** (`str`, *optional*, defaults to `"[CLS]"`) — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.
- **mask_token** (`str`, *optional*, defaults to `"[MASK]"`) — The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.
- **tokenize_chinese_chars** (`bool`, *optional*, defaults to `True`) — Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see this [issue](https://github.com/huggingface/transformers/issues/328)).
- **strip_accents** (`bool`, *optional*) — Whether or not to strip all accents. If this option is not specified, then it will be determined by the value for `lowercase` (as in the original BERT).

Construct a REALM tokenizer.

[RealmTokenizer](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmTokenizer) is identical to [BertTokenizer](/docs/transformers/v4.34.0/en/model_doc/bert#transformers.BertTokenizer) and runs end-to-end tokenization: punctuation splitting and wordpiece.

This tokenizer inherits from [PreTrainedTokenizer](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer) which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
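Because the tokenizer follows the standard `PreTrainedTokenizer` API, it can be loaded from a checkpoint and called directly on text. The following is a minimal sketch, assuming the [google/realm-cc-news-pretrained-embedder](https://huggingface.co/google/realm-cc-news-pretrained-embedder) checkpoint ships a tokenizer vocabulary:

```python
>>> from transformers import RealmTokenizer

>>> # Load the tokenizer from a REALM checkpoint (assumes the checkpoint provides a vocabulary)
>>> tokenizer = RealmTokenizer.from_pretrained("google/realm-cc-news-pretrained-embedder")

>>> # Encode a single sequence; special tokens ([CLS], [SEP]) are added automatically
>>> encoded = tokenizer("What does REALM stand for?")

>>> # Encode a text/question pair, as used for question answering
>>> encoded_pair = tokenizer("What does REALM stand for?", "Retrieval-Augmented Language Model pre-training.")
```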
fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/realm/tokenization_realm.py#L300" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_0<span class="opacity-60">: typing.List[int]</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_1<span class="opacity-60">: typing.Optional[typing.List[int]] = None</span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><!-- HTML_TAG_START --><span><code>List[int]</code></span><!-- HTML_TAG_END --></span></p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmTokenizer.build_inputs_with_special_tokens.token_ids_0" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmTokenizer.build_inputs_with_special_tokens.token_ids_0"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>token_ids_0</strong> (<code>List[int]</code>) — List of IDs to which the special tokens will be added.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmTokenizer.build_inputs_with_special_tokens.token_ids_1" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmTokenizer.build_inputs_with_special_tokens.token_ids_1"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 
256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>token_ids_1</strong> (<code>List[int]</code>, <em>optional</em>) — Optional second list of IDs for sequence pairs.<!-- HTML_TAG_END --> </span></span> </li></ul> <div id="transformers.RealmTokenizer.build_inputs_with_special_tokens.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <!-- HTML_TAG_START --> <p><code>List[int]</code></p> <!-- HTML_TAG_END --> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"><!-- HTML_TAG_START --> </p><p>List of <a href="../glossary#input-ids">input IDs</a> with the appropriate special tokens.</p> <!-- HTML_TAG_END --><p></p> </div></div> <p data-svelte-h="svelte-q7bpic">Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. A REALM sequence has the following format:</p> <ul data-svelte-h="svelte-xi6653"><li>single sequence: <code>[CLS] X [SEP]</code></li> <li>pair of sequences: <code>[CLS] A [SEP] B [SEP]</code></li></ul></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.RealmTokenizer.get_special_tokens_mask"><!-- HTML_TAG_START --><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>get_special_tokens_mask</span></h4><!-- HTML_TAG_END --> <a id="transformers.RealmTokenizer.get_special_tokens_mask" class="header-link invisible with-hover:group-hover:visible pr-2" 
href="#transformers.RealmTokenizer.get_special_tokens_mask"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/realm/tokenization_realm.py#L325" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_0<span class="opacity-60">: typing.List[int]</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_1<span class="opacity-60">: typing.Optional[typing.List[int]] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">already_has_special_tokens<span class="opacity-60">: bool = False</span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><!-- HTML_TAG_START --><span><code>List[int]</code></span><!-- HTML_TAG_END --></span></p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmTokenizer.get_special_tokens_mask.token_ids_0" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmTokenizer.get_special_tokens_mask.token_ids_0"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 
0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>token_ids_0</strong> (<code>List[int]</code>) — List of IDs.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmTokenizer.get_special_tokens_mask.token_ids_1" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmTokenizer.get_special_tokens_mask.token_ids_1"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>token_ids_1</strong> (<code>List[int]</code>, <em>optional</em>) — Optional second list of IDs for sequence pairs.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmTokenizer.get_special_tokens_mask.already_has_special_tokens" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmTokenizer.get_special_tokens_mask.already_has_special_tokens"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>already_has_special_tokens</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether or not the token list is already formatted with special tokens for the model.<!-- HTML_TAG_END --> </span></span> </li></ul> <div id="transformers.RealmTokenizer.get_special_tokens_mask.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <!-- HTML_TAG_START --> <p><code>List[int]</code></p> <!-- HTML_TAG_END --> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"><!-- 
HTML_TAG_START --> </p><p>A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.</p> <!-- HTML_TAG_END --><p></p> </div></div> <p data-svelte-h="svelte-1f4f5kp">Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer <code>prepare_for_model</code> method.</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.RealmTokenizer.create_token_type_ids_from_sequences"><!-- HTML_TAG_START --><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>create_token_type_ids_from_sequences</span></h4><!-- HTML_TAG_END --> <a id="transformers.RealmTokenizer.create_token_type_ids_from_sequences" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.RealmTokenizer.create_token_type_ids_from_sequences"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/realm/tokenization_realm.py#L353" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span 
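For illustration, here is a minimal sketch (not part of the upstream API reference, and assuming the `google/realm-cc-news-pretrained-encoder` checkpoint used elsewhere on this page) of inspecting the special tokens mask on an already-formatted input:

```python
>>> from transformers import RealmTokenizer

>>> tokenizer = RealmTokenizer.from_pretrained("google/realm-cc-news-pretrained-encoder")
>>> # Encoding a sentence already inserts [CLS] ... [SEP]
>>> ids = tokenizer("Hello world!")["input_ids"]
>>> # With already_has_special_tokens=True the existing ids are inspected directly;
>>> # the mask should contain 1 at the [CLS]/[SEP] positions and 0 elsewhere
>>> mask = tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True)
```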
data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_0<span class="opacity-60">: typing.List[int]</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_1<span class="opacity-60">: typing.Optional[typing.List[int]] = None</span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><!-- HTML_TAG_START --><span><code>List[int]</code></span><!-- HTML_TAG_END --></span></p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmTokenizer.create_token_type_ids_from_sequences.token_ids_0" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmTokenizer.create_token_type_ids_from_sequences.token_ids_0"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>token_ids_0</strong> (<code>List[int]</code>) — List of IDs.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmTokenizer.create_token_type_ids_from_sequences.token_ids_1" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmTokenizer.create_token_type_ids_from_sequences.token_ids_1"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" 
fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>token_ids_1</strong> (<code>List[int]</code>, <em>optional</em>) — Optional second list of IDs for sequence pairs.<!-- HTML_TAG_END --> </span></span> </li></ul> <div id="transformers.RealmTokenizer.create_token_type_ids_from_sequences.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <!-- HTML_TAG_START --> <p><code>List[int]</code></p> <!-- HTML_TAG_END --> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"><!-- HTML_TAG_START --> </p><p>List of <a href="../glossary#token-type-ids">token type IDs</a> according to the given sequence(s).</p> <!-- HTML_TAG_END --><p></p> </div></div> <p data-svelte-h="svelte-11hyn6f">Create a mask from the two sequences passed to be used in a sequence-pair classification task. A REALM sequence</p> <div class="relative group rounded-md"><a id="transformers.RealmTokenizer.create_token_type_ids_from_sequences.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmTokenizer.create_token_type_ids_from_sequences.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-qjgeij">pair mask has the following format:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START -->0<span class="hljs-number"> 0 </span>0<span class="hljs-number"> 0 </span>0<span class="hljs-number"> 0 </span>0<span class="hljs-number"> 0 </span>0<span 
class="hljs-number"> 0 </span>0<span class="hljs-number"> 1 </span>1<span class="hljs-number"> 1 </span>1<span class="hljs-number"> 1 </span>1<span class="hljs-number"> 1 </span>1 1 | first sequence | second sequence |<!-- HTML_TAG_END --></pre></div></div> <p data-svelte-h="svelte-owoxgn">If <code>token_ids_1</code> is <code>None</code>, this method only returns the first portion of the mask (0s).</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.RealmTokenizer.save_vocabulary"><!-- HTML_TAG_START --><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>save_vocabulary</span></h4><!-- HTML_TAG_END --> <a id="transformers.RealmTokenizer.save_vocabulary" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.RealmTokenizer.save_vocabulary"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/realm/tokenization_realm.py#L382" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span 
class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">save_directory<span class="opacity-60">: str</span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">filename_prefix<span class="opacity-60">: typing.Optional[str] = None</span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> </div></div></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.RealmTokenizer.batch_encode_candidates"><!-- HTML_TAG_START --><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>batch_encode_candidates</span></h4><!-- HTML_TAG_END --> <a id="transformers.RealmTokenizer.batch_encode_candidates" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.RealmTokenizer.batch_encode_candidates"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/realm/tokenization_realm.py#L227" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span 
data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">text<span class="opacity-60"></span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><!-- HTML_TAG_START --><span><a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.BatchEncoding">BatchEncoding</a></span><!-- HTML_TAG_END --></span></p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmTokenizer.batch_encode_candidates.text" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmTokenizer.batch_encode_candidates.text"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>text</strong> (<code>List[List[str]]</code>) — The batch of sequences to be encoded. 
Each sequence must be in this format: (batch_size, num_candidates, text).<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmTokenizer.batch_encode_candidates.text_pair" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmTokenizer.batch_encode_candidates.text_pair"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>text_pair</strong> (<code>List[List[str]]</code>, <em>optional</em>) — The batch of sequences to be encoded. Each sequence must be in this format: (batch_size, num_candidates, text). **kwargs — Keyword arguments of the <strong>call</strong> method.<!-- HTML_TAG_END --> </span></span> </li></ul> <div id="transformers.RealmTokenizer.batch_encode_candidates.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <!-- HTML_TAG_START --> <p><a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.BatchEncoding">BatchEncoding</a></p> <!-- HTML_TAG_END --> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"><!-- HTML_TAG_START --> </p><p>Encoded text or text pair.</p> <!-- HTML_TAG_END --><p></p> </div></div> <p data-svelte-h="svelte-ykazgt">Encode a batch of text or text pair. This method is similar to regular <strong>call</strong> method but has the following differences:</p> <ol data-svelte-h="svelte-1kcpahy"><li>Handle additional num_candidate axis. 
(batch_size, num_candidates, text)</li> <li>Always pad the sequences to <em>max_length</em>.</li> <li>Must specify <em>max_length</em> in order to stack packs of candidates into a batch.</li></ol> <ul data-svelte-h="svelte-xi6653"><li>single sequence: <code>[CLS] X [SEP]</code></li> <li>pair of sequences: <code>[CLS] A [SEP] B [SEP]</code></li></ul> <div class="relative group rounded-md"><a id="transformers.RealmTokenizer.batch_encode_candidates.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmTokenizer.batch_encode_candidates.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> RealmTokenizer <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># batch_size = 2, num_candidates = 2</span> <span class="hljs-meta">&gt;&gt;&gt; </span>text = [[<span class="hljs-string">"Hello world!"</span>, <span class="hljs-string">"Nice to meet you!"</span>], [<span class="hljs-string">"The cute cat."</span>, <span class="hljs-string">"The adorable dog."</span>]] <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = RealmTokenizer.from_pretrained(<span class="hljs-string">"google/realm-cc-news-pretrained-encoder"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>tokenized_text = tokenizer.batch_encode_candidates(text, max_length=<span 
class="hljs-number">10</span>, return_tensors=<span class="hljs-string">"pt"</span>)<!-- HTML_TAG_END --></pre></div></div></div></div> <h2 class="relative group"><a id="transformers.RealmTokenizerFast" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmTokenizerFast"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-8bimtr">RealmTokenizerFast</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.RealmTokenizerFast"><!-- HTML_TAG_START --><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">RealmTokenizerFast</span></span></h3><!-- HTML_TAG_END --> <a id="transformers.RealmTokenizerFast" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.RealmTokenizerFast"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 
56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/realm/tokenization_realm_fast.py#L102" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">vocab_file<span class="opacity-60"> = None</span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">tokenizer_file<span class="opacity-60"> = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">do_lower_case<span class="opacity-60"> = True</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">unk_token<span class="opacity-60"> = '[UNK]'</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">sep_token<span class="opacity-60"> = '[SEP]'</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">pad_token<span class="opacity-60"> = '[PAD]'</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">cls_token<span class="opacity-60"> = '[CLS]'</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">mask_token<span class="opacity-60"> = '[MASK]'</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">tokenize_chinese_chars<span class="opacity-60"> = True</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">strip_accents<span class="opacity-60"> = None</span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmTokenizerFast.vocab_file" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmTokenizerFast.vocab_file"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" 
aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>vocab_file</strong> (<code>str</code>) — File containing the vocabulary.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmTokenizerFast.do_lower_case" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmTokenizerFast.do_lower_case"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>do_lower_case</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether or not to lowercase the input when tokenizing.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmTokenizerFast.unk_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmTokenizerFast.unk_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>unk_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"[UNK]"</code>) — The unknown token. 
A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmTokenizerFast.sep_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmTokenizerFast.sep_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>sep_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"[SEP]"</code>) — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmTokenizerFast.pad_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmTokenizerFast.pad_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>pad_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"[PAD]"</code>) — The token used for padding, for example when batching sequences of different lengths.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmTokenizerFast.cls_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmTokenizerFast.cls_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" 
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>cls_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"[CLS]"</code>) — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmTokenizerFast.mask_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmTokenizerFast.mask_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>mask_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"[MASK]"</code>) — The token used for masking values. This is the token used when training this model with masked language modeling. 
This is the token which the model will try to predict.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmTokenizerFast.clean_text" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmTokenizerFast.clean_text"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>clean_text</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether or not to clean the text before tokenization by removing any control characters and replacing all whitespaces by the classic one.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmTokenizerFast.tokenize_chinese_chars" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmTokenizerFast.tokenize_chinese_chars"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>tokenize_chinese_chars</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether or not to tokenize Chinese characters. 
This should likely be deactivated for Japanese (see <a href="https://github.com/huggingface/transformers/issues/328" rel="nofollow">this issue</a>).<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmTokenizerFast.strip_accents" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmTokenizerFast.strip_accents"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>strip_accents</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to strip all accents. If this option is not specified, then it will be determined by the value for <code>lowercase</code> (as in the original BERT).<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmTokenizerFast.wordpieces_prefix" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmTokenizerFast.wordpieces_prefix"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>wordpieces_prefix</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"##"</code>) — The prefix for subwords.<!-- HTML_TAG_END --> </span></span> </li></ul> </div></div> <p data-svelte-h="svelte-q0n2sz">Construct a “fast” REALM tokenizer (backed by HuggingFace’s <em>tokenizers</em> library). 
Based on WordPiece.</p> <p data-svelte-h="svelte-19u8ha2"><a href="/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmTokenizerFast">RealmTokenizerFast</a> is identical to <a href="/docs/transformers/v4.34.0/en/model_doc/bert#transformers.BertTokenizerFast">BertTokenizerFast</a> and runs end-to-end tokenization: punctuation splitting and wordpiece.</p> <p data-svelte-h="svelte-ttxvs6">This tokenizer inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast">PreTrainedTokenizerFast</a> which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.RealmTokenizerFast.batch_encode_candidates"><!-- HTML_TAG_START --><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>batch_encode_candidates</span></h4><!-- HTML_TAG_END --> <a id="transformers.RealmTokenizerFast.batch_encode_candidates" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.RealmTokenizerFast.batch_encode_candidates"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/realm/tokenization_realm_fast.py#L193" target="_blank"><span 
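A minimal loading sketch (an illustration, assuming the same `google/realm-cc-news-pretrained-encoder` checkpoint used in the examples above):

```python
>>> from transformers import RealmTokenizerFast

>>> tokenizer = RealmTokenizerFast.from_pretrained("google/realm-cc-news-pretrained-encoder")
>>> # The fast tokenizer exposes the same call API as the slow one
>>> inputs = tokenizer("Hello world!", return_tensors="pt")
```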
data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">text<span class="opacity-60"></span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><!-- HTML_TAG_START --><span><a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.BatchEncoding">BatchEncoding</a></span><!-- HTML_TAG_END --></span></p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmTokenizerFast.batch_encode_candidates.text" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmTokenizerFast.batch_encode_candidates.text"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>text</strong> (<code>List[List[str]]</code>) — The batch of sequences to be encoded. 
Each sequence must be in this format: (batch_size, num_candidates, text).<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmTokenizerFast.batch_encode_candidates.text_pair" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmTokenizerFast.batch_encode_candidates.text_pair"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>text_pair</strong> (<code>List[List[str]]</code>, <em>optional</em>) — The batch of sequences to be encoded. Each sequence must be in this format: (batch_size, num_candidates, text). **kwargs — Keyword arguments of the <strong>call</strong> method.<!-- HTML_TAG_END --> </span></span> </li></ul> <div id="transformers.RealmTokenizerFast.batch_encode_candidates.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <!-- HTML_TAG_START --> <p><a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.BatchEncoding">BatchEncoding</a></p> <!-- HTML_TAG_END --> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"><!-- HTML_TAG_START --> </p><p>Encoded text or text pair.</p> <!-- HTML_TAG_END --><p></p> </div></div> <p data-svelte-h="svelte-ykazgt">Encode a batch of text or text pair. This method is similar to regular <strong>call</strong> method but has the following differences:</p> <ol data-svelte-h="svelte-1kcpahy"><li>Handle additional num_candidate axis. 
(batch_size, num_candidates, text)</li> <li>Always pad the sequences to <em>max_length</em>.</li> <li>Must specify <em>max_length</em> in order to stack packs of candidates into a batch.</li></ol> <ul data-svelte-h="svelte-xi6653"><li>single sequence: <code>[CLS] X [SEP]</code></li> <li>pair of sequences: <code>[CLS] A [SEP] B [SEP]</code></li></ul> <div class="relative group rounded-md"><a id="transformers.RealmTokenizerFast.batch_encode_candidates.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmTokenizerFast.batch_encode_candidates.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> RealmTokenizerFast <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># batch_size = 2, num_candidates = 2</span> <span class="hljs-meta">&gt;&gt;&gt; </span>text = [[<span class="hljs-string">"Hello world!"</span>, <span class="hljs-string">"Nice to meet you!"</span>], [<span class="hljs-string">"The cute cat."</span>, <span class="hljs-string">"The adorable dog."</span>]] <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = RealmTokenizerFast.from_pretrained(<span class="hljs-string">"google/realm-cc-news-pretrained-encoder"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>tokenized_text = tokenizer.batch_encode_candidates(text, max_length=<span 
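The candidate encodings produced this way are typically consumed by the RealmScorer class documented below through its `candidate_*` arguments. The following is a minimal sketch continuing from the example above; the scorer checkpoint name, the `num_candidates` override, and the `relevance_score` output field are assumptions made for illustration rather than something this page prescribes:

```python
>>> from transformers import RealmScorer

>>> # Illustrative checkpoint and config override (assumed, not prescribed by this page).
>>> scorer = RealmScorer.from_pretrained("google/realm-cc-news-pretrained-scorer", num_candidates=2)

>>> # One query per batch entry, padded to a common length.
>>> queries = tokenizer(["How are you?", "What is in the picture?"], padding=True, return_tensors="pt")

>>> outputs = scorer(
...     **queries,
...     candidate_input_ids=tokenized_text.input_ids,
...     candidate_attention_mask=tokenized_text.attention_mask,
...     candidate_token_type_ids=tokenized_text.token_type_ids,
... )
>>> relevance_score = outputs.relevance_score  # candidate scores (before softmax)
```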
class="hljs-number">10</span>, return_tensors=<span class="hljs-string">"pt"</span>)<!-- HTML_TAG_END --></pre></div></div></div></div> <h2 class="relative group"><a id="transformers.RealmRetriever" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmRetriever"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1dy9sng">RealmRetriever</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.RealmRetriever"><!-- HTML_TAG_START --><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">RealmRetriever</span></span></h3><!-- HTML_TAG_END --> <a id="transformers.RealmRetriever" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.RealmRetriever"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" 
fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/realm/retrieval_realm.py#L72" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">block_records<span class="opacity-60"></span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">tokenizer<span class="opacity-60"></span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmRetriever.block_records" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmRetriever.block_records"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>block_records</strong> (<code>np.ndarray</code>) — A numpy array which cantains evidence texts.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmRetriever.tokenizer" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmRetriever.tokenizer"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 
1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>tokenizer</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmTokenizer">RealmTokenizer</a>) — The tokenizer to encode retrieved texts.<!-- HTML_TAG_END --> </span></span> </li></ul> </div></div> <p data-svelte-h="svelte-ongswt">The retriever of REALM outputting the retrieved evidence block and whether the block has answers as well as answer positions.”</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.RealmRetriever.block_has_answer"><!-- HTML_TAG_START --><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>block_has_answer</span></h4><!-- HTML_TAG_END --> <a id="transformers.RealmRetriever.block_has_answer" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.RealmRetriever.block_has_answer"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/realm/retrieval_realm.py#L129" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span 
data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">concat_inputs<span class="opacity-60"></span></span> </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">answer_ids<span class="opacity-60"></span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> </div></div> <p data-svelte-h="svelte-1ig21z">check if retrieved_blocks has answers.</p></div></div> <h2 class="relative group"><a id="transformers.RealmEmbedder" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmEmbedder"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-12ov3r8">RealmEmbedder</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.RealmEmbedder"><!-- HTML_TAG_START --><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">RealmEmbedder</span></span></h3><!-- HTML_TAG_END --> <a id="transformers.RealmEmbedder" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.RealmEmbedder"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" 
width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/realm/modeling_realm.py#L1151" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60"></span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmEmbedder.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmEmbedder.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmConfig">RealmConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.<!-- HTML_TAG_END --> </span></span> </li></ul> </div></div> <p data-svelte-h="svelte-2kzyme">The embedder of REALM outputting projected score that will be used to calculate relevance score. 
This model is a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.RealmEmbedder.forward"><!-- HTML_TAG_START --><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>forward</span></h4><!-- HTML_TAG_END --> <a id="transformers.RealmEmbedder.forward" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.RealmEmbedder.forward"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/realm/modeling_realm.py#L1167" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60">: 
Parameters

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.__call__()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details. [What are input IDs?](../glossary#input-ids)
- **attention_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.

  [What are attention masks?](../glossary#attention-mask)
- **token_type_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:
  - 0 corresponds to a *sentence A* token,
  - 1 corresponds to a *sentence B* token.

  [What are token type IDs?](../glossary#token-type-ids)
- **position_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids)
- **head_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`:
  - 1 indicates the head is **not masked**,
  - 0 indicates the head is **masked**.
- **inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.

Returns

`transformers.models.realm.modeling_realm.RealmEmbedderOutput` or `tuple(torch.FloatTensor)`

A `transformers.models.realm.modeling_realm.RealmEmbedderOutput` or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([RealmConfig](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmConfig)) and inputs.

- **projected_score** (`torch.FloatTensor` of shape `(batch_size, config.retriever_proj_size)`) — Projected score.
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The [RealmEmbedder](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmEmbedder) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

```python
>>> from transformers import AutoTokenizer, RealmEmbedder
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("google/realm-cc-news-pretrained-embedder")
>>> model = RealmEmbedder.from_pretrained("google/realm-cc-news-pretrained-embedder")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)

>>> projected_score = outputs.projected_score
```
## RealmScorer

### class transformers.RealmScorer

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/realm/modeling_realm.py#L1233)

( config, query_embedder = None )

Parameters

- **config** ([RealmConfig](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.
- **query_embedder** ([RealmEmbedder](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmEmbedder)) — Embedder for input sequences. If not specified, it will use the same embedder as candidate sequences.

The scorer of REALM, outputting relevance scores representing the score of the document candidates (before softmax).
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

#### forward

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/realm/modeling_realm.py#L1249)

( input_ids: typing.Optional[torch.LongTensor] = None, attention_mask: typing.Optional[torch.FloatTensor] = None, token_type_ids: typing.Optional[torch.LongTensor] = None, position_ids: typing.Optional[torch.LongTensor] = None, candidate_input_ids: typing.Optional[torch.LongTensor] = None, candidate_attention_mask: typing.Optional[torch.FloatTensor] = None, candidate_token_type_ids: typing.Optional[torch.LongTensor] = None, candidate_inputs_embeds: typing.Optional[torch.FloatTensor] = None, head_mask: typing.Optional[torch.FloatTensor] = None, inputs_embeds: typing.Optional[torch.FloatTensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → `transformers.models.realm.modeling_realm.RealmScorerOutput` or `tuple(torch.FloatTensor)`
#### forward

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/realm/modeling_realm.py#L1249)

( input_ids: typing.Optional[torch.LongTensor] = None, attention_mask: typing.Optional[torch.FloatTensor] = None, token_type_ids: typing.Optional[torch.LongTensor] = None, position_ids: typing.Optional[torch.LongTensor] = None, candidate_input_ids: typing.Optional[torch.LongTensor] = None, candidate_attention_mask: typing.Optional[torch.FloatTensor] = None, candidate_token_type_ids: typing.Optional[torch.LongTensor] = None, candidate_inputs_embeds: typing.Optional[torch.FloatTensor] = None, head_mask: typing.Optional[torch.FloatTensor] = None, inputs_embeds: typing.Optional[torch.FloatTensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → `transformers.models.realm.modeling_realm.RealmScorerOutput` or `tuple(torch.FloatTensor)`

Parameters

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.__call__()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details. [What are input IDs?](../glossary#input-ids)
- **attention_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask)
- **token_type_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: 0 corresponds to a *sentence A* token, 1 corresponds to a *sentence B* token. [What are token type IDs?](../glossary#token-type-ids)
- **position_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids)
- **head_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: 1 indicates the head is **not masked**, 0 indicates the head is **masked**.
- **inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.
- **candidate_input_ids** (`torch.LongTensor` of shape `(batch_size, num_candidates, sequence_length)`) — Indices of candidate input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.__call__()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details. [What are input IDs?](../glossary#input-ids)
- **candidate_attention_mask** (`torch.FloatTensor` of shape `(batch_size, num_candidates, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask)
- **candidate_token_type_ids** (`torch.LongTensor` of shape `(batch_size, num_candidates, sequence_length)`, *optional*) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: 0 corresponds to a *sentence A* token, 1 corresponds to a *sentence B* token. [What are token type IDs?](../glossary#token-type-ids)
- **candidate_inputs_embeds** (`torch.FloatTensor` of shape `(batch_size * num_candidates, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `candidate_input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `candidate_input_ids` indices into associated vectors than the model's internal embedding lookup matrix.

Returns

`transformers.models.realm.modeling_realm.RealmScorerOutput` or `tuple(torch.FloatTensor)`

A `transformers.models.realm.modeling_realm.RealmScorerOutput` or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([RealmConfig](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmConfig)) and inputs.

- **relevance_score** (`torch.FloatTensor` of shape `(batch_size, config.num_candidates)`) — The relevance score of document candidates (before softmax).
- **query_score** (`torch.FloatTensor` of shape `(batch_size, config.retriever_proj_size)`) — Query score derived from the query embedder.
- **candidate_score** (`torch.FloatTensor` of shape `(batch_size, config.num_candidates, config.retriever_proj_size)`) — Candidate score derived from the embedder.

The [RealmScorer](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmScorer) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:

```python
>>> import torch
>>> from transformers import AutoTokenizer, RealmScorer

>>> tokenizer = AutoTokenizer.from_pretrained("google/realm-cc-news-pretrained-scorer")
>>> model = RealmScorer.from_pretrained("google/realm-cc-news-pretrained-scorer", num_candidates=2)

>>> # batch_size = 2, num_candidates = 2
>>> input_texts = ["How are you?", "What is the item in the picture?"]
>>> candidates_texts = [["Hello world!", "Nice to meet you!"], ["A cute cat.", "An adorable dog."]]
>>> inputs = tokenizer(input_texts, return_tensors="pt")
>>> candidates_inputs = tokenizer.batch_encode_candidates(candidates_texts, max_length=10, return_tensors="pt")

>>> outputs = model(
...     **inputs,
...     candidate_input_ids=candidates_inputs.input_ids,
...     candidate_attention_mask=candidates_inputs.attention_mask,
...     candidate_token_type_ids=candidates_inputs.token_type_ids,
... )
>>> relevance_score = outputs.relevance_score
```
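The `relevance_score` returned above holds pre-softmax scores of shape `(batch_size, config.num_candidates)`. As a small follow-up sketch (not part of the original example), a softmax over the candidate axis turns these scores into a retrieval distribution per query:

```python
>>> import torch

>>> # Normalise the pre-softmax relevance scores over the candidate axis.
>>> retrieval_probs = torch.softmax(relevance_score, dim=-1)  # shape: (batch_size, num_candidates)
```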
## RealmKnowledgeAugEncoder

### class transformers.RealmKnowledgeAugEncoder

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/realm/modeling_realm.py#L1381)

( config )

Parameters

- **config** ([RealmConfig](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

The knowledge-augmented encoder of REALM, outputting masked language model logits and marginal log-likelihood loss. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

#### forward

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/realm/modeling_realm.py#L1402)

( input_ids: typing.Optional[torch.LongTensor] = None, attention_mask: typing.Optional[torch.FloatTensor] = None, token_type_ids: typing.Optional[torch.LongTensor] = None, position_ids: typing.Optional[torch.LongTensor] = None, head_mask: typing.Optional[torch.FloatTensor] = None, inputs_embeds: typing.Optional[torch.FloatTensor] = None, relevance_score: typing.Optional[torch.FloatTensor] = None, labels: typing.Optional[torch.LongTensor] = None, mlm_mask: typing.Optional[torch.LongTensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → [transformers.modeling_outputs.MaskedLMOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.MaskedLMOutput) or `tuple(torch.FloatTensor)`

Parameters
- **input_ids** (`torch.LongTensor` of shape `(batch_size, num_candidates, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [PreTrainedTokenizer.__call__()](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details. [What are input IDs?](../glossary#input-ids)
- **attention_mask** (`torch.FloatTensor` of shape `(batch_size, num_candidates, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask)
- **token_type_ids** (`torch.LongTensor` of shape `(batch_size, num_candidates, sequence_length)`, *optional*) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: 0 corresponds to a *sentence A* token, 1 corresponds to a *sentence B* token. [What are token type IDs?](../glossary#token-type-ids)
- **position_ids** (`torch.LongTensor` of shape `(batch_size, num_candidates, sequence_length)`, *optional*) — Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids)
- **head_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: 1 indicates the head is **not masked**, 0 indicates the head is **masked**.
- **inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, num_candidates, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.
- **relevance_score** (`torch.FloatTensor` of shape `(batch_size, num_candidates)`, *optional*) — Relevance score derived from RealmScorer, which must be specified if you want to compute the masked language modeling loss.
- **labels** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ..., config.vocab_size]` (see the `input_ids` docstring). Tokens with indices set to `-100` are ignored (masked); the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
- **mlm_mask** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid calculating the joint loss on certain positions. If not specified, the loss will not be masked.
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul><!-- HTML_TAG_END --> </span></span> </li></ul> <div id="transformers.RealmKnowledgeAugEncoder.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <!-- HTML_TAG_START --> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.MaskedLMOutput">transformers.modeling_outputs.MaskedLMOutput</a> or <code>tuple(torch.FloatTensor)</code></p> <!-- HTML_TAG_END --> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"><!-- HTML_TAG_START --> </p><p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.MaskedLMOutput">transformers.modeling_outputs.MaskedLMOutput</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmConfig">RealmConfig</a>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<code>torch.FloatTensor</code> of shape <code>(1,)</code>, <em>optional</em>, returned when <code>labels</code> is provided) — Masked language modeling (MLM) loss.</p> </li> <li> <p><strong>logits</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, config.vocab_size)</code>) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).</p> </li> <li> <p><strong>hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.</p> </li> <li> <p><strong>attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> <!-- HTML_TAG_END --><p></p> </div></div> <p data-svelte-h="svelte-3o90s">The <a href="/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmKnowledgeAugEncoder">RealmKnowledgeAugEncoder</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter 
Example:

```python
>>> import torch
>>> from transformers import AutoTokenizer, RealmKnowledgeAugEncoder

>>> tokenizer = AutoTokenizer.from_pretrained("google/realm-cc-news-pretrained-encoder")
>>> model = RealmKnowledgeAugEncoder.from_pretrained(
...     "google/realm-cc-news-pretrained-encoder", num_candidates=2
... )

>>> # batch_size = 2, num_candidates = 2
>>> text = [["Hello world!", "Nice to meet you!"], ["The cute cat.", "The adorable dog."]]
>>> inputs = tokenizer.batch_encode_candidates(text, max_length=10, return_tensors="pt")
>>> outputs = model(**inputs)
>>> logits = outputs.logits
```
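If you also want the masked language modeling loss described above, `labels` and a `relevance_score` can be passed alongside the candidate inputs. The snippet below is only a minimal sketch building on the example above: the label tensor and relevance scores are dummy values with the documented shapes, not the output of a real RealmScorer.

```python
>>> # Minimal sketch (dummy values, documented shapes only):
>>> # labels: (batch_size, sequence_length), relevance_score: (batch_size, num_candidates)
>>> labels = inputs.input_ids[:, 0, :]  # reuse the first candidate's token ids as placeholder labels
>>> relevance_score = torch.randn(2, 2)  # would normally come from RealmScorer

>>> outputs = model(**inputs, relevance_score=relevance_score, labels=labels)
>>> loss = outputs.loss
```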
## RealmReader

### class transformers.RealmReader
role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/realm/modeling_realm.py#L1531" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60"></span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmReader.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmReader.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmConfig">RealmConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.<!-- HTML_TAG_END --> </span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1ybpoit">The reader of REALM. This model is a PyTorch <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module" rel="nofollow">torch.nn.Module</a> sub-class. 
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.RealmReader.forward"><!-- HTML_TAG_START --><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>forward</span></h4><!-- HTML_TAG_END --> <a id="transformers.RealmReader.forward" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.RealmReader.forward"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/realm/modeling_realm.py#L1542" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white 
dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_type_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">position_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">head_mask<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">inputs_embeds<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">relevance_score<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">block_mask<span class="opacity-60">: typing.Optional[torch.BoolTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">start_positions<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">end_positions<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">has_answers<span class="opacity-60">: typing.Optional[torch.BoolTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><!-- HTML_TAG_START --><span><code>transformers.models.realm.modeling_realm.RealmReaderOutput</code> or <code>tuple(torch.FloatTensor)</code></span><!-- HTML_TAG_END --></span></p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span 
class="group flex space-x-1.5 items-start"><a id="transformers.RealmReader.forward.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmReader.forward.input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>input_ids</strong> (<code>torch.LongTensor</code> of shape <code>(reader_beam_size, sequence_length)</code>) — Indices of input sequence tokens in the vocabulary.<p></p> <p>Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p> <p><a href="../glossary#input-ids">What are input IDs?</a><!-- HTML_TAG_END --> </p></span></span></li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmReader.forward.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmReader.forward.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>attention_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(reader_beam_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a><!-- HTML_TAG_END --> </p></span></span></li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmReader.forward.token_type_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmReader.forward.token_type_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>token_type_ids</strong> (<code>torch.LongTensor</code> of shape <code>(reader_beam_size, sequence_length)</code>, <em>optional</em>) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in <code>[0, 1]</code>:<p></p> <ul> <li>0 corresponds to a <em>sentence A</em> token,</li> <li>1 corresponds to a <em>sentence B</em> token.</li> </ul> <p><a href="../glossary#token-type-ids">What are token type IDs?</a><!-- HTML_TAG_END --> </p></span></span></li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmReader.forward.position_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmReader.forward.position_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>position_ids</strong> (<code>torch.LongTensor</code> of shape <code>(reader_beam_size, sequence_length)</code>, <em>optional</em>) — Indices of positions of each input sequence tokens in the position embeddings. 
Selected in the range <code>[0, config.max_position_embeddings - 1]</code>.<p></p> <p><a href="../glossary#position-ids">What are position IDs?</a><!-- HTML_TAG_END --> </p></span></span></li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmReader.forward.head_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmReader.forward.head_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>head_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(num_heads,)</code> or <code>(num_layers, num_heads)</code>, <em>optional</em>) — Mask to nullify selected heads of the self-attention modules. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 indicates the head is <strong>not masked</strong>,</li> <li>0 indicates the head is <strong>masked</strong>.</li> </ul><!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmReader.forward.inputs_embeds" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmReader.forward.inputs_embeds"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>inputs_embeds</strong> (<code>torch.FloatTensor</code> of shape <code>(reader_beam_size, sequence_length, hidden_size)</code>, <em>optional</em>) — Optionally, instead of passing <code>input_ids</code> you can choose to directly pass an embedded representation. 
This is useful if you want more control over how to convert <em>input_ids</em> indices into associated vectors than the model’s internal embedding lookup matrix.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmReader.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmReader.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. See <code>attentions</code> under returned tensors for more detail.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmReader.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmReader.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. 
See <code>hidden_states</code> under returned tensors for more detail.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmReader.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmReader.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmReader.forward.relevance_score" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmReader.forward.relevance_score"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>relevance_score</strong> (<code>torch.FloatTensor</code> of shape <code>(searcher_beam_size,)</code>, <em>optional</em>) — Relevance score, which must be specified if you want to compute the logits and marginal log loss.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmReader.forward.block_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmReader.forward.block_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" 
preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>block_mask</strong> (<code>torch.BoolTensor</code> of shape <code>(searcher_beam_size, sequence_length)</code>, <em>optional</em>) — The mask of the evidence block, which must be specified if you want to compute the logits and marginal log loss.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmReader.forward.start_positions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmReader.forward.start_positions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>start_positions</strong> (<code>torch.LongTensor</code> of shape <code>(searcher_beam_size,)</code>, <em>optional</em>) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (<code>sequence_length</code>). 
Position outside of the sequence are not taken into account for computing the loss.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmReader.forward.end_positions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmReader.forward.end_positions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>end_positions</strong> (<code>torch.LongTensor</code> of shape <code>(searcher_beam_size,)</code>, <em>optional</em>) — Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (<code>sequence_length</code>). Position outside of the sequence are not taken into account for computing the loss.<!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmReader.forward.has_answers" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmReader.forward.has_answers"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>has_answers</strong> (<code>torch.BoolTensor</code> of shape <code>(searcher_beam_size,)</code>, <em>optional</em>) — Whether or not the evidence block has answer(s).<!-- HTML_TAG_END --> </span></span> </li></ul> <div id="transformers.RealmReader.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <!-- HTML_TAG_START --> <p><code>transformers.models.realm.modeling_realm.RealmReaderOutput</code> or <code>tuple(torch.FloatTensor)</code></p> <!-- HTML_TAG_END --> <span class="flex-auto border-t-2 border-gray-100 
A `transformers.models.realm.modeling_realm.RealmReaderOutput` or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([RealmConfig](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmConfig)) and inputs.

- **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `start_positions`, `end_positions`, `has_answers` are provided) — Total loss.
- **retriever_loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `start_positions`, `end_positions`, `has_answers` are provided) — Retriever loss.
- **reader_loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `start_positions`, `end_positions`, `has_answers` are provided) — Reader loss.
- **retriever_correct** (`torch.BoolTensor` of shape `(config.searcher_beam_size,)`, *optional*) — Whether or not an evidence block contains the answer.
- **reader_correct** (`torch.BoolTensor` of shape `(config.reader_beam_size, num_candidates)`, *optional*) — Whether or not a span candidate contains the answer.
- **block_idx** (`torch.LongTensor` of shape `()`) — The index of the retrieved evidence block in which the predicted answer is most likely.
- **candidate** (`torch.LongTensor` of shape `()`) — The index of the retrieved span candidate in which the predicted answer is most likely.
- **start_pos** (`torch.IntTensor` of shape `()`) — Predicted answer starting position in *RealmReader*'s inputs.
- **end_pos** (`torch.IntTensor` of shape `()`) — Predicted answer ending position in *RealmReader*'s inputs.
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The [RealmReader](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmReader) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
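As the shapes above suggest, the reader operates on the top retrieved evidence blocks rather than on an arbitrary batch. The snippet below is only a hypothetical illustration of where those dimensions come from; the `google/realm-orqa-nq-reader` checkpoint name is an assumption and not part of the reference above.

```python
>>> from transformers import RealmReader

>>> # Assumed checkpoint name; any RealmReader checkpoint would do.
>>> reader = RealmReader.from_pretrained("google/realm-orqa-nq-reader")
>>> # These config values determine the `reader_beam_size` / `searcher_beam_size` dimensions documented above.
>>> reader.config.reader_beam_size, reader.config.searcher_beam_size
```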
## RealmForOpenQA

### class transformers.RealmForOpenQA

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/realm/modeling_realm.py#L1735)

`( config, retriever = None )`

Parameters:

- **config** ([RealmConfig](/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

`RealmForOpenQA` for end-to-end open domain question answering. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

#### block_embedding_to

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/realm/modeling_realm.py#L1758)

`( device )`

Parameters:

- **device** (`str` or `torch.device`) — The device to which `self.block_emb` will be sent.

Send `self.block_emb` to a specific device.
class="opacity-60"></span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmForOpenQA.block_embedding_to.device" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmForOpenQA.block_embedding_to.device"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>device</strong> (<code>str</code> or <code>torch.device</code>) — The device to which <code>self.block_emb</code> will be sent.<!-- HTML_TAG_END --> </span></span> </li></ul> </div></div> <p data-svelte-h="svelte-4rmqa9">Send <code>self.block_emb</code> to a specific device.</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.RealmForOpenQA.forward"><!-- HTML_TAG_START --><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>forward</span></h4><!-- HTML_TAG_END --> <a id="transformers.RealmForOpenQA.forward" 
class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.RealmForOpenQA.forward"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/realm/modeling_realm.py#L1768" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids<span class="opacity-60">: typing.Optional[torch.LongTensor]</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_type_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">answer_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><!-- HTML_TAG_START --><span><code>transformers.models.realm.modeling_realm.RealmForOpenQAOutput</code> or <code>tuple(torch.FloatTensor)</code></span><!-- HTML_TAG_END --></span></p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmForOpenQA.forward.input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmForOpenQA.forward.input_ids"><span><svg class="text-smd" 
xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>input_ids</strong> (<code>torch.LongTensor</code> of shape <code>(1, sequence_length)</code>) — Indices of input sequence tokens in the vocabulary.<p></p> <p>Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p> <p><a href="../glossary#input-ids">What are input IDs?</a><!-- HTML_TAG_END --> </p></span></span></li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmForOpenQA.forward.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmForOpenQA.forward.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>attention_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(1, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a><!-- HTML_TAG_END --> </p></span></span></li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmForOpenQA.forward.token_type_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmForOpenQA.forward.token_type_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>token_type_ids</strong> (<code>torch.LongTensor</code> of shape <code>(1, sequence_length)</code>, <em>optional</em>) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in <code>[0, 1]</code>:<p></p> <ul> <li>0 corresponds to a <em>sentence A</em> token,</li> <li>1 corresponds to a <em>sentence B</em> token (should not be used in this model by design).</li> </ul> <p><a href="../glossary#token-type-ids">What are token type IDs?</a><!-- HTML_TAG_END --> </p></span></span></li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmForOpenQA.forward.answer_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmForOpenQA.forward.answer_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>answer_ids</strong> (<code>list</code> of shape <code>(num_answers, answer_length)</code>, <em>optional</em>) — Answer ids for computing the marginal log-likelihood loss. 
Indices should be in <code>[-1, 0, ..., config.vocab_size]</code> (see <code>input_ids</code> docstring) Tokens with indices set to <code>-1</code> are ignored (masked), the loss is only computed for the tokens with labels in <code>[0, ..., config.vocab_size]</code><!-- HTML_TAG_END --> </span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RealmForOpenQA.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmForOpenQA.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><!-- HTML_TAG_START --><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.<!-- HTML_TAG_END --> </span></span> </li></ul> <div id="transformers.RealmForOpenQA.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <!-- HTML_TAG_START --> <p><code>transformers.models.realm.modeling_realm.RealmForOpenQAOutput</code> or <code>tuple(torch.FloatTensor)</code></p> <!-- HTML_TAG_END --> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"><!-- HTML_TAG_START --> </p><p>A <code>transformers.models.realm.modeling_realm.RealmForOpenQAOutput</code> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmConfig">RealmConfig</a>) and inputs.</p> <ul> <li><strong>reader_output</strong> (<code>dict</code>) — Reader output.</li> <li><strong>predicted_answer_ids</strong> (<code>torch.LongTensor</code> of shape <code>(answer_sequence_length)</code>) — Predicted answer ids.</li> </ul> <!-- HTML_TAG_END --><p></p> </div></div> <p data-svelte-h="svelte-9au21w">The <a href="/docs/transformers/v4.34.0/en/model_doc/realm#transformers.RealmForOpenQA">RealmForOpenQA</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this 
since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.RealmForOpenQA.forward.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RealmForOpenQA.forward.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> RealmForOpenQA, RealmRetriever, AutoTokenizer <span class="hljs-meta">&gt;&gt;&gt; </span>retriever = RealmRetriever.from_pretrained(<span class="hljs-string">"google/realm-orqa-nq-openqa"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"google/realm-orqa-nq-openqa"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = RealmForOpenQA.from_pretrained(<span class="hljs-string">"google/realm-orqa-nq-openqa"</span>, retriever=retriever) <span class="hljs-meta">&gt;&gt;&gt; </span>question = <span class="hljs-string">"Who is the pioneer in modern computer science?"</span> <span class="hljs-meta">&gt;&gt;&gt; </span>question_ids = tokenizer([question], return_tensors=<span class="hljs-string">"pt"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>answer_ids = tokenizer( 
<span class="hljs-meta">... </span> [<span class="hljs-string">"alan mathison turing"</span>], <span class="hljs-meta">... </span> add_special_tokens=<span class="hljs-literal">False</span>, <span class="hljs-meta">... </span> return_token_type_ids=<span class="hljs-literal">False</span>, <span class="hljs-meta">... </span> return_attention_mask=<span class="hljs-literal">False</span>, <span class="hljs-meta">... </span>).input_ids <span class="hljs-meta">&gt;&gt;&gt; </span>reader_output, predicted_answer_ids = model(**question_ids, answer_ids=answer_ids, return_dict=<span class="hljs-literal">False</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>predicted_answer = tokenizer.decode(predicted_answer_ids) <span class="hljs-meta">&gt;&gt;&gt; </span>loss = reader_output.loss<!-- HTML_TAG_END --></pre></div></div></div></div> <p></p> <script> { __sveltekit_1yybmhh = { assets: "/docs/transformers/v4.34.0/en", base: "/docs/transformers/v4.34.0/en", env: {} }; const element = document.currentScript.parentElement; const data = [null,null]; Promise.all([ import("/docs/transformers/v4.34.0/en/_app/immutable/entry/start.c2db227a.js"), import("/docs/transformers/v4.34.0/en/_app/immutable/entry/app.879d9b87.js") ]).then(([kit, app]) => { kit.start(app, element, { node_ids: [0, 214], data, form: null, error: null }); }); } </script> <!-- HTML_TAG_END --></div> <div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/rag" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>RAG</a> <a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/reformer" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">Reformer<span class="ml-2 translate-y-px">→</span></a></div></div></div> <div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" 
2023-10-05T13:33:49.633Z
Vision Encoder Decoder Models
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderConfig
# Vision Encoder Decoder Models ## Overview The [VisionEncoderDecoderModel](/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderModel) can be used to initialize an image-to-text model with any pretrained Transformer-based vision model as the encoder (_e.g._ [ViT](vit), [BEiT](beit), [DeiT](deit), [Swin](swin)) and any pretrained language model as the decoder (_e.g._ [RoBERTa](roberta), [GPT2](gpt2), [BERT](bert), [DistilBERT](distilbert)). The effectiveness of initializing image-to-text-sequence models with pretrained checkpoints has been shown in (for example) [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei. After such a [VisionEncoderDecoderModel](/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderModel) has been trained/fine-tuned, it can be saved/loaded just like any other models (see the examples below for more information). An example application is image captioning, in which the encoder is used to encode the image, after which an autoregressive language model generates the caption. Another example is optical character recognition. Refer to [TrOCR](trocr), which is an instance of [VisionEncoderDecoderModel](/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderModel). ## Randomly initializing `VisionEncoderDecoderModel` from model configurations. [VisionEncoderDecoderModel](/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderModel) can be randomly initialized from an encoder and a decoder config. In the following example, we show how to do this using the default [ViTModel](/docs/transformers/v4.34.0/en/model_doc/vit#transformers.ViTModel) configuration for the encoder and the default `BertForCausalLM` configuration for the decoder. ``` >>> from transformers import BertConfig, ViTConfig, VisionEncoderDecoderConfig, VisionEncoderDecoderModel >>> config_encoder = ViTConfig() >>> config_decoder = BertConfig() >>> config = VisionEncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder) >>> model = VisionEncoderDecoderModel(config=config) ``` ## Initialising `VisionEncoderDecoderModel` from a pretrained encoder and a pretrained decoder. [VisionEncoderDecoderModel](/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderModel) can be initialized from a pretrained encoder checkpoint and a pretrained decoder checkpoint. Note that any pretrained Transformer-based vision model, _e.g._ [Swin](swin), can serve as the encoder and both pretrained auto-encoding models, _e.g._ BERT, pretrained causal language models, _e.g._ GPT2, as well as the pretrained decoder part of sequence-to-sequence models, _e.g._ decoder of BART, can be used as the decoder. Depending on which architecture you choose as the decoder, the cross-attention layers might be randomly initialized. Initializing [VisionEncoderDecoderModel](/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderModel) from a pretrained encoder and decoder checkpoint requires the model to be fine-tuned on a downstream task, as has been shown in [the _Warm-starting-encoder-decoder blog post_](https://huggingface.co/blog/warm-starting-encoder-decoder). 
To do so, the `VisionEncoderDecoderModel` class provides a [VisionEncoderDecoderModel.from\_encoder\_decoder\_pretrained()](/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderModel.from_encoder_decoder_pretrained) method. ``` >>> from transformers import VisionEncoderDecoderModel >>> model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained( ... "microsoft/swin-base-patch4-window7-224-in22k", "bert-base-uncased" ... ) ``` ## Loading an existing `VisionEncoderDecoderModel` checkpoint and performing inference. To load fine-tuned checkpoints of the `VisionEncoderDecoderModel` class, [VisionEncoderDecoderModel](/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderModel) provides the `from_pretrained(...)` method just like any other model architecture in Transformers. To perform inference, one uses the `generate` method, which allows you to autoregressively generate text. This method supports various forms of decoding, such as greedy decoding, beam search and multinomial sampling. ``` >>> import requests >>> from PIL import Image >>> from transformers import GPT2TokenizerFast, ViTImageProcessor, VisionEncoderDecoderModel >>> >>> model = VisionEncoderDecoderModel.from_pretrained("nlpconnect/vit-gpt2-image-captioning") >>> tokenizer = GPT2TokenizerFast.from_pretrained("nlpconnect/vit-gpt2-image-captioning") >>> image_processor = ViTImageProcessor.from_pretrained("nlpconnect/vit-gpt2-image-captioning") >>> >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> pixel_values = image_processor(image, return_tensors="pt").pixel_values >>> >>> generated_ids = model.generate(pixel_values) >>> generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] >>> print(generated_text) a cat laying on a blanket next to a cat laying on a bed ``` ## Loading a PyTorch checkpoint into `TFVisionEncoderDecoderModel`. `TFVisionEncoderDecoderModel.from_pretrained()` currently doesn’t support initializing the model from a PyTorch checkpoint. Passing `from_pt=True` to this method will throw an exception. If there are only PyTorch checkpoints for a particular vision encoder-decoder model, a workaround is: ``` >>> from transformers import VisionEncoderDecoderModel, TFVisionEncoderDecoderModel >>> _model = VisionEncoderDecoderModel.from_pretrained("nlpconnect/vit-gpt2-image-captioning") >>> _model.encoder.save_pretrained("./encoder") >>> _model.decoder.save_pretrained("./decoder") >>> model = TFVisionEncoderDecoderModel.from_encoder_decoder_pretrained( ... "./encoder", "./decoder", encoder_from_pt=True, decoder_from_pt=True ... ) >>> >>> model.config = _model.config ``` ## Training Once the model is created, it can be fine-tuned similarly to BART, T5 or any other encoder-decoder model on a dataset of (image, text) pairs. As the example below shows, only 2 inputs are required for the model in order to compute a loss: `pixel_values` (which are the images) and `labels` (which are the `input_ids` of the encoded target sequence). ``` >>> from transformers import ViTImageProcessor, BertTokenizer, VisionEncoderDecoderModel >>> from datasets import load_dataset >>> image_processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k") >>> tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") >>> model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained( ... "google/vit-base-patch16-224-in21k", "bert-base-uncased" ...
) >>> model.config.decoder_start_token_id = tokenizer.cls_token_id >>> model.config.pad_token_id = tokenizer.pad_token_id >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> pixel_values = image_processor(image, return_tensors="pt").pixel_values >>> labels = tokenizer( ... "an image of two cats chilling on a couch", ... return_tensors="pt", ... ).input_ids >>> >>> loss = model(pixel_values=pixel_values, labels=labels).loss ``` This model was contributed by [nielsr](https://github.com/nielsrogge). This model’s TensorFlow and Flax versions were contributed by [ydshieh](https://github.com/ydshieh). ## VisionEncoderDecoderConfig ### class transformers.VisionEncoderDecoderConfig [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vision_encoder_decoder/configuration_vision_encoder_decoder.py#L33) ( \*\*kwargs ) Parameters - **kwargs** (_optional_) — Dictionary of keyword arguments. Notably: - **encoder** ([PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig), _optional_) — An instance of a configuration object that defines the encoder config. - **decoder** ([PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig), _optional_) — An instance of a configuration object that defines the decoder config. [VisionEncoderDecoderConfig](/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderConfig) is the configuration class to store the configuration of a [VisionEncoderDecoderModel](/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderModel). It is used to instantiate a Vision-Encoder-Text-Decoder model according to the specified arguments, defining the encoder and decoder configs. Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information. 
Examples: ``` >>> from transformers import BertConfig, ViTConfig, VisionEncoderDecoderConfig, VisionEncoderDecoderModel >>> >>> config_encoder = ViTConfig() >>> config_decoder = BertConfig() >>> config = VisionEncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder) >>> >>> model = VisionEncoderDecoderModel(config=config) >>> >>> config_encoder = model.config.encoder >>> config_decoder = model.config.decoder >>> >>> config_decoder.is_decoder = True >>> config_decoder.add_cross_attention = True >>> >>> model.save_pretrained("my-model") >>> >>> encoder_decoder_config = VisionEncoderDecoderConfig.from_pretrained("my-model") >>> model = VisionEncoderDecoderModel.from_pretrained("my-model", config=encoder_decoder_config) ``` #### from\_encoder\_decoder\_configs [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vision_encoder_decoder/configuration_vision_encoder_decoder.py#L99) ( encoder\_config: PretrainedConfigdecoder\_config: PretrainedConfig\*\*kwargs ) → [VisionEncoderDecoderConfig](/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderConfig) An instance of a configuration object Instantiate a [VisionEncoderDecoderConfig](/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderConfig) (or a derived class) from a pre-trained encoder model configuration and decoder model configuration. ## VisionEncoderDecoderModel ### class transformers.VisionEncoderDecoderModel [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vision_encoder_decoder/modeling_vision_encoder_decoder.py#L151) ( config: typing.Optional\[transformers.configuration\_utils.PretrainedConfig\] = Noneencoder: typing.Optional\[transformers.modeling\_utils.PreTrainedModel\] = Nonedecoder: typing.Optional\[transformers.modeling\_utils.PreTrainedModel\] = None ) Parameters - **config** ([VisionEncoderDecoderConfig](/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. This class can be used to initialize an image-to-text-sequence model with any pretrained vision autoencoding model as the encoder and any pretrained text autoregressive model as the decoder. The encoder is loaded via [from\_pretrained()](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) function and the decoder is loaded via [from\_pretrained()](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) function. Cross-attention layers are automatically added to the decoder and should be fine-tuned on a downstream generative task, like image captioning. The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation tasks was shown in [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. 
Additionally, in [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) it is shown how leveraging large pretrained vision models for optical character recognition (OCR) yields a significant performance improvement. After such a Vision-Encoder-Text-Decoder model has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples for more information). This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.). This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. [VisionEncoderDecoderModel](/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderModel) is a generic model class that will be instantiated as a transformer architecture with one of the base vision model classes of the library as encoder and another one as decoder when created with the `AutoModel.from_pretrained()` class method for the encoder and the `AutoModelForCausalLM.from_pretrained()` class method for the decoder. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vision_encoder_decoder/modeling_vision_encoder_decoder.py#L519) ( pixel\_values: typing.Optional\[torch.FloatTensor\] = Nonedecoder\_input\_ids: typing.Optional\[torch.LongTensor\] = Nonedecoder\_attention\_mask: typing.Optional\[torch.BoolTensor\] = Noneencoder\_outputs: typing.Optional\[typing.Tuple\[torch.FloatTensor\]\] = Nonepast\_key\_values: typing.Optional\[typing.Tuple\[typing.Tuple\[torch.FloatTensor\]\]\] = Nonedecoder\_inputs\_embeds: typing.Optional\[torch.FloatTensor\] = Nonelabels: typing.Optional\[torch.LongTensor\] = Noneuse\_cache: typing.Optional\[bool\] = Noneoutput\_attentions: typing.Optional\[bool\] = Noneoutput\_hidden\_states: typing.Optional\[bool\] = Nonereturn\_dict: typing.Optional\[bool\] = None\*\*kwargs ) → [transformers.modeling\_outputs.Seq2SeqLMOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.Seq2SeqLMOutput) or `tuple(torch.FloatTensor)` The [VisionEncoderDecoderModel](/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderModel) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Examples: ``` >>> from transformers import AutoProcessor, VisionEncoderDecoderModel >>> import requests >>> from PIL import Image >>> import torch >>> processor = AutoProcessor.from_pretrained("microsoft/trocr-base-handwritten") >>> model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten") >>> >>> url = "https://fki.tic.heia-fr.ch/static/img/a01-122-02.jpg" >>> image = Image.open(requests.get(url, stream=True).raw).convert("RGB") >>> >>> model.config.decoder_start_token_id = processor.tokenizer.cls_token_id >>> model.config.pad_token_id = processor.tokenizer.pad_token_id >>> model.config.vocab_size = model.config.decoder.vocab_size >>> pixel_values = processor(image, return_tensors="pt").pixel_values >>> text = "hello world" >>> labels = processor.tokenizer(text, return_tensors="pt").input_ids >>> outputs = model(pixel_values=pixel_values, labels=labels) >>> loss = outputs.loss >>> >>> generated_ids = model.generate(pixel_values) >>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0] ``` #### from\_encoder\_decoder\_pretrained [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vision_encoder_decoder/modeling_vision_encoder_decoder.py#L365) ( encoder\_pretrained\_model\_name\_or\_path: str = Nonedecoder\_pretrained\_model\_name\_or\_path: str = None\*model\_args\*\*kwargs ) Instantiate an encoder and a decoder from one or two base classes of the library from pretrained model checkpoints. The model is set in evaluation mode by default using `model.eval()` (Dropout modules are deactivated). To train the model, you need to first set it back in training mode with `model.train()`. Example: ``` >>> from transformers import VisionEncoderDecoderModel >>> >>> model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained( ... "google/vit-base-patch16-224-in21k", "bert-base-uncased" ... ) >>> >>> model.save_pretrained("./vit-bert") >>> >>> model = VisionEncoderDecoderModel.from_pretrained("./vit-bert") ``` ## TFVisionEncoderDecoderModel ### class transformers.TFVisionEncoderDecoderModel [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vision_encoder_decoder/modeling_tf_vision_encoder_decoder.py#L176) ( \*args\*\*kwargs ) Parameters - **config** ([VisionEncoderDecoderConfig](/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel.from_pretrained) method to load the model weights. This class can be used to initialize an image-to-text-sequence model with any pretrained vision autoencoding model as the encoder and any pretrained text autoregressive model as the decoder. The encoder is loaded via [from\_pretrained()](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) function and the decoder is loaded via [from\_pretrained()](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) function. Cross-attention layers are automatically added to the decoder and should be fine-tuned on a downstream generative task, like image captioning. 
The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation tasks was shown in [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. Additionally, in [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) it is shown how leveraging large pretrained vision models for optical character recognition (OCR) yields a significant performance improvement. After such a Vision-Encoder-Text-Decoder model has been trained/fine-tuned, it can be saved/loaded just like any other models (see the examples for more information). This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. [TFVisionEncoderDecoderModel](/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder#transformers.TFVisionEncoderDecoderModel) is a generic model class that will be instantiated as a transformer architecture with one of the base vision model classes of the library as encoder and another one of the base model classes as decoder when created with the [from\_pretrained()](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) class method for the encoder and [from\_pretrained()](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) class method for the decoder. #### call [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vision_encoder_decoder/modeling_tf_vision_encoder_decoder.py#L486) ( pixel\_values: np.ndarray | tf.Tensor | None = Nonedecoder\_input\_ids: np.ndarray | tf.Tensor | None = Nonedecoder\_attention\_mask: np.ndarray | tf.Tensor | None = Noneencoder\_outputs: Optional\[Union\[Tuple, TFBaseModelOutput\]\] = Nonepast\_key\_values: Optional\[Tuple\[Tuple\[Union\[np.ndarray, tf.Tensor\]\]\]\] = Nonedecoder\_inputs\_embeds: np.ndarray | tf.Tensor | None = Nonelabels: np.ndarray | tf.Tensor | None = Noneuse\_cache: Optional\[bool\] = Noneoutput\_attentions: Optional\[bool\] = Noneoutput\_hidden\_states: Optional\[bool\] = Nonereturn\_dict: Optional\[bool\] = Nonetraining: bool = False\*\*kwargs ) → [transformers.modeling\_tf\_outputs.TFSeq2SeqLMOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFSeq2SeqLMOutput) or `tuple(tf.Tensor)` The [TFVisionEncoderDecoderModel](/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder#transformers.TFVisionEncoderDecoderModel) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Examples: ``` >>> from transformers import AutoImageProcessor, AutoTokenizer, TFVisionEncoderDecoderModel >>> from PIL import Image >>> import requests >>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k") >>> decoder_tokenizer = AutoTokenizer.from_pretrained("gpt2") >>> >>> model = TFVisionEncoderDecoderModel.from_encoder_decoder_pretrained( ... "google/vit-base-patch16-224-in21k", "gpt2" ... ) >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> img = Image.open(requests.get(url, stream=True).raw) >>> >>> pixel_values = image_processor(images=img, return_tensors="tf").pixel_values >>> decoder_input_ids = decoder_tokenizer("Linda Davis", return_tensors="tf").input_ids >>> outputs = model(pixel_values=pixel_values, decoder_input_ids=decoder_input_ids) >>> >>> outputs = model(pixel_values=pixel_values, decoder_input_ids=decoder_input_ids, labels=decoder_input_ids) >>> loss, logits = outputs.loss, outputs.logits >>> >>> model.save_pretrained("vit-gpt2") >>> model = TFVisionEncoderDecoderModel.from_pretrained("vit-gpt2") >>> >>> generated = model.generate(pixel_values, decoder_start_token_id=model.config.decoder.bos_token_id) ``` #### from\_encoder\_decoder\_pretrained [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vision_encoder_decoder/modeling_tf_vision_encoder_decoder.py#L338) ( encoder\_pretrained\_model\_name\_or\_path: str = Nonedecoder\_pretrained\_model\_name\_or\_path: str = None\*model\_args\*\*kwargs ) Instantiate an encoder and a decoder from one or two base classes of the library from pretrained model checkpoints. Example: ``` >>> from transformers import TFVisionEncoderDecoderModel >>> >>> model = TFVisionEncoderDecoderModel.from_encoder_decoder_pretrained( ... "google/vit-base-patch16-224-in21k", "bert-base-uncased" ... ) >>> >>> model.save_pretrained("./vit-bert") >>> >>> model = TFVisionEncoderDecoderModel.from_pretrained("./vit-bert") ``` ## FlaxVisionEncoderDecoderModel ### class transformers.FlaxVisionEncoderDecoderModel [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vision_encoder_decoder/modeling_flax_vision_encoder_decoder.py#L268) ( config: VisionEncoderDecoderConfiginput\_shape: typing.Optional\[typing.Tuple\] = Noneseed: int = 0dtype: dtype = <class 'jax.numpy.float32'>\_do\_init: bool = True\*\*kwargs ) Parameters - **config** ([VisionEncoderDecoderConfig](/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained) method to load the model weights. - **dtype** (`jax.numpy.dtype`, _optional_, defaults to `jax.numpy.float32`) — The data type of the computation. Can be one of `jax.numpy.float32`, `jax.numpy.float16` (on GPUs) and `jax.numpy.bfloat16` (on TPUs). This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given `dtype`. 
**Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.** If you wish to change the dtype of the model parameters, see [to\_fp16()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_fp16) and [to\_bf16()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_bf16).

This class can be used to initialize an image-to-text-sequence model with any pretrained vision autoencoding model as the encoder and any pretrained text autoregressive model as the decoder. The encoder is loaded via the [from\_pretrained()](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) function and the decoder is loaded via the [from\_pretrained()](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) function. Cross-attention layers are automatically added to the decoder and should be fine-tuned on a downstream generative task, like image captioning.

The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation tasks was shown in [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, and Aliaksei Severyn. Additionally, [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) shows that leveraging large pretrained vision models for optical character recognition (OCR) yields a significant performance improvement.

After such a Vision-Encoder-Text-Decoder model has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples for more information).

This model inherits from [FlaxPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)

This model is also a Flax Linen [flax.nn.Module](https://flax.readthedocs.io/en/latest/_autosummary/flax.nn.module.html) subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.

[FlaxVisionEncoderDecoderModel](/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder#transformers.FlaxVisionEncoderDecoderModel) is a generic model class that will be instantiated as a transformer architecture with the module (flax.nn.Module) of one of the base vision model classes of the library as encoder module and another one as decoder module when created with the `FlaxAutoModel.from_pretrained()` class method for the encoder and the `FlaxAutoModelForCausalLM.from_pretrained()` class method for the decoder.
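As an illustration of the `dtype` behaviour described above, the following minimal sketch loads a checkpoint with `bfloat16` computation and then additionally casts the parameters with `to_bf16()`. The `./vit-gpt2` directory is an assumed local path containing a model previously saved with `save_pretrained()`.

```
>>> import jax.numpy as jnp
>>> from transformers import FlaxVisionEncoderDecoderModel

>>> # run the computation in bfloat16; the stored parameters remain in float32
>>> model = FlaxVisionEncoderDecoderModel.from_pretrained("./vit-gpt2", dtype=jnp.bfloat16)

>>> # optionally cast the parameters themselves to bfloat16 as well
>>> model.params = model.to_bf16(model.params)
```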
#### \_\_call\_\_

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vision_encoder_decoder/modeling_flax_vision_encoder_decoder.py#L598)

( pixel\_values: Array, decoder\_input\_ids: typing.Optional\[jax.Array\] = None, decoder\_attention\_mask: typing.Optional\[jax.Array\] = None, decoder\_position\_ids: typing.Optional\[jax.Array\] = None, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None, train: bool = False, params: dict = None, dropout\_rng: PRNGKey = None ) → [transformers.modeling\_flax\_outputs.FlaxSeq2SeqLMOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput) or `tuple(jnp.ndarray)`

The [FlaxVisionEncoderDecoderModel](/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder#transformers.FlaxVisionEncoderDecoderModel) forward method, overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Examples:

```
>>> from transformers import FlaxVisionEncoderDecoderModel, AutoImageProcessor, AutoTokenizer
>>> from PIL import Image
>>> import requests

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")

>>> # load the output (decoder) tokenizer
>>> tokenizer_output = AutoTokenizer.from_pretrained("gpt2")

>>> # initialize a ViT + GPT-2 model from pretrained encoder and decoder checkpoints
>>> # (the cross-attention weights of the decoder are randomly initialized)
>>> model = FlaxVisionEncoderDecoderModel.from_encoder_decoder_pretrained(
...     "google/vit-base-patch16-224-in21k", "gpt2"
... )

>>> pixel_values = image_processor(images=image, return_tensors="np").pixel_values

>>> # use GPT-2's eos_token as the pad as well as the eos token
>>> model.config.eos_token_id = model.config.decoder.eos_token_id
>>> model.config.pad_token_id = model.config.eos_token_id

>>> # generation
>>> sequences = model.generate(pixel_values, num_beams=4, max_length=12).sequences
>>> captions = tokenizer_output.batch_decode(sequences, skip_special_tokens=True)
```

#### from\_encoder\_decoder\_pretrained

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vision_encoder_decoder/modeling_flax_vision_encoder_decoder.py#L723)

( encoder\_pretrained\_model\_name\_or\_path: typing.Union\[str, os.PathLike, NoneType\] = None, decoder\_pretrained\_model\_name\_or\_path: typing.Union\[str, os.PathLike, NoneType\] = None, \*model\_args, \*\*kwargs )

Instantiate an encoder and a decoder from one or two base classes of the library from pretrained model checkpoints.

Example:

```
>>> from transformers import FlaxVisionEncoderDecoderModel

>>> # initialize a ViT + GPT-2 model from pretrained encoder and decoder checkpoints
>>> # (the cross-attention weights of the decoder are randomly initialized)
>>> model = FlaxVisionEncoderDecoderModel.from_encoder_decoder_pretrained(
...     "google/vit-base-patch16-224-in21k", "gpt2"
... )
>>> # save the model after training/fine-tuning
>>> model.save_pretrained("./vit-gpt2")
>>> # load the fine-tuned model
>>> model = FlaxVisionEncoderDecoderModel.from_pretrained("./vit-gpt2")
```
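A checkpoint saved this way can also be reloaded across frameworks. The sketch below is illustrative and assumes `./vit-gpt2` was produced by the `save_pretrained()` call above and that PyTorch is installed; `from_flax=True` asks the PyTorch class to convert the Flax weights on the fly.

```
>>> from transformers import VisionEncoderDecoderModel

>>> # load the Flax checkpoint saved above into the PyTorch class
>>> model_pt = VisionEncoderDecoderModel.from_pretrained("./vit-gpt2", from_flax=True)
```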
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/
model_doc/wavlm&quot;},{&quot;title&quot;:&quot;Whisper&quot;,&quot;id&quot;:&quot;model_doc/whisper&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/whisper&quot;},{&quot;title&quot;:&quot;XLS-R&quot;,&quot;id&quot;:&quot;model_doc/xls_r&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xls_r&quot;},{&quot;title&quot;:&quot;XLSR-Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/xlsr_wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2&quot;}]},{&quot;title&quot;:&quot;Multimodal models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALIGN&quot;,&quot;id&quot;:&quot;model_doc/align&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/align&quot;},{&quot;title&quot;:&quot;AltCLIP&quot;,&quot;id&quot;:&quot;model_doc/altclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/altclip&quot;},{&quot;title&quot;:&quot;BLIP&quot;,&quot;id&quot;:&quot;model_doc/blip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip&quot;},{&quot;title&quot;:&quot;BLIP-2&quot;,&quot;id&quot;:&quot;model_doc/blip-2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip-2&quot;},{&quot;title&quot;:&quot;BridgeTower&quot;,&quot;id&quot;:&quot;model_doc/bridgetower&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bridgetower&quot;},{&quot;title&quot;:&quot;BROS&quot;,&quot;id&quot;:&quot;model_doc/bros&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bros&quot;},{&quot;title&quot;:&quot;Chinese-CLIP&quot;,&quot;id&quot;:&quot;model_doc/chinese_clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/chinese_clip&quot;},{&quot;title&quot;:&quot;CLIP&quot;,&quot;id&quot;:&quot;model_doc/clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clip&quot;},{&quot;title&quot;:&quot;CLIPSeg&quot;,&quot;id&quot;:&quot;model_doc/clipseg&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clipseg&quot;},{&quot;title&quot;:&quot;Data2Vec&quot;,&quot;id&quot;:&quot;model_doc/data2vec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/data2vec&quot;},{&quot;title&quot;:&quot;DePlot&quot;,&quot;id&quot;:&quot;model_doc/deplot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deplot&quot;},{&quot;title&quot;:&quot;Donut&quot;,&quot;id&quot;:&quot;model_doc/donut&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/donut&quot;},{&quot;title&quot;:&quot;FLAVA&quot;,&quot;id&quot;:&quot;model_doc/flava&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flava&quot;},{&quot;title&quot;:&quot;GIT&quot;,&quot;id&quot;:&quot;model_doc/git&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/git&quot;},{&quot;title&quot;:&quot;GroupViT&quot;,&quot;id&quot;:&quot;model_doc/groupvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/groupvit&quot;},{&quot;title&quot;:&quot;IDEFICS&quot;,&quot;id&quot;:&quot;model_doc/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/idefics&quot;},{&quot;title&quot;:&quot;InstructBLIP&quot;,&quot;id&quot;:&quot;model_doc/instructblip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/instructblip&quot;},{&quot;title&quot;:&quot;LayoutLM&quot;,&quot;id&quot;:&quot;model_doc/layoutlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlm&quot;},{&quot;title&quot;:&quot;LayoutLMV2&quot;,&quot;id
&quot;:&quot;model_doc/layoutlmv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv2&quot;},{&quot;title&quot;:&quot;LayoutLMV3&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv3&quot;},{&quot;title&quot;:&quot;LayoutXLM&quot;,&quot;id&quot;:&quot;model_doc/layoutxlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutxlm&quot;},{&quot;title&quot;:&quot;LiLT&quot;,&quot;id&quot;:&quot;model_doc/lilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lilt&quot;},{&quot;title&quot;:&quot;LXMERT&quot;,&quot;id&quot;:&quot;model_doc/lxmert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lxmert&quot;},{&quot;title&quot;:&quot;MatCha&quot;,&quot;id&quot;:&quot;model_doc/matcha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/matcha&quot;},{&quot;title&quot;:&quot;MGP-STR&quot;,&quot;id&quot;:&quot;model_doc/mgp-str&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mgp-str&quot;},{&quot;title&quot;:&quot;Nougat&quot;,&quot;id&quot;:&quot;model_doc/nougat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nougat&quot;},{&quot;title&quot;:&quot;OneFormer&quot;,&quot;id&quot;:&quot;model_doc/oneformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/oneformer&quot;},{&quot;title&quot;:&quot;OWL-ViT&quot;,&quot;id&quot;:&quot;model_doc/owlvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/owlvit&quot;},{&quot;title&quot;:&quot;Perceiver&quot;,&quot;id&quot;:&quot;model_doc/perceiver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/perceiver&quot;},{&quot;title&quot;:&quot;Pix2Struct&quot;,&quot;id&quot;:&quot;model_doc/pix2struct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pix2struct&quot;},{&quot;title&quot;:&quot;Segment Anything&quot;,&quot;id&quot;:&quot;model_doc/sam&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sam&quot;},{&quot;title&quot;:&quot;Speech Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/speech-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder&quot;},{&quot;title&quot;:&quot;TAPAS&quot;,&quot;id&quot;:&quot;model_doc/tapas&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapas&quot;},{&quot;title&quot;:&quot;TrOCR&quot;,&quot;id&quot;:&quot;model_doc/trocr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trocr&quot;},{&quot;title&quot;:&quot;TVLT&quot;,&quot;id&quot;:&quot;model_doc/tvlt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tvlt&quot;},{&quot;title&quot;:&quot;ViLT&quot;,&quot;id&quot;:&quot;model_doc/vilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vilt&quot;},{&quot;title&quot;:&quot;Vision Encoder Decoder Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/vision-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder&quot;},{&quot;title&quot;:&quot;Vision Text Dual 
Encoder&quot;,&quot;id&quot;:&quot;model_doc/vision-text-dual-encoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder&quot;},{&quot;title&quot;:&quot;VisualBERT&quot;,&quot;id&quot;:&quot;model_doc/visual_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/visual_bert&quot;},{&quot;title&quot;:&quot;X-CLIP&quot;,&quot;id&quot;:&quot;model_doc/xclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xclip&quot;}]},{&quot;title&quot;:&quot;Reinforcement learning models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Decision Transformer&quot;,&quot;id&quot;:&quot;model_doc/decision_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/decision_transformer&quot;},{&quot;title&quot;:&quot;Trajectory Transformer&quot;,&quot;id&quot;:&quot;model_doc/trajectory_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer&quot;}]},{&quot;title&quot;:&quot;Time series models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Autoformer&quot;,&quot;id&quot;:&quot;model_doc/autoformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/autoformer&quot;},{&quot;title&quot;:&quot;Informer&quot;,&quot;id&quot;:&quot;model_doc/informer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/informer&quot;},{&quot;title&quot;:&quot;Time Series Transformer&quot;,&quot;id&quot;:&quot;model_doc/time_series_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/time_series_transformer&quot;}]},{&quot;title&quot;:&quot;Graph models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Graphormer&quot;,&quot;id&quot;:&quot;model_doc/graphormer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/graphormer&quot;}]}]},{&quot;title&quot;:&quot;Internal Helpers&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Custom Layers and Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/modeling_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/modeling_utils&quot;},{&quot;title&quot;:&quot;Utilities for pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/pipelines_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/pipelines_utils&quot;},{&quot;title&quot;:&quot;Utilities for Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/tokenization_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/tokenization_utils&quot;},{&quot;title&quot;:&quot;Utilities for Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/trainer_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/trainer_utils&quot;},{&quot;title&quot;:&quot;Utilities for Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/generation_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/generation_utils&quot;},{&quot;title&quot;:&quot;Utilities for Image Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/image_processing_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/image_processing_utils&quot;},{&quot;title&quot;:&quot;Utilities for Audio 
processing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/audio_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/audio_utils&quot;},{&quot;title&quot;:&quot;General Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/file_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/file_utils&quot;},{&quot;title&quot;:&quot;Utilities for Time Series&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/time_series_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/time_series_utils&quot;}]}]}],&quot;chapterId&quot;:&quot;model_doc/vision-encoder-decoder&quot;,&quot;docType&quot;:&quot;docs&quot;,&quot;isLoggedIn&quot;:false,&quot;lang&quot;:&quot;en&quot;,&quot;langs&quot;:[&quot;de&quot;,&quot;en&quot;,&quot;es&quot;,&quot;fr&quot;,&quot;it&quot;,&quot;ko&quot;,&quot;pt&quot;,&quot;zh&quot;],&quot;library&quot;:&quot;transformers&quot;,&quot;theme&quot;:&quot;light&quot;,&quot;version&quot;:&quot;v4.34.0&quot;,&quot;versions&quot;:[{&quot;version&quot;:&quot;main&quot;},{&quot;version&quot;:&quot;v4.34.0&quot;},{&quot;version&quot;:&quot;v4.33.3&quot;},{&quot;version&quot;:&quot;v4.33.2&quot;},{&quot;version&quot;:&quot;v4.33.0&quot;},{&quot;version&quot;:&quot;v4.32.1&quot;},{&quot;version&quot;:&quot;v4.32.0&quot;},{&quot;version&quot;:&quot;v4.31.0&quot;},{&quot;version&quot;:&quot;v4.30.0&quot;},{&quot;version&quot;:&quot;v4.29.1&quot;},{&quot;version&quot;:&quot;v4.29.0&quot;},{&quot;version&quot;:&quot;v4.28.1&quot;},{&quot;version&quot;:&quot;v4.28.0&quot;},{&quot;version&quot;:&quot;v4.27.2&quot;},{&quot;version&quot;:&quot;v4.27.1&quot;},{&quot;version&quot;:&quot;v4.27.0&quot;},{&quot;version&quot;:&quot;v4.26.1&quot;},{&quot;version&quot;:&quot;v4.26.0&quot;},{&quot;version&quot;:&quot;v4.25.1&quot;},{&quot;version&quot;:&quot;v4.24.0&quot;},{&quot;version&quot;:&quot;v4.23.1&quot;},{&quot;version&quot;:&quot;v4.23.0&quot;},{&quot;version&quot;:&quot;v4.22.2&quot;},{&quot;version&quot;:&quot;v4.22.1&quot;},{&quot;version&quot;:&quot;v4.22.0&quot;},{&quot;version&quot;:&quot;v4.21.3&quot;},{&quot;version&quot;:&quot;v4.21.2&quot;},{&quot;version&quot;:&quot;v4.21.1&quot;},{&quot;version&quot;:&quot;v4.21.0&quot;},{&quot;version&quot;:&quot;v4.20.1&quot;},{&quot;version&quot;:&quot;v4.20.0&quot;},{&quot;version&quot;:&quot;v4.19.4&quot;},{&quot;version&quot;:&quot;v4.19.3&quot;},{&quot;version&quot;:&quot;v4.19.2&quot;},{&quot;version&quot;:&quot;v4.19.0&quot;},{&quot;version&quot;:&quot;v4.18.0&quot;},{&quot;version&quot;:&quot;v4.17.0&quot;},{&quot;version&quot;:&quot;v4.16.2&quot;},{&quot;version&quot;:&quot;v4.16.1&quot;},{&quot;version&quot;:&quot;v4.16.0&quot;},{&quot;version&quot;:&quot;v4.15.0&quot;},{&quot;version&quot;:&quot;v4.14.1&quot;},{&quot;version&quot;:&quot;v4.13.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.5&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.4&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.10.1&quot;},{&quot;sphinx&quot;:true,&quot;version
&quot;:&quot;v4.10.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.10.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.0.0&quot;},{&quot;version&
quot;:&quot;doc-builder-html&quot;}],&quot;title&quot;:&quot;Vision Encoder Decoder Models&quot;}" data-target="SideMenu"> <div class="z-2 w-full flex-none lg:block lg:h-screen lg:w-[270px] 2xl:w-[300px] false"><div class="shadow-alternate flex h-16 w-full items-center rounded-b-xl border-b bg-white text-lg leading-tight lg:hidden"><div class="flex flex-1 cursor-pointer flex-col justify-center self-stretch pl-6"><p class="text-sm text-gray-400 first-letter:capitalize">Transformers documentation</p> <div class="flex items-center"><p class="font-semibold">Vision Encoder Decoder Models</p> <svg class="text-xl false" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></div></div> <button class="hover:shadow-alternate group ml-auto mr-6 inline-flex flex-none cursor-pointer rounded-xl border p-2"><svg class="text-gray-500 group-hover:text-gray-700" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg></button></div> <div class="hidden h-32 flex-col justify-between border-r border-b bg-white bg-gradient-to-r p-4 lg:flex from-orange-50 to-white dark:from-gray-900 dark:to-gray-950"><div class="relative "><button class=" " type="button"><h1 class="flex items-center text-lg font-bold leading-tight first-letter:capitalize"><div class="mr-1.5 h-1.5 w-1.5 rounded-full bg-orange-500 flex-none"></div> Transformers <span><svg class="opacity-70 " xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></span></h1> </button> </div> <button class="shadow-alternate flex w-full items-center rounded-full border bg-white px-2 py-1 text-left text-sm text-gray-400 ring-indigo-200 hover:bg-indigo-50 hover:ring-2 dark:border-gray-700 dark:ring-yellow-600 dark:hover:bg-gray-900 dark:hover:text-yellow-500"><svg class="flex-none mr-1.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg> <div>Search documentation</div> <span class="ml-auto rounded border border-gray-200 bg-gray-100 px-0.5 text-xs dark:border-gray-800 dark:bg-gray-800"><kbd class="font-sans">⌘K</kbd></span></button> <div class="flex items-center"><select class="form-input mr-1 !mt-0 !w-20 rounded !border border-gray-200 p-1 text-xs uppercase dark:!text-gray-400"><option value="0">main</option><option value="1">v4.34.0</option><option value="2">v4.33.3</option><option value="3">v4.32.1</option><option value="4">v4.31.0</option><option value="5">v4.30.0</option><option value="6">v4.29.1</option><option value="7">v4.28.1</option><option value="8">v4.27.2</option><option value="9">v4.26.1</option><option 
value="10">v4.25.1</option><option value="11">v4.24.0</option><option value="12">v4.23.1</option><option value="13">v4.22.2</option><option value="14">v4.21.3</option><option value="15">v4.20.1</option><option value="16">v4.19.4</option><option value="17">v4.18.0</option><option value="18">v4.17.0</option><option value="19">v4.16.2</option><option value="20">v4.15.0</option><option value="21">v4.14.1</option><option value="22">v4.13.0</option><option value="23">v4.12.5</option><option value="24">v4.11.3</option><option value="25">v4.10.1</option><option value="26">v4.9.2</option><option value="27">v4.8.2</option><option value="28">v4.7.0</option><option value="29">v4.6.0</option><option value="30">v4.5.1</option><option value="31">v4.4.2</option><option value="32">v4.3.3</option><option value="33">v4.2.2</option><option value="34">v4.1.1</option><option value="35">v4.0.1</option><option value="36">v3.5.1</option><option value="37">v3.4.0</option><option value="38">v3.3.1</option><option value="39">v3.2.0</option><option value="40">v3.1.0</option><option value="41">v3.0.2</option><option value="42">v2.11.0</option><option value="43">v2.10.0</option><option value="44">v2.9.1</option><option value="45">v2.8.0</option><option value="46">v2.7.0</option><option value="47">v2.6.0</option><option value="48">v2.5.1</option><option value="49">v2.4.1</option><option value="50">v2.3.0</option><option value="51">v2.2.2</option><option value="52">v2.1.1</option><option value="53">v2.0.0</option><option value="54">v1.2.0</option><option value="55">v1.1.0</option><option value="56">v1.0.0</option><option value="57">doc-builder-html</option></select> <select class="form-input mr-1 rounded border-gray-200 p-1 text-xs dark:!text-gray-400 !w-12 !mt-0 !border"><option value="de">DE</option><option value="en">EN</option><option value="es">ES</option><option value="fr">FR</option><option value="it">IT</option><option value="ko">KO</option><option value="pt">PT</option><option value="zh">ZH</option></select> <div class="relative inline-block"><button class="rounded-full border border-gray-100 py-1 pl-2 pr-0.5 flex items-center text-sm text-gray-500 bg-white hover:bg-yellow-50 hover:border-yellow-200 dark:hover:bg-gray-800 dark:hover:border-gray-950 " type="button"><svg class="mr-1.5 text-yellow-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24" fill="currentColor"><path d="M6.05 4.14l-.39-.39a.993.993 0 0 0-1.4 0l-.01.01a.984.984 0 0 0 0 1.4l.39.39c.39.39 1.01.39 1.4 0l.01-.01a.984.984 0 0 0 0-1.4zM3.01 10.5H1.99c-.55 0-.99.44-.99.99v.01c0 .55.44.99.99.99H3c.56.01 1-.43 1-.98v-.01c0-.56-.44-1-.99-1zm9-9.95H12c-.56 0-1 .44-1 .99v.96c0 .55.44.99.99.99H12c.56.01 1-.43 1-.98v-.97c0-.55-.44-.99-.99-.99zm7.74 3.21c-.39-.39-1.02-.39-1.41-.01l-.39.39a.984.984 0 0 0 0 1.4l.01.01c.39.39 1.02.39 1.4 0l.39-.39a.984.984 0 0 0 0-1.4zm-1.81 15.1l.39.39a.996.996 0 1 0 1.41-1.41l-.39-.39a.993.993 0 0 0-1.4 0c-.4.4-.4 1.02-.01 1.41zM20 11.49v.01c0 .55.44.99.99.99H22c.55 0 .99-.44.99-.99v-.01c0-.55-.44-.99-.99-.99h-1.01c-.55 0-.99.44-.99.99zM12 5.5c-3.31 0-6 2.69-6 6s2.69 6 6 6s6-2.69 6-6s-2.69-6-6-6zm-.01 16.95H12c.55 0 .99-.44.99-.99v-.96c0-.55-.44-.99-.99-.99h-.01c-.55 0-.99.44-.99.99v.96c0 .55.44.99.99.99zm-7.74-3.21c.39.39 1.02.39 1.41 0l.39-.39a.993.993 0 0 0 0-1.4l-.01-.01a.996.996 0 0 0-1.41 0l-.39.39c-.38.4-.38 1.02.01 1.41z"></path></svg> </button> </div> <a 
href="https://github.com/huggingface/transformers" class="group ml-auto text-xs text-gray-500 hover:text-gray-700 hover:underline dark:hover:text-gray-300"><svg class="inline-block text-gray-500 group-hover:text-gray-700 dark:group-hover:text-gray-300 mr-1.5 -mt-1 w-4 h-4" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1.03em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 250"><path d="M128.001 0C57.317 0 0 57.307 0 128.001c0 56.554 36.676 104.535 87.535 121.46c6.397 1.185 8.746-2.777 8.746-6.158c0-3.052-.12-13.135-.174-23.83c-35.61 7.742-43.124-15.103-43.124-15.103c-5.823-14.795-14.213-18.73-14.213-18.73c-11.613-7.944.876-7.78.876-7.78c12.853.902 19.621 13.19 19.621 13.19c11.417 19.568 29.945 13.911 37.249 10.64c1.149-8.272 4.466-13.92 8.127-17.116c-28.431-3.236-58.318-14.212-58.318-63.258c0-13.975 5-25.394 13.188-34.358c-1.329-3.224-5.71-16.242 1.24-33.874c0 0 10.749-3.44 35.21 13.121c10.21-2.836 21.16-4.258 32.038-4.307c10.878.049 21.837 1.47 32.066 4.307c24.431-16.56 35.165-13.12 35.165-13.12c6.967 17.63 2.584 30.65 1.255 33.873c8.207 8.964 13.173 20.383 13.173 34.358c0 49.163-29.944 59.988-58.447 63.157c4.591 3.972 8.682 11.762 8.682 23.704c0 17.126-.148 30.91-.148 35.126c0 3.407 2.304 7.398 8.792 6.14C219.37 232.5 256 184.537 256 128.002C256 57.307 198.691 0 128.001 0zm-80.06 182.34c-.282.636-1.283.827-2.194.39c-.929-.417-1.45-1.284-1.15-1.922c.276-.655 1.279-.838 2.205-.399c.93.418 1.46 1.293 1.139 1.931zm6.296 5.618c-.61.566-1.804.303-2.614-.591c-.837-.892-.994-2.086-.375-2.66c.63-.566 1.787-.301 2.626.591c.838.903 1 2.088.363 2.66zm4.32 7.188c-.785.545-2.067.034-2.86-1.104c-.784-1.138-.784-2.503.017-3.05c.795-.547 2.058-.055 2.861 1.075c.782 1.157.782 2.522-.019 3.08zm7.304 8.325c-.701.774-2.196.566-3.29-.49c-1.119-1.032-1.43-2.496-.726-3.27c.71-.776 2.213-.558 3.315.49c1.11 1.03 1.45 2.505.701 3.27zm9.442 2.81c-.31 1.003-1.75 1.459-3.199 1.033c-1.448-.439-2.395-1.613-2.103-2.626c.301-1.01 1.747-1.484 3.207-1.028c1.446.436 2.396 1.602 2.095 2.622zm10.744 1.193c.036 1.055-1.193 1.93-2.715 1.95c-1.53.034-2.769-.82-2.786-1.86c0-1.065 1.202-1.932 2.733-1.958c1.522-.03 2.768.818 2.768 1.868zm10.555-.405c.182 1.03-.875 2.088-2.387 2.37c-1.485.271-2.861-.365-3.05-1.386c-.184-1.056.893-2.114 2.376-2.387c1.514-.263 2.868.356 3.061 1.403z" fill="currentColor"></path></svg> 112,792</a></div></div> <nav class="top-32 hidden lg:flex absolute bottom-0 left-0 w-full flex-col overflow-y-auto border-r px-4 pt-3 pb-16 text-[0.95rem] lg:w-[270px] 2xl:w-[300px]"> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Get started</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/index">🤗 Transformers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/quicktour">Quick tour </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 
first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/installation">Installation </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Tutorials</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pipeline_tutorial">Run inference with pipelines </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/autoclass_tutorial">Write portable code with AutoClass </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/preprocessing">Preprocess data </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/training">Fine-tune a pretrained model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/run_scripts">Train with a script </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/accelerate">Set up distributed training with 🤗 Accelerate </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/peft">Load and train adapters with 🤗 PEFT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/model_sharing">Share your model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/transformers_agents">Agents </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/llm_tutorial">Generation with LLMs </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Task Guides</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 
hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Natural Language Processing</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Computer Vision</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Generation</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Prompting</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Developer guides</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/fast_tokenizers">Use fast tokenizers from 🤗 Tokenizers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/multilingual">Run inference with multilingual models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/create_a_model">Use model-specific APIs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/custom_models">Share a custom model </a><a data-sveltekit-reload="" 
class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/chat_templating">Templates for chat models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/sagemaker">Run training on Amazon SageMaker </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/serialization">Export to ONNX </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tflite">Export to TFLite </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/torchscript">Export to TorchScript </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/benchmarks">Benchmarks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/notebooks">Notebooks with examples </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/community">Community resources </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/custom_tools">Custom Tools and Prompts </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/troubleshooting">Troubleshoot </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Performance and scalability</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/performance">Overview </a><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Efficient training techniques</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 
hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_gpu_one">Methods and tools for efficient training on a single GPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_gpu_many">Multiple GPUs and parallelism </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_cpu">Efficient training on CPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_cpu_many">Distributed CPU training </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_tpu">Training on TPUs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_tpu_tf">Training on TPU with TensorFlow </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_special">Training on Specialized Hardware </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_hardware">Custom hardware for training </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/hpo_train">Hyperparameter Search using Trainer API </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Optimizing inference</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_cpu">Inference on CPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_gpu_one">Inference on one GPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_gpu_many">Inference on many GPUs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" 
href="/docs/transformers/v4.34.0/en/perf_infer_special">Inference on Specialized Hardware </a> </div><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/big_models">Instantiating a big model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/debugging">Troubleshooting </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tf_xla">XLA Integration for TensorFlow Models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/perf_torch_compile">Optimize inference using `torch.compile()` </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Contribute</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/contributing">How to contribute to transformers? </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/add_new_model">How to add a model to 🤗 Transformers? </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/add_tensorflow_model">How to convert a 🤗 Transformers model to TensorFlow? </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/add_new_pipeline">How to add a pipeline to 🤗 Transformers? 
</a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/testing">Testing </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pr_checks">Checks on a Pull Request </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Conceptual guides</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/philosophy">Philosophy </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/glossary">Glossary </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/task_summary">What 🤗 Transformers can do </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tasks_explained">How 🤗 Transformers solve tasks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/model_summary">The Transformer model family </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tokenizer_summary">Summary of the tokenizers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/attention">Attention mechanisms </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pad_truncation">Padding and truncation </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/bertology">BERTology </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/perplexity">Perplexity of fixed-length models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pipeline_webserver">Pipelines for webserver inference </a><a 
data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/model_memory_anatomy">Model training anatomy </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>API</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Main Classes</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/agent">Agents and Tools </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/model_doc/auto">Auto Classes </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/callback">Callbacks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/configuration">Configuration </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/data_collator">Data Collator </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/keras_callbacks">Keras callbacks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/logging">Logging </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/model">Models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/text_generation">Text Generation </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/onnx">ONNX </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 
first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/optimizer_schedules">Optimization </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/output">Model outputs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/pipelines">Pipelines </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/processors">Processors </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/quantization">Quantization </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/tokenizer">Tokenizer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/trainer">Trainer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/deepspeed">DeepSpeed Integration </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/feature_extractor">Feature Extractor </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/image_processor">Image Processor </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Models</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Text models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Vision models</span> </span></span></div></div> <div class="group flex cursor-pointer 
items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal models</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/align">ALIGN </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/altclip">AltCLIP </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/blip">BLIP </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/blip-2">BLIP-2 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bridgetower">BridgeTower </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bros">BROS </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/chinese_clip">Chinese-CLIP </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/clip">CLIP </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/clipseg">CLIPSeg </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/data2vec">Data2Vec </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/deplot">DePlot </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/donut">Donut </a><a data-sveltekit-reload="" 
class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/flava">FLAVA </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/git">GIT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/groupvit">GroupViT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/idefics">IDEFICS </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/instructblip">InstructBLIP </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/layoutlm">LayoutLM </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/layoutlmv2">LayoutLMV2 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/layoutlmv3">LayoutLMV3 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/layoutxlm">LayoutXLM </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/lilt">LiLT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/lxmert">LXMERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/matcha">MatCha </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mgp-str">MGP-STR </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/nougat">Nougat </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/oneformer">OneFormer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 
hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/owlvit">OWL-ViT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/perceiver">Perceiver </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/pix2struct">Pix2Struct </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/sam">Segment Anything </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder">Speech Encoder Decoder Models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/tapas">TAPAS </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/trocr">TrOCR </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/tvlt">TVLT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/vilt">ViLT </a><a data-sveltekit-reload="" class="rounded-xl bg-gradient-to-br from-black to-gray-900 py-1 pr-2 pl-2 text-white first:mt-1 last:mb-4 dark:from-gray-800 dark:to-gray-900 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder">Vision Encoder Decoder Models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder">Vision Text Dual Encoder </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/visual_bert">VisualBERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xclip">X-CLIP </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Reinforcement learning models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 
dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Time series models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Graph models</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Internal Helpers</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/modeling_utils">Custom Layers and Utilities </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/pipelines_utils">Utilities for pipelines </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/tokenization_utils">Utilities for Tokenizers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/trainer_utils">Utilities for Trainer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/generation_utils">Utilities for Generation </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/image_processing_utils">Utilities for Image Processors </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/audio_utils">Utilities for Audio processing </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/file_utils">General Utilities </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/time_series_utils">Utilities for Time Series </a> </div> </div></nav></div></div></div> <div class="z-1 min-w-0 flex-1"> <div class="px-6 pt-6 md:px-12 md:pt-16 md:pb-16"><div class="max-w-4xl mx-auto mb-10"><div class="relative overflow-hidden rounded-xl 
bg-gradient-to-br from-orange-300/10 py-5 px-4 ring-1 ring-orange-100/70 md:px-6 md:py-8"><img alt="Hugging Face's logo" class="absolute -right-6 -bottom-6 w-28 -rotate-45 md:hidden" src="/front/assets/huggingface_logo-noborder.svg"> <div class="mb-2 text-2xl font-bold dark:text-gray-200 md:mb-0">Join the Hugging Face community</div> <p class="mb-4 text-lg text-gray-400 dark:text-gray-300 md:mb-8">and get access to the augmented documentation experience </p> <div class="mb-8 hidden space-y-4 md:block xl:flex xl:space-y-0 xl:space-x-6"><div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-indigo-100 to-indigo-100/20 dark:to-indigo-100"><svg class="text-indigo-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Collaborate on models, datasets and Spaces </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-orange-100 to-orange-100/20 dark:to-orange-50"><svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" class="text-xl text-yellow-400" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M11 15H6l7-14v8h5l-7 14v-8z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Faster examples with accelerated inference </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-gray-500/10 to-gray-500/5"><svg class="text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M14.9804 3C14.9217 3.0002 14.8631 3.00555 14.8054 3.016C11.622 3.58252 8.76073 5.30669 6.77248 7.85653C4.78422 10.4064 3.80955 13.6016 4.03612 16.8271C4.26268 20.0525 5.67447 23.0801 7.99967 25.327C10.3249 27.5738 13.3991 28.8811 16.6304 28.997C16.7944 29.003 16.9584 28.997 17.1204 28.997C19.2193 28.9984 21.2877 28.4943 23.1507 27.5274C25.0137 26.5605 26.6164 25.1592 27.8234 23.442C27.9212 23.294 27.9783 23.1229 27.9889 22.9458C27.9995 22.7687 27.9633 22.592 27.884 22.4333C27.8046 22.2747 27.6848 22.1397 27.5367 22.0421C27.3887 21.9444 27.2175 21.8875 27.0404 21.877C25.0426 21.7017 23.112 21.0693 21.3976 20.0288C19.6832 18.9884 18.231 17.5676 17.1533 15.8764C16.0756 14.1852 15.4011 12.2688 15.1822 10.2754C14.9632 8.28193 15.2055 6.26484 15.8904 4.38C15.9486 4.22913 15.97 4.06652 15.9527 
# Vision Encoder Decoder Models

## Overview

The [VisionEncoderDecoderModel](/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderModel) can be used to initialize an image-to-text model with any pretrained Transformer-based vision model as the encoder (*e.g.* [ViT](vit), [BEiT](beit), [DeiT](deit), [Swin](swin)) and any pretrained language model as the decoder (*e.g.* [RoBERTa](roberta), [GPT2](gpt2), [BERT](bert), [DistilBERT](distilbert)).

The effectiveness of initializing image-to-text-sequence models with pretrained checkpoints has been shown in (for example) [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei.

After such a `VisionEncoderDecoderModel` has been trained/fine-tuned, it can be saved and loaded just like any other model (see the examples below for more information).

An example application is image captioning, in which the encoder is used to encode the image, after which an autoregressive language model generates the caption. Another example is optical character recognition: refer to [TrOCR](trocr), which is an instance of `VisionEncoderDecoderModel`.
## Randomly initializing `VisionEncoderDecoderModel` from model configurations

[VisionEncoderDecoderModel](/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderModel) can be randomly initialized from an encoder and a decoder config. In the following example, we show how to do this using the default [ViTModel](/docs/transformers/v4.34.0/en/model_doc/vit#transformers.ViTModel) configuration for the encoder and the default `BertForCausalLM` configuration for the decoder.

```python
>>> from transformers import BertConfig, ViTConfig, VisionEncoderDecoderConfig, VisionEncoderDecoderModel

>>> config_encoder = ViTConfig()
>>> config_decoder = BertConfig()

>>> config = VisionEncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)
>>> model = VisionEncoderDecoderModel(config=config)
```
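The configurations passed to `from_encoder_decoder_configs` do not have to be the defaults. As a sketch (the sizes below are arbitrary and purely for illustration), a smaller randomly initialized decoder could be configured like this:

```python
from transformers import BertConfig, ViTConfig, VisionEncoderDecoderConfig, VisionEncoderDecoderModel

# default ViT encoder, smaller-than-default BERT-style decoder (illustrative sizes)
config_encoder = ViTConfig()
config_decoder = BertConfig(hidden_size=512, num_hidden_layers=6, num_attention_heads=8)

config = VisionEncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)
model = VisionEncoderDecoderModel(config=config)
```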
## Initializing `VisionEncoderDecoderModel` from a pretrained encoder and a pretrained decoder

[VisionEncoderDecoderModel](/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderModel) can be initialized from a pretrained encoder checkpoint and a pretrained decoder checkpoint. Note that any pretrained Transformer-based vision model, *e.g.* [Swin](swin), can serve as the encoder, and that pretrained auto-encoding models (*e.g.* BERT), pretrained causal language models (*e.g.* GPT2), as well as the pretrained decoder part of sequence-to-sequence models (*e.g.* the decoder of BART) can be used as the decoder.
Depending on which architecture you choose as the decoder, the cross-attention layers might be randomly initialized.
Initializing `VisionEncoderDecoderModel` from a pretrained encoder and decoder checkpoint therefore requires the model to be fine-tuned on a downstream task, as has been shown in the [*Warm-starting-encoder-decoder* blog post](https://huggingface.co/blog/warm-starting-encoder-decoder).
To do so, the `VisionEncoderDecoderModel` class provides a [VisionEncoderDecoderModel.from_encoder_decoder_pretrained()](/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderModel.from_encoder_decoder_pretrained) method.

```python
>>> from transformers import VisionEncoderDecoderModel

>>> model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
...     "microsoft/swin-base-patch4-window7-224-in22k", "bert-base-uncased"
... )
```
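As noted in the overview, the warm-started model can then be saved and reloaded like any other model. A short sketch (the directory name is arbitrary), continuing from the snippet above:

```python
from transformers import VisionEncoderDecoderModel

# `model` is the warm-started model from the previous snippet;
# saving writes the weights together with the composite config
model.save_pretrained("./swin-bert")  # arbitrary local path
model = VisionEncoderDecoderModel.from_pretrained("./swin-bert")
```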
## Loading an existing `VisionEncoderDecoderModel` checkpoint and performing inference

To load fine-tuned checkpoints of the `VisionEncoderDecoderModel` class, [VisionEncoderDecoderModel](/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderModel) provides the `from_pretrained(...)` method just like any other model architecture in Transformers.

To perform inference, one uses the `generate()` method, which allows text to be generated autoregressively. This method supports various forms of decoding, such as greedy, beam search and multinomial sampling.
```python
>>> import requests
>>> from PIL import Image

>>> from transformers import GPT2TokenizerFast, ViTImageProcessor, VisionEncoderDecoderModel

>>> # load a fine-tuned image captioning model and corresponding tokenizer and image processor
>>> model = VisionEncoderDecoderModel.from_pretrained("nlpconnect/vit-gpt2-image-captioning")
>>> tokenizer = GPT2TokenizerFast.from_pretrained("nlpconnect/vit-gpt2-image-captioning")
>>> image_processor = ViTImageProcessor.from_pretrained("nlpconnect/vit-gpt2-image-captioning")

>>> # let's perform inference on an image
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> pixel_values = image_processor(image, return_tensors="pt").pixel_values

>>> # autoregressively generate caption (uses greedy decoding by default)
>>> generated_ids = model.generate(pixel_values)
>>> generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
>>> print(generated_text)
a cat laying on a blanket next to a cat laying on a bed
```
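Because captioning goes through `generate()`, the decoding strategy can be switched with the usual generation arguments. A sketch with illustrative (not tuned) values, continuing from the example above:

```python
# beam search instead of the default greedy decoding (values are illustrative)
generated_ids = model.generate(pixel_values, num_beams=4, max_new_tokens=30)

# multinomial sampling
generated_ids = model.generate(pixel_values, do_sample=True, top_k=50, temperature=0.9)
```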
class="hljs-built_in">next</span> to a cat laying on a bed</pre></div> <h2 class="relative group"><a id="loading-a-pytorch-checkpoint-into-tfvisionencoderdecodermodel" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#loading-a-pytorch-checkpoint-into-tfvisionencoderdecodermodel"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-xpt4iv">Loading a PyTorch checkpoint into <code>TFVisionEncoderDecoderModel</code>.</span></h2> <p data-svelte-h="svelte-ime2r"><code>TFVisionEncoderDecoderModel.from_pretrained()</code> currently doesn’t support initializing the model from a PyTorch checkpoint. Passing <code>from_pt=True</code> to this method will throw an exception. If there are only PyTorch checkpoints for a particular vision encoder-decoder model, a workaround is:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> VisionEncoderDecoderModel, TFVisionEncoderDecoderModel <span class="hljs-meta">&gt;&gt;&gt; </span>_model = VisionEncoderDecoderModel.from_pretrained(<span class="hljs-string">"nlpconnect/vit-gpt2-image-captioning"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>_model.encoder.save_pretrained(<span class="hljs-string">"./encoder"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>_model.decoder.save_pretrained(<span class="hljs-string">"./decoder"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = TFVisionEncoderDecoderModel.from_encoder_decoder_pretrained( <span 
class="hljs-meta">... </span> <span class="hljs-string">"./encoder"</span>, <span class="hljs-string">"./decoder"</span>, encoder_from_pt=<span class="hljs-literal">True</span>, decoder_from_pt=<span class="hljs-literal">True</span> <span class="hljs-meta">... </span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># This is only for copying some specific attributes of this particular model.</span> <span class="hljs-meta">&gt;&gt;&gt; </span>model.config = _model.config</pre></div> <h2 class="relative group"><a id="training" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#training"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1q1s287">Training</span></h2> <p data-svelte-h="svelte-f1ugst">Once the model is created, it can be fine-tuned similar to BART, T5 or any other encoder-decoder model on a dataset of (image, text) pairs. As you can see, only 2 inputs are required for the model in order to compute a loss: <code>pixel_values</code> (which are the images) and <code>labels</code> (which are the <code>input_ids</code> of the encoded target sequence).</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> ViTImageProcessor, BertTokenizer, VisionEncoderDecoderModel <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> datasets <span class="hljs-keyword">import</span> load_dataset <span class="hljs-meta">&gt;&gt;&gt; 
```python
>>> from transformers import ViTImageProcessor, BertTokenizer, VisionEncoderDecoderModel
>>> from datasets import load_dataset

>>> image_processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
>>> tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
>>> model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
...     "google/vit-base-patch16-224-in21k", "bert-base-uncased"
... )

>>> model.config.decoder_start_token_id = tokenizer.cls_token_id
>>> model.config.pad_token_id = tokenizer.pad_token_id

>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> pixel_values = image_processor(image, return_tensors="pt").pixel_values

>>> labels = tokenizer(
...     "an image of two cats chilling on a couch",
...     return_tensors="pt",
... ).input_ids

>>> # the forward function automatically creates the correct decoder_input_ids
>>> loss = model(pixel_values=pixel_values, labels=labels).loss
```
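The scalar loss returned above can then drive a standard PyTorch optimization step. The sketch below is purely illustrative (a single example, arbitrary optimizer and learning rate); in practice you would iterate over a batched dataset:

```python
import torch

# continuing from the training example above; optimizer and learning rate are arbitrary choices
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

loss = model(pixel_values=pixel_values, labels=labels).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
```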
This model was contributed by [nielsr](https://github.com/nielsrogge). This model’s TensorFlow and Flax versions were contributed by [ydshieh](https://github.com/ydshieh).

## VisionEncoderDecoderConfig

### class transformers.VisionEncoderDecoderConfig

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vision_encoder_decoder/configuration_vision_encoder_decoder.py#L33)

`( **kwargs )`
**Parameters**

- **kwargs** (*optional*) — Dictionary of keyword arguments. Notably:
  - **encoder** (`PretrainedConfig`, *optional*) — An instance of a configuration object that defines the encoder config.
  - **decoder** (`PretrainedConfig`, *optional*) — An instance of a configuration object that defines the decoder config.

`VisionEncoderDecoderConfig` is the configuration class to store the configuration of a `VisionEncoderDecoderModel`.
It is used to instantiate a Vision-Encoder-Text-Decoder model according to the specified arguments, defining the encoder and decoder configs.

Configuration objects inherit from `PretrainedConfig` and can be used to control the model outputs. Read the documentation from `PretrainedConfig` for more information.

Examples:

```python
>>> from transformers import BertConfig, ViTConfig, VisionEncoderDecoderConfig, VisionEncoderDecoderModel

>>> # Initializing a ViT & BERT style configuration
>>> config_encoder = ViTConfig()
>>> config_decoder = BertConfig()

>>> config = VisionEncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)

>>> # Initializing a ViTBert model (with random weights) from ViT & bert-base-uncased style configurations
>>> model = VisionEncoderDecoderModel(config=config)

>>> # Accessing the model configuration
>>> config_encoder = model.config.encoder
>>> config_decoder = model.config.decoder
>>> # set decoder config to causal lm
>>> config_decoder.is_decoder = True
>>> config_decoder.add_cross_attention = True

>>> # Saving the model, including its configuration
>>> model.save_pretrained("my-model")

>>> # loading model and config from pretrained folder
>>> encoder_decoder_config = VisionEncoderDecoderConfig.from_pretrained("my-model")
>>> model = VisionEncoderDecoderModel.from_pretrained("my-model", config=encoder_decoder_config)
```

#### from_encoder_decoder_configs
aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vision_encoder_decoder/configuration_vision_encoder_decoder.py#L99" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_config<span class="opacity-60">: PretrainedConfig</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_config<span class="opacity-60">: PretrainedConfig</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderConfig">VisionEncoderDecoderConfig</a></span></span></p> <div class="!mb-10 relative docstring-details "> <div id="transformers.VisionEncoderDecoderConfig.from_encoder_decoder_configs.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderConfig">VisionEncoderDecoderConfig</a></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>An instance of a configuration object</p> </p> </div></div> <p data-svelte-h="svelte-sl5g3p">Instantiate a <a href="/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderConfig">VisionEncoderDecoderConfig</a> (or a derived class) from a pre-trained encoder model configuration and decoder model configuration.</p></div></div> <h2 class="relative group"><a id="transformers.VisionEncoderDecoderModel" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VisionEncoderDecoderModel"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 
256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-19dn9ii">VisionEncoderDecoderModel</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.VisionEncoderDecoderModel"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">VisionEncoderDecoderModel</span></span></h3> <a id="transformers.VisionEncoderDecoderModel" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.VisionEncoderDecoderModel"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vision_encoder_decoder/modeling_vision_encoder_decoder.py#L151" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span 
class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60">: typing.Optional[transformers.configuration_utils.PretrainedConfig] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder<span class="opacity-60">: typing.Optional[transformers.modeling_utils.PreTrainedModel] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder<span class="opacity-60">: typing.Optional[transformers.modeling_utils.PreTrainedModel] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VisionEncoderDecoderModel.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VisionEncoderDecoderModel.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderConfig">VisionEncoderDecoderConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-sjm8yc">This class can be used to initialize an image-to-text-sequence model with any pretrained vision autoencoding model as the encoder and any pretrained text autoregressive model as the decoder. The encoder is loaded via <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained">from_pretrained()</a> function and the decoder is loaded via <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained">from_pretrained()</a> function. 
Cross-attention layers are automatically added to the decoder and should be fine-tuned on a downstream generative task, like image captioning.

The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation tasks was shown in [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.

Additionally, in [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) it is shown how leveraging large pretrained vision models for optical character recognition (OCR) yields a significant performance improvement.

After such a Vision-Encoder-Text-Decoder model has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples for more information).

This model inherits from `PreTrainedModel`. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.).

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

`VisionEncoderDecoderModel` is a generic model class that will be instantiated as a transformer architecture, with one of the base vision model classes of the library as encoder and another one as decoder, when created with the `AutoModel.from_pretrained()` class method for the encoder and the `AutoModelForCausalLM.from_pretrained()` class method for the decoder.
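As a quick illustration of warm-starting the composite model, here is a minimal sketch; the ViT and GPT-2 checkpoint names are only examples, and any vision encoder / autoregressive text decoder pair can be substituted.

```python
>>> from transformers import VisionEncoderDecoderModel

>>> # warm-start an image-to-text model from a pretrained ViT encoder and GPT-2 decoder;
>>> # randomly initialized cross-attention layers are added to the decoder
>>> model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
...     "google/vit-base-patch16-224-in21k", "gpt2"
... )

>>> # the composite model can be saved and reloaded like any other model
>>> model.save_pretrained("./vit-gpt2")
>>> model = VisionEncoderDecoderModel.from_pretrained("./vit-gpt2")
```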
#### forward

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vision_encoder_decoder/modeling_vision_encoder_decoder.py#L519)

`( pixel_values: typing.Optional[torch.FloatTensor] = None, decoder_input_ids: typing.Optional[torch.LongTensor] = None, decoder_attention_mask: typing.Optional[torch.BoolTensor] = None, encoder_outputs: typing.Optional[typing.Tuple[torch.FloatTensor]] = None, past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None, decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None, labels: typing.Optional[torch.LongTensor] = None, use_cache: typing.Optional[bool] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None, **kwargs )` → `transformers.modeling_outputs.Seq2SeqLMOutput` or `tuple(torch.FloatTensor)`

**Parameters**

- **pixel_values** (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`) — Pixel values. Pixel values can be obtained using an image processor (e.g.
  if you use ViT as the encoder, you should use `AutoImageProcessor`). See `ViTImageProcessor.__call__()` for details.
- **decoder_input_ids** (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*) — Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using `PreTrainedTokenizer`. See `PreTrainedTokenizer.encode()` and `PreTrainedTokenizer.__call__()` for details. [What are input IDs?](../glossary#input-ids)

  If `past_key_values` is used, optionally only the last `decoder_input_ids` have to be input (see `past_key_values`).

  For training, `decoder_input_ids` are automatically created by the model by shifting the `labels` to the right, replacing -100 by the `pad_token_id` and prepending them with the `decoder_start_token_id`.
- **decoder_attention_mask** (`torch.BoolTensor` of shape `(batch_size, target_sequence_length)`, *optional*) — Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`.
  Causal mask will also be used by default.
- **encoder_outputs** (`tuple(torch.FloatTensor)`, *optional*) — This tuple must consist of (`last_hidden_state`, *optional*: `hidden_states`, *optional*: `attentions`). `last_hidden_state` (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`) is a tensor of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
- **past_key_values** (`tuple(tuple(torch.FloatTensor))` of length `config.n_layers` with each tuple having 4 tensors of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.

  If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that don’t have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)`.
- **decoder_inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, target_sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `decoder_input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `decoder_input_ids` indices into associated vectors than the model’s internal embedding lookup matrix.
- **labels** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Labels for computing the masked language modeling loss for the decoder. Indices should be in `[-100, 0, ..., config.vocab_size]` (see the `input_ids` docstring). Tokens with indices set to `-100` are ignored (masked); the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
- **use_cache** (`bool`, *optional*) — If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`).
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — If set to `True`, the model will return a `~utils.Seq2SeqLMOutput` instead of a plain tuple.
- **kwargs** (*optional*) — Remaining dictionary of keyword arguments. Keyword arguments come in two flavors:
  - Without a prefix, which will be input as `**encoder_kwargs` for the encoder forward function.
  - With a *decoder_* prefix, which will be input as `**decoder_kwargs` for the decoder forward function.

**Returns**: `transformers.modeling_outputs.Seq2SeqLMOutput` or `tuple(torch.FloatTensor)`

A `transformers.modeling_outputs.Seq2SeqLMOutput` or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration (`VisionEncoderDecoderConfig`) and inputs.

- **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) — Language modeling loss.
- **logits** (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
- **past_key_values** (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`) — Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)` and 2 additional tensors of shape `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`. Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see the `past_key_values` input) to speed up sequential decoding.
- **decoder_hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
- **decoder_attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
- **cross_attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
- **encoder_last_hidden_state** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
- **encoder_hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
- **encoder_attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.

The `VisionEncoderDecoderModel` forward method overrides the `__call__` special method.

> Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Examples:

```python
>>> from transformers import AutoProcessor, VisionEncoderDecoderModel
>>> import requests
>>> from PIL import Image
>>> import torch

>>> processor = AutoProcessor.from_pretrained("microsoft/trocr-base-handwritten")
>>> model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")

>>> # load image from the IAM dataset
>>> url = "https://fki.tic.heia-fr.ch/static/img/a01-122-02.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

>>> # training
>>> model.config.decoder_start_token_id = processor.tokenizer.cls_token_id
>>> model.config.pad_token_id = processor.tokenizer.pad_token_id
>>> model.config.vocab_size = model.config.decoder.vocab_size

>>> pixel_values = processor(image, return_tensors="pt").pixel_values
>>> text = "hello world"
>>> labels = processor.tokenizer(text, return_tensors="pt").input_ids
>>> outputs = model(pixel_values=pixel_values, labels=labels)
>>> loss = outputs.loss

>>> # inference (generation)
>>> generated_ids = model.generate(pixel_values)
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
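The fields of the returned `Seq2SeqLMOutput` documented above can be inspected directly on the output object. The sketch below is illustrative only and is not part of the original example: it reuses the `model`, `pixel_values` and `labels` defined above and assumes `output_attentions=True` is passed so that the attention tuples are populated.

```python
>>> # illustrative sketch: inspect the documented output fields
>>> outputs = model(pixel_values=pixel_values, labels=labels, output_attentions=True)
>>> outputs.loss                   # language modeling loss, returned because `labels` were provided
>>> outputs.logits.shape           # (batch_size, sequence_length, config.vocab_size)
>>> len(outputs.cross_attentions)  # one attention tensor per decoder layer
>>> outputs.encoder_last_hidden_state.shape  # (batch_size, encoder_sequence_length, hidden_size)
```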
#### from_encoder_decoder_pretrained

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vision_encoder_decoder/modeling_vision_encoder_decoder.py#L365)

( encoder_pretrained_model_name_or_path: str = None, decoder_pretrained_model_name_or_path: str = None, *model_args, **kwargs )

Parameters

- **encoder_pretrained_model_name_or_path** (`str`, *optional*) — Information necessary to initiate the image encoder. Can be either:

  - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. An example is `google/vit-base-patch16-224-in21k`.
  - A path to a *directory* containing model weights saved using [save_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.save_pretrained), e.g., `./my_model_directory/`.
  - A path or url to a *tensorflow index checkpoint file* (e.g., `./tf_model/model.ckpt.index`). In this case, `from_tf` should be set to `True` and a configuration object should be provided as the `config` argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.

- **decoder_pretrained_model_name_or_path** (`str`, *optional*, defaults to `None`) — Information necessary to initiate the text decoder. Can be either:

  - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like `bert-base-uncased`, or namespaced under a user or organization name, like `dbmdz/bert-base-german-cased`.
  - A path to a *directory* containing model weights saved using [save_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.save_pretrained), e.g., `./my_model_directory/`.
  - A path or url to a *tensorflow index checkpoint file* (e.g., `./tf_model/model.ckpt.index`). In this case, `from_tf` should be set to `True` and a configuration object should be provided as the `config` argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.

- **model_args** (remaining positional arguments, *optional*) — All remaining positional arguments will be passed to the underlying model's `__init__` method.

- **kwargs** (remaining dictionary of keyword arguments, *optional*) — Can be used to update the configuration object (after it has been loaded) and initiate the model (e.g., `output_attentions=True`).

  - To update the encoder configuration, use the prefix *encoder_* for each configuration parameter.
  - To update the decoder configuration, use the prefix *decoder_* for each configuration parameter.
  - To update the parent model configuration, do not use a prefix for each configuration parameter.

  Behaves differently depending on whether a `config` is provided or automatically loaded.
Instantiate an encoder and a decoder from one or two base classes of the library from pretrained model checkpoints.

The model is set in evaluation mode by default using `model.eval()` (Dropout modules are deactivated). To train the model, you need to first set it back in training mode with `model.train()`.

Example:

```python
>>> from transformers import VisionEncoderDecoderModel

>>> # initialize a vit-bert from a pretrained ViT and a pretrained BERT model. Note that the cross-attention layers will be randomly initialized
>>> model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
...     "google/vit-base-patch16-224-in21k", "bert-base-uncased"
... )
>>> # saving model after fine-tuning
>>> model.save_pretrained("./vit-bert")
>>> # load fine-tuned model
>>> model = VisionEncoderDecoderModel.from_pretrained("./vit-bert")
```
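As described in the `kwargs` parameter above, configuration parameters prefixed with *encoder_* or *decoder_* are routed to the respective sub-configuration while loading. The following sketch is illustrative only: the dropout values are arbitrary, and `hidden_dropout_prob` is assumed here because it exists in both the ViT and BERT configurations.

```python
>>> from transformers import VisionEncoderDecoderModel

>>> # illustrative sketch: `encoder_`/`decoder_`-prefixed kwargs update the sub-configurations while loading
>>> model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
...     "google/vit-base-patch16-224-in21k",
...     "bert-base-uncased",
...     encoder_hidden_dropout_prob=0.2,  # applied to the ViT encoder configuration
...     decoder_hidden_dropout_prob=0.3,  # applied to the BERT decoder configuration
... )
>>> model.config.encoder.hidden_dropout_prob  # 0.2
>>> model.config.decoder.hidden_dropout_prob  # 0.3
```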
## TFVisionEncoderDecoderModel

### class transformers.TFVisionEncoderDecoderModel

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vision_encoder_decoder/modeling_tf_vision_encoder_decoder.py#L176)

( *args, **kwargs )

Parameters

- **config** ([VisionEncoderDecoderConfig](/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel.from_pretrained) method to load the model weights.

This class can be used to initialize an image-to-text-sequence model with any pretrained vision autoencoding model as the encoder and any pretrained text autoregressive model as the decoder. The encoder is loaded via the [from_pretrained()](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) function and the decoder is loaded via the [from_pretrained()](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) function. Cross-attention layers are automatically added to the decoder and should be fine-tuned on a downstream generative task, like image captioning.

The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation tasks was shown in [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, and Aliaksei Severyn.

Additionally, [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) shows how leveraging large pretrained vision models for optical character recognition (OCR) yields a significant performance improvement.

After such a Vision-Encoder-Text-Decoder model has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples for more information).

This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.).

This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

[TFVisionEncoderDecoderModel](/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder#transformers.TFVisionEncoderDecoderModel) is a generic model class that will be instantiated as a transformer architecture with one of the base vision model classes of the library as encoder and another one of the base model classes as decoder when created with the [from_pretrained()](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) class method for the encoder and the [from_pretrained()](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) class method for the decoder.
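A TensorFlow model can also be created from checkpoints trained with the PyTorch class. The sketch below is illustrative only: it assumes the `./vit-bert` directory saved in the PyTorch example above (ViT and BERT both have TensorFlow implementations) and uses `from_pt=True` to convert the PyTorch weights on the fly, since no native TensorFlow weights were saved there. The `image` variable refers to the PIL image loaded in the earlier example.

```python
>>> from transformers import AutoImageProcessor, TFVisionEncoderDecoderModel

>>> # illustrative sketch: reload the PyTorch checkpoint saved above as a TensorFlow model
>>> model = TFVisionEncoderDecoderModel.from_pretrained("./vit-bert", from_pt=True)

>>> # pixel values are prepared the same way as for PyTorch, but returned as TF tensors
>>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
>>> pixel_values = image_processor(image, return_tensors="tf").pixel_values  # `image` is a PIL image
```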
href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vision_encoder_decoder/modeling_tf_vision_encoder_decoder.py#L486" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">pixel_values<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_input_ids<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_attention_mask<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_outputs<span class="opacity-60">: Optional[Union[Tuple, TFBaseModelOutput]] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">past_key_values<span class="opacity-60">: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_inputs_embeds<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">labels<span class="opacity-60">: np.ndarray | tf.Tensor | None = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">use_cache<span class="opacity-60">: Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">training<span class="opacity-60">: bool = False</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a 
href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFSeq2SeqLMOutput">transformers.modeling_tf_outputs.TFSeq2SeqLMOutput</a> or <code>tuple(tf.Tensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 13 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFVisionEncoderDecoderModel.call.pixel_values" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFVisionEncoderDecoderModel.call.pixel_values"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>pixel_values</strong> (<code>np.ndarray</code>, <code>tf.Tensor</code>, <code>List[tf.Tensor]</code> `<code>Dict[str, tf.Tensor]</code> or <code>Dict[str, np.ndarray]</code> and each example must have the shape <code>(batch_size, num_channels, height, width)</code>) — Pixel values. Pixel values can be obtained using the vision’s model’s image processor. For example, using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoImageProcessor">AutoImageProcessor</a>. 
See <a href="/docs/transformers/v4.34.0/en/model_doc/deit#transformers.DeiTFeatureExtractor.__call__">ViTImageProcessor.<strong>call</strong>()</a> for details.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.TFVisionEncoderDecoderModel.call.decoder_input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFVisionEncoderDecoderModel.call.decoder_input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>decoder_input_ids</strong> (<code>np.ndarray</code> or <code>tf.Tensor</code> of shape <code>(batch_size, target_sequence_length)</code>, <em>optional</em>) — Indices of decoder input sequence tokens in the vocabulary.<p></p> <p>Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer">PreTrainedTokenizer</a>. See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p> <p><a href="../glossary#input-ids">What are input IDs?</a></p> <p>If <code>past_key_values</code> is used, optionally only the last <code>decoder_input_ids</code> have to be input (see <code>past_key_values</code>).</p> <p>Provide for sequence to sequence training to the decoder. Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer">PreTrainedTokenizer</a>. 
- **decoder_attention_mask** (`np.ndarray` or `tf.Tensor` of shape `(batch_size, target_sequence_length)`, *optional*) — Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. A causal mask will also be used by default.
- **encoder_outputs** (`tuple(tuple(tf.Tensor))`, *optional*) — This tuple must consist of (`last_hidden_state`, *optional*: `hidden_states`, *optional*: `attentions`). `last_hidden_state` (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`) is a tensor of hidden-states at the output of the last layer of the encoder, used in the cross-attention of the decoder.
- **past_key_values** (`tuple(tuple(tf.Tensor))` of length `config.n_layers`, with each tuple having 4 tensors of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)`.
- **decoder_inputs_embeds** (`np.ndarray` or `tf.Tensor` of shape `(batch_size, target_sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `decoder_input_ids`, you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `decoder_input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **labels** (`np.ndarray` or `tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Labels for computing the masked language modeling loss for the decoder. Indices should be in `[-100, 0, ..., config.vocab_size]` (see the `input_ids` docstring). Tokens with indices set to `-100` are ignored (masked); the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
- **use_cache** (`bool`, *optional*) — If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`).
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — If set to `True`, the model will return a `~utils.Seq2SeqLMOutput` instead of a plain tuple.
- **training** (`bool`, *optional*, defaults to `False`) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).
11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>kwargs</strong> (<em>optional</em>) — Remaining dictionary of keyword arguments. Keyword arguments come in two flavors:<p></p> <ul> <li>Without a prefix which will be input as <code>**encoder_kwargs</code> for the encoder forward function.</li> <li>With a <em>decoder_</em> prefix which will be input as <code>**decoder_kwargs</code> for the decoder forward function.</li> </ul></span></span> </li></ul> <div id="transformers.TFVisionEncoderDecoderModel.call.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFSeq2SeqLMOutput">transformers.modeling_tf_outputs.TFSeq2SeqLMOutput</a> or <code>tuple(tf.Tensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_tf_outputs.TFSeq2SeqLMOutput">transformers.modeling_tf_outputs.TFSeq2SeqLMOutput</a> or a tuple of <code>tf.Tensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderConfig">VisionEncoderDecoderConfig</a>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<code>tf.Tensor</code> of shape <code>(n,)</code>, <em>optional</em>, where n is the number of non-masked labels, returned when <code>labels</code> is provided) — Language modeling loss.</p> </li> <li> <p><strong>logits</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, sequence_length, config.vocab_size)</code>) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).</p> </li> <li> <p><strong>past_key_values</strong> (<code>List[tf.Tensor]</code>, <em>optional</em>, returned when <code>use_cache=True</code> is passed or when <code>config.use_cache=True</code>) — List of <code>tf.Tensor</code> of length <code>config.n_layers</code>, with each tensor of shape <code>(2, batch_size, num_heads, sequence_length, embed_size_per_head)</code>).</p> <p>Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be used (see <code>past_key_values</code> input) to speed up sequential decoding.</p> </li> <li> <p><strong>decoder_hidden_states</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>tf.Tensor</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>decoder_attentions</strong> 
(<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>tf.Tensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> <li> <p><strong>cross_attentions</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>tf.Tensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.</p> </li> <li> <p><strong>encoder_last_hidden_state</strong> (<code>tf.Tensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>, <em>optional</em>) — Sequence of hidden-states at the output of the last layer of the encoder of the model.</p> </li> <li> <p><strong>encoder_hidden_states</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>tf.Tensor</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>encoder_attentions</strong> (<code>tuple(tf.Tensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>tf.Tensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-1htxgwp">The <a href="/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder#transformers.TFVisionEncoderDecoderModel">TFVisionEncoderDecoderModel</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.TFVisionEncoderDecoderModel.call.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.TFVisionEncoderDecoderModel.call.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" 
Examples:

```python
>>> from transformers import AutoImageProcessor, AutoTokenizer, TFVisionEncoderDecoderModel
>>> from PIL import Image
>>> import requests

>>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
>>> decoder_tokenizer = AutoTokenizer.from_pretrained("gpt2")

>>> # initialize a vit-gpt2 model from a pretrained ViT and a pretrained GPT2 model. Note that the cross-attention layers will be randomly initialized
>>> model = TFVisionEncoderDecoderModel.from_encoder_decoder_pretrained(
...     "google/vit-base-patch16-224-in21k", "gpt2"
... )

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> img = Image.open(requests.get(url, stream=True).raw)

>>> # forward
>>> pixel_values = image_processor(images=img, return_tensors="tf").pixel_values  # Batch size 1
>>> decoder_input_ids = decoder_tokenizer("Linda Davis", return_tensors="tf").input_ids  # Batch size 1
>>> outputs = model(pixel_values=pixel_values, decoder_input_ids=decoder_input_ids)

>>> # training
>>> outputs = model(pixel_values=pixel_values, decoder_input_ids=decoder_input_ids, labels=decoder_input_ids)
>>> loss, logits = outputs.loss, outputs.logits

>>> # save and load from pretrained
>>> model.save_pretrained("vit-gpt2")
>>> model = TFVisionEncoderDecoderModel.from_pretrained("vit-gpt2")

>>> # generation
>>> generated = model.generate(pixel_values, decoder_start_token_id=model.config.decoder.bos_token_id)
```
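The optional output tensors documented above are only populated when requested. A minimal sketch of inspecting them, reusing the `model`, `pixel_values` and `decoder_input_ids` objects from the example above:

```python
>>> # request attention weights and hidden states for a single forward pass
>>> outputs = model(
...     pixel_values=pixel_values,
...     decoder_input_ids=decoder_input_ids,
...     output_attentions=True,
...     output_hidden_states=True,
... )

>>> # one tensor per decoder layer, each of shape (batch_size, num_heads, sequence_length, sequence_length)
>>> print(len(outputs.decoder_attentions), outputs.decoder_attentions[0].shape)

>>> # encoder hidden states: the embedding output plus one tensor per encoder layer
>>> print(len(outputs.encoder_hidden_states))
```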
#### from_encoder_decoder_pretrained

`from_encoder_decoder_pretrained(encoder_pretrained_model_name_or_path: str = None, decoder_pretrained_model_name_or_path: str = None, *model_args, **kwargs)`

Parameters:

- **encoder_pretrained_model_name_or_path** (`str`, *optional*) — Information necessary to initiate the encoder. Can be either:
  - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. An example is `google/vit-base-patch16-224-in21k`.
  - A path to a *directory* containing model weights saved using `save_pretrained()`, e.g., `./my_model_directory/`.
  - A path or url to a *pytorch index checkpoint file* (e.g., `./pt_model/`). In this case, `encoder_from_pt` should be set to `True`.
- **decoder_pretrained_model_name_or_path** (`str`, *optional*, defaults to `None`) — Information necessary to initiate the decoder. Can be either:
  - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root level, like `bert-base-uncased`, or namespaced under a user or organization name, like `dbmdz/bert-base-german-cased`.
  - A path to a *directory* containing model weights saved using `save_pretrained()`, e.g., `./my_model_directory/`.
  - A path or url to a *pytorch checkpoint file* (e.g., `./pt_model/`). In this case, `decoder_from_pt` should be set to `True`.
- **model_args** (remaining positional arguments, *optional*) — All remaining positional arguments will be passed to the underlying model's `__init__` method.
- **kwargs** (remaining dictionary of keyword arguments, *optional*) — Can be used to update the configuration object (after it has been loaded) and initiate the model (e.g., `output_attentions=True`).
  - To update the encoder configuration, use the prefix *encoder_* for each configuration parameter.
  - To update the decoder configuration, use the prefix *decoder_* for each configuration parameter.
  - To update the parent model configuration, do not use a prefix for each configuration parameter.

  Behaves differently depending on whether a `config` is provided or automatically loaded.

Instantiate an encoder and a decoder from one or two base classes of the library from pretrained model checkpoints.

Example:

```python
>>> from transformers import TFVisionEncoderDecoderModel

>>> # initialize a vit-bert from a pretrained ViT and a pretrained BERT model. Note that the cross-attention layers will be randomly initialized
>>> model = TFVisionEncoderDecoderModel.from_encoder_decoder_pretrained(
...     "google/vit-base-patch16-224-in21k", "bert-base-uncased"
... )
>>> # saving model after fine-tuning
>>> model.save_pretrained("./vit-bert")
>>> # load fine-tuned model
>>> model = TFVisionEncoderDecoderModel.from_pretrained("./vit-bert")
```
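The *encoder_*/*decoder_* prefix convention for `kwargs` can also be used to override configuration attributes of either sub-model at load time. A minimal sketch, assuming configuration attributes (`hidden_dropout_prob`, `use_cache`) that exist on ViT-style and BERT-style configs; the attribute names are illustrative rather than required by the API:

```python
>>> from transformers import TFVisionEncoderDecoderModel

>>> # the encoder_/decoder_ prefixes route the values to the respective sub-configurations
>>> model = TFVisionEncoderDecoderModel.from_encoder_decoder_pretrained(
...     "google/vit-base-patch16-224-in21k",
...     "bert-base-uncased",
...     encoder_hidden_dropout_prob=0.2,  # applied to the encoder (ViT) config
...     decoder_use_cache=False,          # applied to the decoder (BERT) config
... )

>>> # the overrides are reflected in the composed configuration
>>> print(model.config.encoder.hidden_dropout_prob, model.config.decoder.use_cache)
```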
## FlaxVisionEncoderDecoderModel

### class transformers.FlaxVisionEncoderDecoderModel
`(config: VisionEncoderDecoderConfig, input_shape: Optional[Tuple] = None, seed: int = 0, dtype: dtype = <class 'jax.numpy.float32'>, _do_init: bool = True, **kwargs)`

Parameters:

- **config** (`VisionEncoderDecoderConfig`) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the `from_pretrained()` method to load the model weights.
- **dtype** (`jax.numpy.dtype`, *optional*, defaults to `jax.numpy.float32`) — The data type of the computation. Can be one of `jax.numpy.float32`, `jax.numpy.float16` (on GPUs) and `jax.numpy.bfloat16` (on TPUs).

  This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified, all the computation will be performed with the given `dtype`.

  **Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.**

  If you wish to change the dtype of the model parameters, see `to_fp16()` and `to_bf16()`.

This class can be used to initialize an image-to-text-sequence model with any pretrained vision autoencoding model as the encoder and any pretrained text autoregressive model as the decoder. Both the encoder and the decoder are loaded via the `from_pretrained()` function. Cross-attention layers are automatically added to the decoder and should be fine-tuned on a downstream generative task, like image captioning.

The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation tasks was shown in [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.

Additionally, in [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) it is shown how leveraging large pretrained vision models for optical character recognition (OCR) yields a significant performance improvement.

After such a Vision-Encoder-Text-Decoder model has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples for more information).

This model inherits from `FlaxPreTrainedModel`. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.).

This model is also a Flax Linen [flax.nn.Module](https://flax.readthedocs.io/en/latest/_autosummary/flax.nn.module.html) subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.

`FlaxVisionEncoderDecoderModel` is a generic model class that will be instantiated as a transformer architecture with the module (flax.nn.Module) of one of the base vision model classes of the library as the encoder module and another one as the decoder module, when created with the `FlaxAutoModel.from_pretrained()` class method for the encoder and the `FlaxAutoModelForCausalLM.from_pretrained()` class method for the decoder.
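A minimal sketch of how the `dtype` argument and the parameter-casting helpers mentioned above fit together; the checkpoint name below is a placeholder for any Flax-compatible VisionEncoderDecoder checkpoint, not a real model id:

```python
>>> import jax.numpy as jnp
>>> from transformers import FlaxVisionEncoderDecoderModel

>>> # "my-username/my-vit-gpt2-checkpoint" is a hypothetical checkpoint; dtype only sets the computation dtype
>>> model = FlaxVisionEncoderDecoderModel.from_pretrained(
...     "my-username/my-vit-gpt2-checkpoint", dtype=jnp.bfloat16
... )

>>> # to also cast the parameters themselves to bfloat16, use the casting helper
>>> model.params = model.to_bf16(model.params)
```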
#### __call__

`(pixel_values: Array, decoder_input_ids: Optional[jax.Array] = None, decoder_attention_mask: Optional[jax.Array] = None, decoder_position_ids: Optional[jax.Array] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, train: bool = False, params: dict = None, dropout_rng: PRNGKey = None)` → `transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput` or `tuple(jnp.ndarray)`

Parameters:

- **pixel_values** (`jnp.ndarray` of shape `(batch_size, num_channels, height, width)`) — Pixel values. Pixel values can be obtained using the vision model's image processor, for example `AutoImageProcessor`. See `ViTImageProcessor.__call__()` for details.
- **decoder_input_ids** (`jnp.ndarray` of shape `(batch_size, target_sequence_length)`, *optional*) — Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using `PreTrainedTokenizer`. See `PreTrainedTokenizer.encode()` and `PreTrainedTokenizer.__call__()` for details. [What are decoder input IDs?](../glossary#decoder-input-ids)
- **decoder_attention_mask** (`jnp.ndarray` of shape `(batch_size, target_sequence_length)`, *optional*) — Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default.
- **decoder_position_ids** (`jnp.ndarray` of shape `(batch_size, sequence_length)`, *optional*) — Indices of positions of each decoder input sequence token in the position embeddings. Selected in the range `[0, config.decoder.max_position_embeddings - 1]`.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — If set to `True`, the model will return a `~utils.FlaxSeq2SeqLMOutput` instead of a plain tuple.

Returns: `transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput` or `tuple(jnp.ndarray)`

A `FlaxSeq2SeqLMOutput` or a tuple of `jnp.ndarray` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration (`VisionEncoderDecoderConfig`) and inputs.

- **logits** (`jnp.ndarray` of shape `(batch_size, sequence_length, config.vocab_size)`) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
- **past_key_values** (`tuple(tuple(jnp.ndarray))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`) — Tuple of `tuple(jnp.ndarray)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)` and 2 additional tensors of shape `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`. Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
- **decoder_hidden_states** (`tuple(jnp.ndarray)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `jnp.ndarray` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
- **decoder_attentions** (`tuple(jnp.ndarray)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `jnp.ndarray` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
- **cross_attentions** (`tuple(jnp.ndarray)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `jnp.ndarray` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the decoder's cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
- **encoder_last_hidden_state** (`jnp.ndarray` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
- **encoder_hidden_states** (`tuple(jnp.ndarray)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `jnp.ndarray` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
- **encoder_attentions** (`tuple(jnp.ndarray)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `jnp.ndarray` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.

The FlaxVisionEncoderDecoderModel forward method overrides the `__call__` special method.

> Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.FlaxVisionEncoderDecoderModel.__call__.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxVisionEncoderDecoderModel.__call__.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-kvfsh7">Examples:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> FlaxVisionEncoderDecoderModel, AutoImageProcessor, AutoTokenizer <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> PIL <span class="hljs-keyword">import</span> Image <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> requests <span class="hljs-meta">&gt;&gt;&gt; </span>url = <span class="hljs-string">"http://images.cocodataset.org/val2017/000000039769.jpg"</span> <span class="hljs-meta">&gt;&gt;&gt; </span>image = Image.<span class="hljs-built_in">open</span>(requests.get(url, stream=<span class="hljs-literal">True</span>).raw) <span class="hljs-meta">&gt;&gt;&gt; </span>image_processor = AutoImageProcessor.from_pretrained(<span class="hljs-string">"google/vit-base-patch16-224-in21k"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># load output tokenizer</span> <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer_output = AutoTokenizer.from_pretrained(<span class="hljs-string">"gpt2"</span>) <span 
class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># initialize a vit-gpt2 from pretrained ViT and GPT2 models. Note that the cross-attention layers will be randomly initialized</span> <span class="hljs-meta">&gt;&gt;&gt; </span>model = FlaxVisionEncoderDecoderModel.from_encoder_decoder_pretrained( <span class="hljs-meta">... </span> <span class="hljs-string">"google/vit-base-patch16-224-in21k"</span>, <span class="hljs-string">"gpt2"</span> <span class="hljs-meta">... </span>) <span class="hljs-meta">&gt;&gt;&gt; </span>pixel_values = image_processor(images=image, return_tensors=<span class="hljs-string">"np"</span>).pixel_values <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># use GPT2's eos_token as the pad as well as eos token</span> <span class="hljs-meta">&gt;&gt;&gt; </span>model.config.eos_token_id = model.config.decoder.eos_token_id <span class="hljs-meta">&gt;&gt;&gt; </span>model.config.pad_token_id = model.config.eos_token_id <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># generation</span> <span class="hljs-meta">&gt;&gt;&gt; </span>sequences = model.generate(pixel_values, num_beams=<span class="hljs-number">4</span>, max_length=<span class="hljs-number">12</span>).sequences <span class="hljs-meta">&gt;&gt;&gt; </span>captions = tokenizer_output.batch_decode(sequences, skip_special_tokens=<span class="hljs-literal">True</span>)</pre></div></div></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.FlaxVisionEncoderDecoderModel.from_encoder_decoder_pretrained"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>from_encoder_decoder_pretrained</span></h4> <a id="transformers.FlaxVisionEncoderDecoderModel.from_encoder_decoder_pretrained" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.FlaxVisionEncoderDecoderModel.from_encoder_decoder_pretrained"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 
256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vision_encoder_decoder/modeling_flax_vision_encoder_decoder.py#L723" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_pretrained_model_name_or_path<span class="opacity-60">: typing.Union[str, os.PathLike, NoneType] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_pretrained_model_name_or_path<span class="opacity-60">: typing.Union[str, os.PathLike, NoneType] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">*model_args<span class="opacity-60"></span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 4 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxVisionEncoderDecoderModel.from_encoder_decoder_pretrained.encoder_pretrained_model_name_or_path" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxVisionEncoderDecoderModel.from_encoder_decoder_pretrained.encoder_pretrained_model_name_or_path"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 
0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>encoder_pretrained_model_name_or_path</strong> (<code>Union[str, os.PathLike]</code>, <em>optional</em>) — Information necessary to initiate the encoder. Can be either:<p></p> <ul> <li>A string, the <em>model id</em> of a pretrained model hosted inside a model repo on huggingface.co. An example is <code>google/vit-base-patch16-224-in21k</code>.</li> <li>A path to a <em>directory</em> containing model weights saved using <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.save_pretrained">save_pretrained()</a>, e.g., <code>./my_model_directory/</code>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxVisionEncoderDecoderModel.from_encoder_decoder_pretrained.decoder_pretrained_model_name_or_path" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxVisionEncoderDecoderModel.from_encoder_decoder_pretrained.decoder_pretrained_model_name_or_path"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>decoder_pretrained_model_name_or_path</strong> (<code>Union[str, os.PathLike]</code>, <em>optional</em>, defaults to <code>None</code>) — Information necessary to initiate the decoder. Can be either:<p></p> <ul> <li>A string, the <em>model id</em> of a pretrained model hosted inside a model repo on huggingface.co. 
Valid model ids can be located at the root-level, like <code>bert-base-uncased</code>, or namespaced under a user or organization name, like <code>dbmdz/bert-base-german-cased</code>.</li> <li>A path to a <em>directory</em> containing model weights saved using <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.save_pretrained">save_pretrained()</a>, e.g., <code>./my_model_directory/</code>.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxVisionEncoderDecoderModel.from_encoder_decoder_pretrained.model_args" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxVisionEncoderDecoderModel.from_encoder_decoder_pretrained.model_args"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>model_args</strong> (remaining positional arguments, <em>optional</em>) — All remaning positional arguments will be passed to the underlying model’s <code>__init__</code> method.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxVisionEncoderDecoderModel.from_encoder_decoder_pretrained.kwargs" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxVisionEncoderDecoderModel.from_encoder_decoder_pretrained.kwargs"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>kwargs</strong> (remaining dictionary of keyword arguments, <em>optional</em>) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., <code>output_attentions=True</code>).<p></p> <ul> <li>To update the encoder configuration, use the prefix <em>encoder_</em> for each configuration parameter.</li> <li>To update the decoder configuration, use the prefix <em>decoder_</em> 
for each configuration parameter.</li> <li>To update the parent model configuration, do not use a prefix for each configuration parameter.</li> </ul> <p>Behaves differently depending on whether a <code>config</code> is provided or automatically loaded.</p></span></span> </li></ul> </div></div> <p data-svelte-h="svelte-n4p3zm">Instantiate an encoder and a decoder from one or two base classes of the library from pretrained model checkpoints.</p> <div class="relative group rounded-md"><a id="transformers.FlaxVisionEncoderDecoderModel.from_encoder_decoder_pretrained.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxVisionEncoderDecoderModel.from_encoder_decoder_pretrained.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> FlaxVisionEncoderDecoderModel <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># initialize a vit-gpt2 from a pretrained ViT and a pretrained GPT2 model. Note that the cross-attention layers will be randomly initialized</span> <span class="hljs-meta">&gt;&gt;&gt; </span>model = FlaxVisionEncoderDecoderModel.from_encoder_decoder_pretrained( <span class="hljs-meta">... </span> <span class="hljs-string">"google/vit-base-patch16-224-in21k"</span>, <span class="hljs-string">"gpt2"</span> <span class="hljs-meta">... 
</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># saving model after fine-tuning</span> <span class="hljs-meta">&gt;&gt;&gt; </span>model.save_pretrained(<span class="hljs-string">"./vit-gpt2"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># load fine-tuned model</span> <span class="hljs-meta">&gt;&gt;&gt; </span>model = FlaxVisionEncoderDecoderModel.from_pretrained(<span class="hljs-string">"./vit-gpt2"</span>)</pre></div></div></div></div> <p></p> <div id="svelte-announcer" aria-live="assertive" aria-atomic="true" style="position: absolute; left: 0px; top: 0px; clip: rect(0px, 0px, 0px, 0px); clip-path: inset(50%); overflow: hidden; white-space: nowrap; width: 1px; height: 1px;"></div></div> <div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/vilt" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>ViLT</a> <a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">Vision Text Dual Encoder<span class="ml-2 translate-y-px">→</span></a></div></div></div> <div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" data-props="{&quot;chapter&quot;:{&quot;title&quot;:&quot;Vision Encoder Decoder Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;vision-encoder-decoder-models&quot;,&quot;url&quot;:&quot;#vision-encoder-decoder-models&quot;,&quot;sections&quot;:[{&quot;title&quot;:&quot;Overview&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;overview&quot;,&quot;url&quot;:&quot;#overview&quot;},{&quot;title&quot;:&quot;Randomly initializing `VisionEncoderDecoderModel` from model configurations.&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;randomly-initializing-visionencoderdecodermodel-from-model-configurations&quot;,&quot;url&quot;:&quot;#randomly-initializing-visionencoderdecodermodel-from-model-configurations&quot;},{&quot;title&quot;:&quot;Initialising `VisionEncoderDecoderModel` from a pretrained encoder and a pretrained decoder.&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;initialising-visionencoderdecodermodel-from-a-pretrained-encoder-and-a-pretrained-decoder&quot;,&quot;url&quot;:&quot;#initialising-visionencoderdecodermodel-from-a-pretrained-encoder-and-a-pretrained-decoder&quot;},{&quot;title&quot;:&quot;Loading an existing `VisionEncoderDecoderModel` checkpoint and perform inference.&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;loading-an-existing-visionencoderdecodermodel-checkpoint-and-perform-inference&quot;,&quot;url&quot;:&quot;#loading-an-existing-visionencoderdecodermodel-checkpoint-and-perform-inference&quot;},{&quot;title&quot;:&quot;Loading a PyTorch checkpoint into 
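The `encoder_`/`decoder_` prefix convention described in the `kwargs` parameter above can be used to override individual sub-model configuration values at load time. The following is a minimal sketch of that convention; the particular override values are illustrative only, not recommendations:

```python
>>> from transformers import FlaxVisionEncoderDecoderModel

>>> # kwargs prefixed with encoder_ / decoder_ are routed to the corresponding sub-model,
>>> # so hidden_dropout_prob is applied to the ViT encoder and use_cache to the GPT2 decoder
>>> model = FlaxVisionEncoderDecoderModel.from_encoder_decoder_pretrained(
...     "google/vit-base-patch16-224-in21k",
...     "gpt2",
...     encoder_hidden_dropout_prob=0.2,
...     decoder_use_cache=False,
... )
```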
2023-10-05T13:33:50.058Z
RetriBERT
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/retribert#transformers.RetriBertConfig
# RetriBERT

This model is in maintenance mode only, so we won’t accept any new PRs changing its code. If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0. You can do so by running the following command: `pip install -U transformers==4.30.0`.

## Overview

The RetriBERT model was proposed in the blog post [Explain Anything Like I’m Five: A Model for Open Domain Long Form Question Answering](https://yjernite.github.io/lfqa.html). RetriBERT is a small model that uses either a single BERT encoder or a pair of BERT encoders with a lower-dimension projection for dense semantic indexing of text.

This model was contributed by [yjernite](https://huggingface.co/yjernite). Code to train and use the model can be found [here](https://github.com/huggingface/transformers/tree/main/examples/research-projects/distillation).

## RetriBertConfig

### class transformers.RetriBertConfig

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/deprecated/retribert/configuration_retribert.py#L31)

( vocab\_size = 30522, hidden\_size = 768, num\_hidden\_layers = 8, num\_attention\_heads = 12, intermediate\_size = 3072, hidden\_act = 'gelu', hidden\_dropout\_prob = 0.1, attention\_probs\_dropout\_prob = 0.1, max\_position\_embeddings = 512, type\_vocab\_size = 2, initializer\_range = 0.02, layer\_norm\_eps = 1e-12, share\_encoders = True, projection\_dim = 128, pad\_token\_id = 0, \*\*kwargs )

This is the configuration class to store the configuration of a [RetriBertModel](/docs/transformers/v4.34.0/en/model_doc/retribert#transformers.RetriBertModel). It is used to instantiate a RetriBertModel model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the RetriBERT [yjernite/retribert-base-uncased](https://huggingface.co/yjernite/retribert-base-uncased) architecture.

Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information.

## RetriBertTokenizer

### class transformers.RetriBertTokenizer

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/deprecated/retribert/tokenization_retribert.py#L70)

( vocab\_file, do\_lower\_case = True, do\_basic\_tokenize = True, never\_split = None, unk\_token = '[UNK]', sep\_token = '[SEP]', pad\_token = '[PAD]', cls\_token = '[CLS]', mask\_token = '[MASK]', tokenize\_chinese\_chars = True, strip\_accents = None, \*\*kwargs )

Constructs a RetriBERT tokenizer. [RetriBertTokenizer](/docs/transformers/v4.34.0/en/model_doc/retribert#transformers.RetriBertTokenizer) is identical to [BertTokenizer](/docs/transformers/v4.34.0/en/model_doc/bert#transformers.BertTokenizer) and runs end-to-end tokenization: punctuation splitting and wordpiece.

This tokenizer inherits from [PreTrainedTokenizer](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer) which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
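Because the tokenizer behaves exactly like BERT’s, a sentence pair is encoded with the usual `[CLS]`/`[SEP]` layout and segment ids. A minimal sketch, assuming the `yjernite/retribert-base-uncased` checkpoint ships its vocabulary; the example strings are illustrative only:

```python
>>> from transformers import RetriBertTokenizer

>>> tokenizer = RetriBertTokenizer.from_pretrained("yjernite/retribert-base-uncased")

>>> # encode a (query, passage) pair; special tokens and segment ids follow the BERT layout
>>> encoding = tokenizer("why is the sky blue?", "Rayleigh scattering favours shorter wavelengths.")

>>> encoding["input_ids"]       # [CLS] query tokens [SEP] passage tokens [SEP]
>>> encoding["token_type_ids"]  # 0s over the first segment, 1s over the second
```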
#### build\_inputs\_with\_special\_tokens

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/deprecated/retribert/tokenization_retribert.py#L214)

( token\_ids\_0: typing.List[int], token\_ids\_1: typing.Optional[typing.List[int]] = None ) → `List[int]`

Parameters

- **token_ids_0** (`List[int]`) — List of IDs to which the special tokens will be added.
- **token_ids_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs.

Returns `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens.

Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. A BERT sequence has the following format:

- single sequence: `[CLS] X [SEP]`
- pair of sequences: `[CLS] A [SEP] B [SEP]`

#### convert\_tokens\_to\_string

Converts a sequence of tokens (string) in a single string.

#### create\_token\_type\_ids\_from\_sequences

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/deprecated/retribert/tokenization_retribert.py#L269)

( token\_ids\_0: typing.List[int], token\_ids\_1: typing.Optional[typing.List[int]] = None ) → `List[int]`

Parameters

- **token_ids_0** (`List[int]`) — List of IDs.
- **token_ids_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs.

Returns `List[int]`: List of [token type IDs](../glossary#token-type-ids) according to the given sequence(s).

Create a mask from the two sequences passed to be used in a sequence-pair classification task. A BERT sequence pair mask has the following format:

```
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence    | second sequence |
```

If `token_ids_1` is `None`, this method only returns the first portion of the mask (0s).

#### get\_special\_tokens\_mask

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/deprecated/retribert/tokenization_retribert.py#L240)

( token\_ids\_0: typing.List[int], token\_ids\_1: typing.Optional[typing.List[int]] = None, already\_has\_special\_tokens: bool = False ) → `List[int]`

Parameters

- **token_ids_0** (`List[int]`) — List of IDs.
- **token_ids_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs.
- **already_has_special_tokens** (`bool`, _optional_, defaults to `False`) — Whether or not the token list is already formatted with special tokens for the model.

Returns `List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.

Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer `prepare_for_model` method.

## RetriBertTokenizerFast

### class transformers.RetriBertTokenizerFast

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/deprecated/retribert/tokenization_retribert_fast.py#L54)

( vocab\_file = None, tokenizer\_file = None, do\_lower\_case = True, unk\_token = '[UNK]', sep\_token = '[SEP]', pad\_token = '[PAD]', cls\_token = '[CLS]', mask\_token = '[MASK]', tokenize\_chinese\_chars = True, strip\_accents = None, \*\*kwargs )

Construct a “fast” RetriBERT tokenizer (backed by HuggingFace’s _tokenizers_ library). [RetriBertTokenizerFast](/docs/transformers/v4.34.0/en/model_doc/retribert#transformers.RetriBertTokenizerFast) is identical to [BertTokenizerFast](/docs/transformers/v4.34.0/en/model_doc/bert#transformers.BertTokenizerFast) and runs end-to-end tokenization: punctuation splitting and wordpiece.
This tokenizer inherits from [PreTrainedTokenizerFast](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast) which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.

#### build\_inputs\_with\_special\_tokens

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/deprecated/retribert/tokenization_retribert_fast.py#L148)

( token\_ids\_0, token\_ids\_1 = None ) → `List[int]`

Parameters

- **token_ids_0** (`List[int]`) — List of IDs to which the special tokens will be added.
- **token_ids_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs.

Returns `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens.

Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. A BERT sequence has the following format:

- single sequence: `[CLS] X [SEP]`
- pair of sequences: `[CLS] A [SEP] B [SEP]`

#### create\_token\_type\_ids\_from\_sequences

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/deprecated/retribert/tokenization_retribert_fast.py#L173)

( token\_ids\_0: typing.List[int], token\_ids\_1: typing.Optional[typing.List[int]] = None ) → `List[int]`

Parameters

- **token_ids_0** (`List[int]`) — List of IDs.
- **token_ids_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs.

Returns `List[int]`: List of [token type IDs](../glossary#token-type-ids) according to the given sequence(s).

Create a mask from the two sequences passed to be used in a sequence-pair classification task. A BERT sequence pair mask has the following format:

```
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence    | second sequence |
```

If `token_ids_1` is `None`, this method only returns the first portion of the mask (0s).

## RetriBertModel

### class transformers.RetriBertModel

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/deprecated/retribert/modeling_retribert.py#L88)

( config: RetriBertConfig )

Parameters

- **config** ([RetriBertConfig](/docs/transformers/v4.34.0/en/model_doc/retribert#transformers.RetriBertConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

BERT-based model to embed queries or documents for document retrieval.

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
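To make the two loading paths mentioned in the `config` parameter concrete, here is a minimal sketch: building the model from a configuration gives a randomly initialized network, while `from_pretrained()` restores the trained checkpoint (the `yjernite/retribert-base-uncased` identifier is the one referenced in the configuration section above).

```python
>>> from transformers import RetriBertConfig, RetriBertModel

>>> # a configuration alone only defines the architecture; the resulting model has random weights
>>> config = RetriBertConfig(projection_dim=128, share_encoders=True)
>>> model = RetriBertModel(config)

>>> # loading the pretrained checkpoint restores both the configuration and the weights
>>> model = RetriBertModel.from_pretrained("yjernite/retribert-base-uncased")
```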
#### forward

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/deprecated/retribert/modeling_retribert.py#L176)

( input\_ids\_query: LongTensor, attention\_mask\_query: typing.Optional[torch.FloatTensor], input\_ids\_doc: LongTensor, attention\_mask\_doc: typing.Optional[torch.FloatTensor], checkpoint\_batch\_size: int = -1 ) → `torch.FloatTensor`
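A minimal sketch of a forward pass following the signature above: the query batch and the document batch are tokenized separately, and the call returns a single `torch.FloatTensor` (presumably the retrieval training loss over the paired inputs). The example strings are illustrative only.

```python
>>> from transformers import RetriBertModel, RetriBertTokenizerFast

>>> tokenizer = RetriBertTokenizerFast.from_pretrained("yjernite/retribert-base-uncased")
>>> model = RetriBertModel.from_pretrained("yjernite/retribert-base-uncased")

>>> queries = ["why is the sky blue?"]
>>> documents = ["The sky looks blue because shorter wavelengths of sunlight are scattered more strongly."]

>>> # queries and documents are encoded as two separate batches
>>> query_inputs = tokenizer(queries, padding=True, return_tensors="pt")
>>> doc_inputs = tokenizer(documents, padding=True, return_tensors="pt")

>>> loss = model(
...     input_ids_query=query_inputs.input_ids,
...     attention_mask_query=query_inputs.attention_mask,
...     input_ids_doc=doc_inputs.input_ids,
...     attention_mask_doc=doc_inputs.attention_mask,
... )
```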
speech&quot;,&quot;id&quot;:&quot;tasks/text-to-speech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/text-to-speech&quot;}]},{&quot;title&quot;:&quot;Generation&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Customize the generation strategy&quot;,&quot;id&quot;:&quot;generation_strategies&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/generation_strategies&quot;}]},{&quot;title&quot;:&quot;Prompting&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Image tasks with IDEFICS&quot;,&quot;id&quot;:&quot;tasks/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/idefics&quot;}]}]},{&quot;title&quot;:&quot;Developer guides&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Use fast tokenizers from 🤗 Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;fast_tokenizers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/fast_tokenizers&quot;},{&quot;title&quot;:&quot;Run inference with multilingual models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;multilingual&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/multilingual&quot;},{&quot;title&quot;:&quot;Use model-specific APIs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;create_a_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/create_a_model&quot;},{&quot;title&quot;:&quot;Share a custom model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;custom_models&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/custom_models&quot;},{&quot;title&quot;:&quot;Templates for chat models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;chat_templating&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/chat_templating&quot;},{&quot;title&quot;:&quot;Run training on Amazon SageMaker&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;sagemaker&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/sagemaker&quot;},{&quot;title&quot;:&quot;Export to ONNX&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;serialization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/serialization&quot;},{&quot;title&quot;:&quot;Export to TFLite&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tflite&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tflite&quot;},{&quot;title&quot;:&quot;Export to TorchScript&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;torchscript&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/torchscript&quot;},{&quot;title&quot;:&quot;Benchmarks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;benchmarks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/benchmarks&quot;},{&quot;title&quot;:&quot;Notebooks with examples&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;notebooks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/notebooks&quot;},{&quot;title&quot;:&quot;Community resources&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;community&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/community&quot;},{&quot;title&quot;:&quot;Custom Tools and Prompts&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;custom_tools&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/custom_tools&quot;},{&quot;title&quot;:&quot;Troubleshoot&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;troubleshooting&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/troubleshooting&quot;}]},{&quot;title&quot;:&quot;Performance and 
scalability&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Overview&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;performance&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/performance&quot;},{&quot;title&quot;:&quot;Efficient training techniques&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Methods and tools for efficient training on a single GPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_gpu_one&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_gpu_one&quot;},{&quot;title&quot;:&quot;Multiple GPUs and parallelism&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_gpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_gpu_many&quot;},{&quot;title&quot;:&quot;Efficient training on CPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_cpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_cpu&quot;},{&quot;title&quot;:&quot;Distributed CPU training&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_cpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_cpu_many&quot;},{&quot;title&quot;:&quot;Training on TPUs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_tpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_tpu&quot;},{&quot;title&quot;:&quot;Training on TPU with TensorFlow&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_tpu_tf&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_tpu_tf&quot;},{&quot;title&quot;:&quot;Training on Specialized Hardware&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_special&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_special&quot;},{&quot;title&quot;:&quot;Custom hardware for training&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_hardware&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_hardware&quot;},{&quot;title&quot;:&quot;Hyperparameter Search using Trainer API&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;hpo_train&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/hpo_train&quot;}]},{&quot;title&quot;:&quot;Optimizing inference&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Inference on CPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_cpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_cpu&quot;},{&quot;title&quot;:&quot;Inference on one GPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_gpu_one&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_gpu_one&quot;},{&quot;title&quot;:&quot;Inference on many GPUs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_gpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_gpu_many&quot;},{&quot;title&quot;:&quot;Inference on Specialized Hardware&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_special&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_special&quot;}]},{&quot;title&quot;:&quot;Instantiating a big 
model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;big_models&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/big_models&quot;},{&quot;title&quot;:&quot;Troubleshooting&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;debugging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/debugging&quot;},{&quot;title&quot;:&quot;XLA Integration for TensorFlow Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tf_xla&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tf_xla&quot;},{&quot;title&quot;:&quot;Optimize inference using `torch.compile()`&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_torch_compile&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_torch_compile&quot;}]},{&quot;title&quot;:&quot;Contribute&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;How to contribute to transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;contributing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/contributing&quot;},{&quot;title&quot;:&quot;How to add a model to 🤗 Transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_new_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_new_model&quot;},{&quot;title&quot;:&quot;How to convert a 🤗 Transformers model to TensorFlow?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_tensorflow_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_tensorflow_model&quot;},{&quot;title&quot;:&quot;How to add a pipeline to 🤗 Transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_new_pipeline&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_new_pipeline&quot;},{&quot;title&quot;:&quot;Testing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;testing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/testing&quot;},{&quot;title&quot;:&quot;Checks on a Pull Request&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pr_checks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pr_checks&quot;}]},{&quot;title&quot;:&quot;Conceptual guides&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Philosophy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;philosophy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/philosophy&quot;},{&quot;title&quot;:&quot;Glossary&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;glossary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/glossary&quot;},{&quot;title&quot;:&quot;What 🤗 Transformers can do&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;task_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/task_summary&quot;},{&quot;title&quot;:&quot;How 🤗 Transformers solve tasks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tasks_explained&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks_explained&quot;},{&quot;title&quot;:&quot;The Transformer model family&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_summary&quot;},{&quot;title&quot;:&quot;Summary of the tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tokenizer_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tokenizer_summary&quot;},{&quot;title&quot;:&quot;Attention 
mechanisms&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;attention&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/attention&quot;},{&quot;title&quot;:&quot;Padding and truncation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pad_truncation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pad_truncation&quot;},{&quot;title&quot;:&quot;BERTology&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;bertology&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/bertology&quot;},{&quot;title&quot;:&quot;Perplexity of fixed-length models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perplexity&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perplexity&quot;},{&quot;title&quot;:&quot;Pipelines for webserver inference&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pipeline_webserver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pipeline_webserver&quot;},{&quot;title&quot;:&quot;Model training anatomy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_memory_anatomy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_memory_anatomy&quot;}]},{&quot;title&quot;:&quot;API&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Main Classes&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Agents and Tools&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/agent&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/agent&quot;},{&quot;title&quot;:&quot;Auto Classes&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/auto&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/auto&quot;},{&quot;title&quot;:&quot;Callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/callback&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/callback&quot;},{&quot;title&quot;:&quot;Configuration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/configuration&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/configuration&quot;},{&quot;title&quot;:&quot;Data Collator&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/data_collator&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/data_collator&quot;},{&quot;title&quot;:&quot;Keras callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/keras_callbacks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/keras_callbacks&quot;},{&quot;title&quot;:&quot;Logging&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/logging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/logging&quot;},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/model&quot;},{&quot;title&quot;:&quot;Text 
Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/text_generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/text_generation&quot;},{&quot;title&quot;:&quot;ONNX&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/onnx&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/onnx&quot;},{&quot;title&quot;:&quot;Optimization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/optimizer_schedules&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/optimizer_schedules&quot;},{&quot;title&quot;:&quot;Model outputs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/output&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/output&quot;},{&quot;title&quot;:&quot;Pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/pipelines&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/pipelines&quot;},{&quot;title&quot;:&quot;Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/processors&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/processors&quot;},{&quot;title&quot;:&quot;Quantization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/quantization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/quantization&quot;},{&quot;title&quot;:&quot;Tokenizer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/tokenizer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/tokenizer&quot;},{&quot;title&quot;:&quot;Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/trainer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/trainer&quot;},{&quot;title&quot;:&quot;DeepSpeed Integration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/deepspeed&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/deepspeed&quot;},{&quot;title&quot;:&quot;Feature Extractor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/feature_extractor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/feature_extractor&quot;},{&quot;title&quot;:&quot;Image Processor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/image_processor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/image_processor&quot;}]},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Text 
models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALBERT&quot;,&quot;id&quot;:&quot;model_doc/albert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/albert&quot;},{&quot;title&quot;:&quot;BART&quot;,&quot;id&quot;:&quot;model_doc/bart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bart&quot;},{&quot;title&quot;:&quot;BARThez&quot;,&quot;id&quot;:&quot;model_doc/barthez&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/barthez&quot;},{&quot;title&quot;:&quot;BARTpho&quot;,&quot;id&quot;:&quot;model_doc/bartpho&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bartpho&quot;},{&quot;title&quot;:&quot;BERT&quot;,&quot;id&quot;:&quot;model_doc/bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert&quot;},{&quot;title&quot;:&quot;BertGeneration&quot;,&quot;id&quot;:&quot;model_doc/bert-generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-generation&quot;},{&quot;title&quot;:&quot;BertJapanese&quot;,&quot;id&quot;:&quot;model_doc/bert-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-japanese&quot;},{&quot;title&quot;:&quot;Bertweet&quot;,&quot;id&quot;:&quot;model_doc/bertweet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bertweet&quot;},{&quot;title&quot;:&quot;BigBird&quot;,&quot;id&quot;:&quot;model_doc/big_bird&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/big_bird&quot;},{&quot;title&quot;:&quot;BigBirdPegasus&quot;,&quot;id&quot;:&quot;model_doc/bigbird_pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bigbird_pegasus&quot;},{&quot;title&quot;:&quot;BioGpt&quot;,&quot;id&quot;:&quot;model_doc/biogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/biogpt&quot;},{&quot;title&quot;:&quot;Blenderbot&quot;,&quot;id&quot;:&quot;model_doc/blenderbot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot&quot;},{&quot;title&quot;:&quot;Blenderbot 
Small&quot;,&quot;id&quot;:&quot;model_doc/blenderbot-small&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot-small&quot;},{&quot;title&quot;:&quot;BLOOM&quot;,&quot;id&quot;:&quot;model_doc/bloom&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bloom&quot;},{&quot;title&quot;:&quot;BORT&quot;,&quot;id&quot;:&quot;model_doc/bort&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bort&quot;},{&quot;title&quot;:&quot;ByT5&quot;,&quot;id&quot;:&quot;model_doc/byt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/byt5&quot;},{&quot;title&quot;:&quot;CamemBERT&quot;,&quot;id&quot;:&quot;model_doc/camembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/camembert&quot;},{&quot;title&quot;:&quot;CANINE&quot;,&quot;id&quot;:&quot;model_doc/canine&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/canine&quot;},{&quot;title&quot;:&quot;CodeGen&quot;,&quot;id&quot;:&quot;model_doc/codegen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/codegen&quot;},{&quot;title&quot;:&quot;CodeLlama&quot;,&quot;id&quot;:&quot;model_doc/code_llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/code_llama&quot;},{&quot;title&quot;:&quot;ConvBERT&quot;,&quot;id&quot;:&quot;model_doc/convbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convbert&quot;},{&quot;title&quot;:&quot;CPM&quot;,&quot;id&quot;:&quot;model_doc/cpm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpm&quot;},{&quot;title&quot;:&quot;CPMANT&quot;,&quot;id&quot;:&quot;model_doc/cpmant&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpmant&quot;},{&quot;title&quot;:&quot;CTRL&quot;,&quot;id&quot;:&quot;model_doc/ctrl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ctrl&quot;},{&quot;title&quot;:&quot;DeBERTa&quot;,&quot;id&quot;:&quot;model_doc/deberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta&quot;},{&quot;title&quot;:&quot;DeBERTa-v2&quot;,&quot;id&quot;:&quot;model_doc/deberta-v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta-v2&quot;},{&quot;title&quot;:&quot;DialoGPT&quot;,&quot;id&quot;:&quot;model_doc/dialogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dialogpt&quot;},{&quot;title&quot;:&quot;DistilBERT&quot;,&quot;id&quot;:&quot;model_doc/distilbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/distilbert&quot;},{&quot;title&quot;:&quot;DPR&quot;,&quot;id&quot;:&quot;model_doc/dpr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpr&quot;},{&quot;title&quot;:&quot;ELECTRA&quot;,&quot;id&quot;:&quot;model_doc/electra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/electra&quot;},{&quot;title&quot;:&quot;Encoder Decoder 
Models&quot;,&quot;id&quot;:&quot;model_doc/encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encoder-decoder&quot;},{&quot;title&quot;:&quot;ERNIE&quot;,&quot;id&quot;:&quot;model_doc/ernie&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie&quot;},{&quot;title&quot;:&quot;ErnieM&quot;,&quot;id&quot;:&quot;model_doc/ernie_m&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie_m&quot;},{&quot;title&quot;:&quot;ESM&quot;,&quot;id&quot;:&quot;model_doc/esm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/esm&quot;},{&quot;title&quot;:&quot;Falcon&quot;,&quot;id&quot;:&quot;model_doc/falcon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/falcon&quot;},{&quot;title&quot;:&quot;FLAN-T5&quot;,&quot;id&quot;:&quot;model_doc/flan-t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-t5&quot;},{&quot;title&quot;:&quot;FLAN-UL2&quot;,&quot;id&quot;:&quot;model_doc/flan-ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-ul2&quot;},{&quot;title&quot;:&quot;FlauBERT&quot;,&quot;id&quot;:&quot;model_doc/flaubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flaubert&quot;},{&quot;title&quot;:&quot;FNet&quot;,&quot;id&quot;:&quot;model_doc/fnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fnet&quot;},{&quot;title&quot;:&quot;FSMT&quot;,&quot;id&quot;:&quot;model_doc/fsmt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fsmt&quot;},{&quot;title&quot;:&quot;Funnel Transformer&quot;,&quot;id&quot;:&quot;model_doc/funnel&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/funnel&quot;},{&quot;title&quot;:&quot;GPT&quot;,&quot;id&quot;:&quot;model_doc/openai-gpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/openai-gpt&quot;},{&quot;title&quot;:&quot;GPT Neo&quot;,&quot;id&quot;:&quot;model_doc/gpt_neo&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neo&quot;},{&quot;title&quot;:&quot;GPT NeoX&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox&quot;},{&quot;title&quot;:&quot;GPT NeoX Japanese&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox_japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese&quot;},{&quot;title&quot;:&quot;GPT-J&quot;,&quot;id&quot;:&quot;model_doc/gptj&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptj&quot;},{&quot;title&quot;:&quot;GPT2&quot;,&quot;id&quot;:&quot;model_doc/gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt2&quot;},{&quot;title&quot;:&quot;GPTBigCode&quot;,&quot;id&quot;:&quot;model_doc/gpt_bigcode&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode&quot;},{&quot;title&quot;:&quot;GPTSAN 
Japanese&quot;,&quot;id&quot;:&quot;model_doc/gptsan-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese&quot;},{&quot;title&quot;:&quot;GPTSw3&quot;,&quot;id&quot;:&quot;model_doc/gpt-sw3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt-sw3&quot;},{&quot;title&quot;:&quot;HerBERT&quot;,&quot;id&quot;:&quot;model_doc/herbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/herbert&quot;},{&quot;title&quot;:&quot;I-BERT&quot;,&quot;id&quot;:&quot;model_doc/ibert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ibert&quot;},{&quot;title&quot;:&quot;Jukebox&quot;,&quot;id&quot;:&quot;model_doc/jukebox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/jukebox&quot;},{&quot;title&quot;:&quot;LED&quot;,&quot;id&quot;:&quot;model_doc/led&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/led&quot;},{&quot;title&quot;:&quot;LLaMA&quot;,&quot;id&quot;:&quot;model_doc/llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama&quot;},{&quot;title&quot;:&quot;Llama2&quot;,&quot;id&quot;:&quot;model_doc/llama2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama2&quot;},{&quot;title&quot;:&quot;Longformer&quot;,&quot;id&quot;:&quot;model_doc/longformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longformer&quot;},{&quot;title&quot;:&quot;LongT5&quot;,&quot;id&quot;:&quot;model_doc/longt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longt5&quot;},{&quot;title&quot;:&quot;LUKE&quot;,&quot;id&quot;:&quot;model_doc/luke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/luke&quot;},{&quot;title&quot;:&quot;M2M100&quot;,&quot;id&quot;:&quot;model_doc/m2m_100&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/m2m_100&quot;},{&quot;title&quot;:&quot;MarianMT&quot;,&quot;id&quot;:&quot;model_doc/marian&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/marian&quot;},{&quot;title&quot;:&quot;MarkupLM&quot;,&quot;id&quot;:&quot;model_doc/markuplm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/markuplm&quot;},{&quot;title&quot;:&quot;MBart and 
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot;PLBart&quot;,&quot;id&quot;
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/
model_doc/wavlm&quot;},{&quot;title&quot;:&quot;Whisper&quot;,&quot;id&quot;:&quot;model_doc/whisper&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/whisper&quot;},{&quot;title&quot;:&quot;XLS-R&quot;,&quot;id&quot;:&quot;model_doc/xls_r&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xls_r&quot;},{&quot;title&quot;:&quot;XLSR-Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/xlsr_wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2&quot;}]},{&quot;title&quot;:&quot;Multimodal models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALIGN&quot;,&quot;id&quot;:&quot;model_doc/align&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/align&quot;},{&quot;title&quot;:&quot;AltCLIP&quot;,&quot;id&quot;:&quot;model_doc/altclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/altclip&quot;},{&quot;title&quot;:&quot;BLIP&quot;,&quot;id&quot;:&quot;model_doc/blip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip&quot;},{&quot;title&quot;:&quot;BLIP-2&quot;,&quot;id&quot;:&quot;model_doc/blip-2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip-2&quot;},{&quot;title&quot;:&quot;BridgeTower&quot;,&quot;id&quot;:&quot;model_doc/bridgetower&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bridgetower&quot;},{&quot;title&quot;:&quot;BROS&quot;,&quot;id&quot;:&quot;model_doc/bros&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bros&quot;},{&quot;title&quot;:&quot;Chinese-CLIP&quot;,&quot;id&quot;:&quot;model_doc/chinese_clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/chinese_clip&quot;},{&quot;title&quot;:&quot;CLIP&quot;,&quot;id&quot;:&quot;model_doc/clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clip&quot;},{&quot;title&quot;:&quot;CLIPSeg&quot;,&quot;id&quot;:&quot;model_doc/clipseg&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clipseg&quot;},{&quot;title&quot;:&quot;Data2Vec&quot;,&quot;id&quot;:&quot;model_doc/data2vec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/data2vec&quot;},{&quot;title&quot;:&quot;DePlot&quot;,&quot;id&quot;:&quot;model_doc/deplot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deplot&quot;},{&quot;title&quot;:&quot;Donut&quot;,&quot;id&quot;:&quot;model_doc/donut&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/donut&quot;},{&quot;title&quot;:&quot;FLAVA&quot;,&quot;id&quot;:&quot;model_doc/flava&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flava&quot;},{&quot;title&quot;:&quot;GIT&quot;,&quot;id&quot;:&quot;model_doc/git&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/git&quot;},{&quot;title&quot;:&quot;GroupViT&quot;,&quot;id&quot;:&quot;model_doc/groupvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/groupvit&quot;},{&quot;title&quot;:&quot;IDEFICS&quot;,&quot;id&quot;:&quot;model_doc/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/idefics&quot;},{&quot;title&quot;:&quot;InstructBLIP&quot;,&quot;id&quot;:&quot;model_doc/instructblip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/instructblip&quot;},{&quot;title&quot;:&quot;LayoutLM&quot;,&quot;id&quot;:&quot;model_doc/layoutlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlm&quot;},{&quot;title&quot;:&quot;LayoutLMV2&quot;,&quot;i
d&quot;:&quot;model_doc/layoutlmv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv2&quot;},{&quot;title&quot;:&quot;LayoutLMV3&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv3&quot;},{&quot;title&quot;:&quot;LayoutXLM&quot;,&quot;id&quot;:&quot;model_doc/layoutxlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutxlm&quot;},{&quot;title&quot;:&quot;LiLT&quot;,&quot;id&quot;:&quot;model_doc/lilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lilt&quot;},{&quot;title&quot;:&quot;LXMERT&quot;,&quot;id&quot;:&quot;model_doc/lxmert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lxmert&quot;},{&quot;title&quot;:&quot;MatCha&quot;,&quot;id&quot;:&quot;model_doc/matcha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/matcha&quot;},{&quot;title&quot;:&quot;MGP-STR&quot;,&quot;id&quot;:&quot;model_doc/mgp-str&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mgp-str&quot;},{&quot;title&quot;:&quot;Nougat&quot;,&quot;id&quot;:&quot;model_doc/nougat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nougat&quot;},{&quot;title&quot;:&quot;OneFormer&quot;,&quot;id&quot;:&quot;model_doc/oneformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/oneformer&quot;},{&quot;title&quot;:&quot;OWL-ViT&quot;,&quot;id&quot;:&quot;model_doc/owlvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/owlvit&quot;},{&quot;title&quot;:&quot;Perceiver&quot;,&quot;id&quot;:&quot;model_doc/perceiver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/perceiver&quot;},{&quot;title&quot;:&quot;Pix2Struct&quot;,&quot;id&quot;:&quot;model_doc/pix2struct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pix2struct&quot;},{&quot;title&quot;:&quot;Segment Anything&quot;,&quot;id&quot;:&quot;model_doc/sam&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sam&quot;},{&quot;title&quot;:&quot;Speech Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/speech-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder&quot;},{&quot;title&quot;:&quot;TAPAS&quot;,&quot;id&quot;:&quot;model_doc/tapas&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapas&quot;},{&quot;title&quot;:&quot;TrOCR&quot;,&quot;id&quot;:&quot;model_doc/trocr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trocr&quot;},{&quot;title&quot;:&quot;TVLT&quot;,&quot;id&quot;:&quot;model_doc/tvlt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tvlt&quot;},{&quot;title&quot;:&quot;ViLT&quot;,&quot;id&quot;:&quot;model_doc/vilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vilt&quot;},{&quot;title&quot;:&quot;Vision Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/vision-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder&quot;},{&quot;title&quot;:&quot;Vision Text Dual 
Encoder&quot;,&quot;id&quot;:&quot;model_doc/vision-text-dual-encoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder&quot;},{&quot;title&quot;:&quot;VisualBERT&quot;,&quot;id&quot;:&quot;model_doc/visual_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/visual_bert&quot;},{&quot;title&quot;:&quot;X-CLIP&quot;,&quot;id&quot;:&quot;model_doc/xclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xclip&quot;}]},{&quot;title&quot;:&quot;Reinforcement learning models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Decision Transformer&quot;,&quot;id&quot;:&quot;model_doc/decision_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/decision_transformer&quot;},{&quot;title&quot;:&quot;Trajectory Transformer&quot;,&quot;id&quot;:&quot;model_doc/trajectory_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer&quot;}]},{&quot;title&quot;:&quot;Time series models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Autoformer&quot;,&quot;id&quot;:&quot;model_doc/autoformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/autoformer&quot;},{&quot;title&quot;:&quot;Informer&quot;,&quot;id&quot;:&quot;model_doc/informer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/informer&quot;},{&quot;title&quot;:&quot;Time Series Transformer&quot;,&quot;id&quot;:&quot;model_doc/time_series_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/time_series_transformer&quot;}]},{&quot;title&quot;:&quot;Graph models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Graphormer&quot;,&quot;id&quot;:&quot;model_doc/graphormer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/graphormer&quot;}]}]},{&quot;title&quot;:&quot;Internal Helpers&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Custom Layers and Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/modeling_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/modeling_utils&quot;},{&quot;title&quot;:&quot;Utilities for pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/pipelines_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/pipelines_utils&quot;},{&quot;title&quot;:&quot;Utilities for Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/tokenization_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/tokenization_utils&quot;},{&quot;title&quot;:&quot;Utilities for Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/trainer_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/trainer_utils&quot;},{&quot;title&quot;:&quot;Utilities for Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/generation_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/generation_utils&quot;},{&quot;title&quot;:&quot;Utilities for Image Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/image_processing_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/image_processing_utils&quot;},{&quot;title&quot;:&quot;Utilities for Audio 
processing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/audio_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/audio_utils&quot;},{&quot;title&quot;:&quot;General Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/file_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/file_utils&quot;},{&quot;title&quot;:&quot;Utilities for Time Series&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/time_series_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/time_series_utils&quot;}]}]}],&quot;chapterId&quot;:&quot;model_doc/retribert&quot;,&quot;docType&quot;:&quot;docs&quot;,&quot;isLoggedIn&quot;:false,&quot;lang&quot;:&quot;en&quot;,&quot;langs&quot;:[&quot;de&quot;,&quot;en&quot;,&quot;es&quot;,&quot;fr&quot;,&quot;it&quot;,&quot;ko&quot;,&quot;pt&quot;,&quot;zh&quot;],&quot;library&quot;:&quot;transformers&quot;,&quot;theme&quot;:&quot;light&quot;,&quot;version&quot;:&quot;v4.34.0&quot;,&quot;versions&quot;:[{&quot;version&quot;:&quot;main&quot;},{&quot;version&quot;:&quot;v4.34.0&quot;},{&quot;version&quot;:&quot;v4.33.3&quot;},{&quot;version&quot;:&quot;v4.33.2&quot;},{&quot;version&quot;:&quot;v4.33.0&quot;},{&quot;version&quot;:&quot;v4.32.1&quot;},{&quot;version&quot;:&quot;v4.32.0&quot;},{&quot;version&quot;:&quot;v4.31.0&quot;},{&quot;version&quot;:&quot;v4.30.0&quot;},{&quot;version&quot;:&quot;v4.29.1&quot;},{&quot;version&quot;:&quot;v4.29.0&quot;},{&quot;version&quot;:&quot;v4.28.1&quot;},{&quot;version&quot;:&quot;v4.28.0&quot;},{&quot;version&quot;:&quot;v4.27.2&quot;},{&quot;version&quot;:&quot;v4.27.1&quot;},{&quot;version&quot;:&quot;v4.27.0&quot;},{&quot;version&quot;:&quot;v4.26.1&quot;},{&quot;version&quot;:&quot;v4.26.0&quot;},{&quot;version&quot;:&quot;v4.25.1&quot;},{&quot;version&quot;:&quot;v4.24.0&quot;},{&quot;version&quot;:&quot;v4.23.1&quot;},{&quot;version&quot;:&quot;v4.23.0&quot;},{&quot;version&quot;:&quot;v4.22.2&quot;},{&quot;version&quot;:&quot;v4.22.1&quot;},{&quot;version&quot;:&quot;v4.22.0&quot;},{&quot;version&quot;:&quot;v4.21.3&quot;},{&quot;version&quot;:&quot;v4.21.2&quot;},{&quot;version&quot;:&quot;v4.21.1&quot;},{&quot;version&quot;:&quot;v4.21.0&quot;},{&quot;version&quot;:&quot;v4.20.1&quot;},{&quot;version&quot;:&quot;v4.20.0&quot;},{&quot;version&quot;:&quot;v4.19.4&quot;},{&quot;version&quot;:&quot;v4.19.3&quot;},{&quot;version&quot;:&quot;v4.19.2&quot;},{&quot;version&quot;:&quot;v4.19.0&quot;},{&quot;version&quot;:&quot;v4.18.0&quot;},{&quot;version&quot;:&quot;v4.17.0&quot;},{&quot;version&quot;:&quot;v4.16.2&quot;},{&quot;version&quot;:&quot;v4.16.1&quot;},{&quot;version&quot;:&quot;v4.16.0&quot;},{&quot;version&quot;:&quot;v4.15.0&quot;},{&quot;version&quot;:&quot;v4.14.1&quot;},{&quot;version&quot;:&quot;v4.13.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.5&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.4&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.10.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;
v4.10.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.10.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.0.0&quot;},{&quot;version&quot;:&quot;d
oc-builder-html&quot;}],&quot;title&quot;:&quot;RetriBERT&quot;}" data-target="SideMenu"> <div class="z-2 w-full flex-none lg:block lg:h-screen lg:w-[270px] 2xl:w-[300px] false"><div class="shadow-alternate flex h-16 w-full items-center rounded-b-xl border-b bg-white text-lg leading-tight lg:hidden"><div class="flex flex-1 cursor-pointer flex-col justify-center self-stretch pl-6"><p class="text-sm text-gray-400 first-letter:capitalize">Transformers documentation</p> <div class="flex items-center"><p class="font-semibold">RetriBERT</p> <svg class="text-xl false" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></div></div> <button class="hover:shadow-alternate group ml-auto mr-6 inline-flex flex-none cursor-pointer rounded-xl border p-2"><svg class="text-gray-500 group-hover:text-gray-700" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg></button></div> <div class="hidden h-32 flex-col justify-between border-r border-b bg-white bg-gradient-to-r p-4 lg:flex from-orange-50 to-white dark:from-gray-900 dark:to-gray-950"><div class="relative "><button class=" " type="button"><h1 class="flex items-center text-lg font-bold leading-tight first-letter:capitalize"><div class="mr-1.5 h-1.5 w-1.5 rounded-full bg-orange-500 flex-none"></div> Transformers <span><svg class="opacity-70 " xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></span></h1> </button> </div> <button class="shadow-alternate flex w-full items-center rounded-full border bg-white px-2 py-1 text-left text-sm text-gray-400 ring-indigo-200 hover:bg-indigo-50 hover:ring-2 dark:border-gray-700 dark:ring-yellow-600 dark:hover:bg-gray-900 dark:hover:text-yellow-500"><svg class="flex-none mr-1.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg> <div>Search documentation</div> <span class="ml-auto rounded border border-gray-200 bg-gray-100 px-0.5 text-xs dark:border-gray-800 dark:bg-gray-800"><kbd class="font-sans">⌘K</kbd></span></button> <div class="flex items-center"><select class="form-input mr-1 !mt-0 !w-20 rounded !border border-gray-200 p-1 text-xs uppercase dark:!text-gray-400"><option value="0">main</option><option value="1">v4.34.0</option><option value="2">v4.33.3</option><option value="3">v4.32.1</option><option value="4">v4.31.0</option><option value="5">v4.30.0</option><option value="6">v4.29.1</option><option value="7">v4.28.1</option><option value="8">v4.27.2</option><option value="9">v4.26.1</option><option value="10">v4.25.1</option><option 
value="11">v4.24.0</option><option value="12">v4.23.1</option><option value="13">v4.22.2</option><option value="14">v4.21.3</option><option value="15">v4.20.1</option><option value="16">v4.19.4</option><option value="17">v4.18.0</option><option value="18">v4.17.0</option><option value="19">v4.16.2</option><option value="20">v4.15.0</option><option value="21">v4.14.1</option><option value="22">v4.13.0</option><option value="23">v4.12.5</option><option value="24">v4.11.3</option><option value="25">v4.10.1</option><option value="26">v4.9.2</option><option value="27">v4.8.2</option><option value="28">v4.7.0</option><option value="29">v4.6.0</option><option value="30">v4.5.1</option><option value="31">v4.4.2</option><option value="32">v4.3.3</option><option value="33">v4.2.2</option><option value="34">v4.1.1</option><option value="35">v4.0.1</option><option value="36">v3.5.1</option><option value="37">v3.4.0</option><option value="38">v3.3.1</option><option value="39">v3.2.0</option><option value="40">v3.1.0</option><option value="41">v3.0.2</option><option value="42">v2.11.0</option><option value="43">v2.10.0</option><option value="44">v2.9.1</option><option value="45">v2.8.0</option><option value="46">v2.7.0</option><option value="47">v2.6.0</option><option value="48">v2.5.1</option><option value="49">v2.4.1</option><option value="50">v2.3.0</option><option value="51">v2.2.2</option><option value="52">v2.1.1</option><option value="53">v2.0.0</option><option value="54">v1.2.0</option><option value="55">v1.1.0</option><option value="56">v1.0.0</option><option value="57">doc-builder-html</option></select> <select class="form-input mr-1 rounded border-gray-200 p-1 text-xs dark:!text-gray-400 !w-12 !mt-0 !border"><option value="de">DE</option><option value="en">EN</option><option value="es">ES</option><option value="fr">FR</option><option value="it">IT</option><option value="ko">KO</option><option value="pt">PT</option><option value="zh">ZH</option></select> <div class="relative inline-block"><button class="rounded-full border border-gray-100 py-1 pl-2 pr-0.5 flex items-center text-sm text-gray-500 bg-white hover:bg-yellow-50 hover:border-yellow-200 dark:hover:bg-gray-800 dark:hover:border-gray-950 " type="button"><svg class="mr-1.5 text-yellow-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24" fill="currentColor"><path d="M6.05 4.14l-.39-.39a.993.993 0 0 0-1.4 0l-.01.01a.984.984 0 0 0 0 1.4l.39.39c.39.39 1.01.39 1.4 0l.01-.01a.984.984 0 0 0 0-1.4zM3.01 10.5H1.99c-.55 0-.99.44-.99.99v.01c0 .55.44.99.99.99H3c.56.01 1-.43 1-.98v-.01c0-.56-.44-1-.99-1zm9-9.95H12c-.56 0-1 .44-1 .99v.96c0 .55.44.99.99.99H12c.56.01 1-.43 1-.98v-.97c0-.55-.44-.99-.99-.99zm7.74 3.21c-.39-.39-1.02-.39-1.41-.01l-.39.39a.984.984 0 0 0 0 1.4l.01.01c.39.39 1.02.39 1.4 0l.39-.39a.984.984 0 0 0 0-1.4zm-1.81 15.1l.39.39a.996.996 0 1 0 1.41-1.41l-.39-.39a.993.993 0 0 0-1.4 0c-.4.4-.4 1.02-.01 1.41zM20 11.49v.01c0 .55.44.99.99.99H22c.55 0 .99-.44.99-.99v-.01c0-.55-.44-.99-.99-.99h-1.01c-.55 0-.99.44-.99.99zM12 5.5c-3.31 0-6 2.69-6 6s2.69 6 6 6s6-2.69 6-6s-2.69-6-6-6zm-.01 16.95H12c.55 0 .99-.44.99-.99v-.96c0-.55-.44-.99-.99-.99h-.01c-.55 0-.99.44-.99.99v.96c0 .55.44.99.99.99zm-7.74-3.21c.39.39 1.02.39 1.41 0l.39-.39a.993.993 0 0 0 0-1.4l-.01-.01a.996.996 0 0 0-1.41 0l-.39.39c-.38.4-.38 1.02.01 1.41z"></path></svg> </button> </div> <a 
href="https://github.com/huggingface/transformers" class="group ml-auto text-xs text-gray-500 hover:text-gray-700 hover:underline dark:hover:text-gray-300"><svg class="inline-block text-gray-500 group-hover:text-gray-700 dark:group-hover:text-gray-300 mr-1.5 -mt-1 w-4 h-4" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1.03em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 250"><path d="M128.001 0C57.317 0 0 57.307 0 128.001c0 56.554 36.676 104.535 87.535 121.46c6.397 1.185 8.746-2.777 8.746-6.158c0-3.052-.12-13.135-.174-23.83c-35.61 7.742-43.124-15.103-43.124-15.103c-5.823-14.795-14.213-18.73-14.213-18.73c-11.613-7.944.876-7.78.876-7.78c12.853.902 19.621 13.19 19.621 13.19c11.417 19.568 29.945 13.911 37.249 10.64c1.149-8.272 4.466-13.92 8.127-17.116c-28.431-3.236-58.318-14.212-58.318-63.258c0-13.975 5-25.394 13.188-34.358c-1.329-3.224-5.71-16.242 1.24-33.874c0 0 10.749-3.44 35.21 13.121c10.21-2.836 21.16-4.258 32.038-4.307c10.878.049 21.837 1.47 32.066 4.307c24.431-16.56 35.165-13.12 35.165-13.12c6.967 17.63 2.584 30.65 1.255 33.873c8.207 8.964 13.173 20.383 13.173 34.358c0 49.163-29.944 59.988-58.447 63.157c4.591 3.972 8.682 11.762 8.682 23.704c0 17.126-.148 30.91-.148 35.126c0 3.407 2.304 7.398 8.792 6.14C219.37 232.5 256 184.537 256 128.002C256 57.307 198.691 0 128.001 0zm-80.06 182.34c-.282.636-1.283.827-2.194.39c-.929-.417-1.45-1.284-1.15-1.922c.276-.655 1.279-.838 2.205-.399c.93.418 1.46 1.293 1.139 1.931zm6.296 5.618c-.61.566-1.804.303-2.614-.591c-.837-.892-.994-2.086-.375-2.66c.63-.566 1.787-.301 2.626.591c.838.903 1 2.088.363 2.66zm4.32 7.188c-.785.545-2.067.034-2.86-1.104c-.784-1.138-.784-2.503.017-3.05c.795-.547 2.058-.055 2.861 1.075c.782 1.157.782 2.522-.019 3.08zm7.304 8.325c-.701.774-2.196.566-3.29-.49c-1.119-1.032-1.43-2.496-.726-3.27c.71-.776 2.213-.558 3.315.49c1.11 1.03 1.45 2.505.701 3.27zm9.442 2.81c-.31 1.003-1.75 1.459-3.199 1.033c-1.448-.439-2.395-1.613-2.103-2.626c.301-1.01 1.747-1.484 3.207-1.028c1.446.436 2.396 1.602 2.095 2.622zm10.744 1.193c.036 1.055-1.193 1.93-2.715 1.95c-1.53.034-2.769-.82-2.786-1.86c0-1.065 1.202-1.932 2.733-1.958c1.522-.03 2.768.818 2.768 1.868zm10.555-.405c.182 1.03-.875 2.088-2.387 2.37c-1.485.271-2.861-.365-3.05-1.386c-.184-1.056.893-2.114 2.376-2.387c1.514-.263 2.868.356 3.061 1.403z" fill="currentColor"></path></svg> 112,792</a></div></div> <nav class="top-32 hidden lg:flex absolute bottom-0 left-0 w-full flex-col overflow-y-auto border-r px-4 pt-3 pb-16 text-[0.95rem] lg:w-[270px] 2xl:w-[300px]"> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Get started</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/index">🤗 Transformers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/quicktour">Quick tour </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 
first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/installation">Installation </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Tutorials</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pipeline_tutorial">Run inference with pipelines </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/autoclass_tutorial">Write portable code with AutoClass </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/preprocessing">Preprocess data </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/training">Fine-tune a pretrained model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/run_scripts">Train with a script </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/accelerate">Set up distributed training with 🤗 Accelerate </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/peft">Load and train adapters with 🤗 PEFT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/model_sharing">Share your model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/transformers_agents">Agents </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/llm_tutorial">Generation with LLMs </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Task Guides</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 
hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Natural Language Processing</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Computer Vision</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Generation</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Prompting</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Developer guides</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/fast_tokenizers">Use fast tokenizers from 🤗 Tokenizers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/multilingual">Run inference with multilingual models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/create_a_model">Use model-specific APIs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/custom_models">Share a custom model </a><a data-sveltekit-reload="" 
class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/chat_templating">Templates for chat models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/sagemaker">Run training on Amazon SageMaker </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/serialization">Export to ONNX </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tflite">Export to TFLite </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/torchscript">Export to TorchScript </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/benchmarks">Benchmarks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/notebooks">Notebooks with examples </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/community">Community resources </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/custom_tools">Custom Tools and Prompts </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/troubleshooting">Troubleshoot </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Performance and scalability</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/performance">Overview </a><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Efficient training techniques</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 
hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_gpu_one">Methods and tools for efficient training on a single GPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_gpu_many">Multiple GPUs and parallelism </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_cpu">Efficient training on CPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_cpu_many">Distributed CPU training </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_tpu">Training on TPUs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_tpu_tf">Training on TPU with TensorFlow </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_special">Training on Specialized Hardware </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_hardware">Custom hardware for training </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/hpo_train">Hyperparameter Search using Trainer API </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Optimizing inference</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_cpu">Inference on CPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_gpu_one">Inference on one GPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_gpu_many">Inference on many GPUs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" 
href="/docs/transformers/v4.34.0/en/perf_infer_special">Inference on Specialized Hardware </a> </div><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/big_models">Instantiating a big model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/debugging">Troubleshooting </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tf_xla">XLA Integration for TensorFlow Models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/perf_torch_compile">Optimize inference using `torch.compile()` </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Contribute</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/contributing">How to contribute to transformers? </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/add_new_model">How to add a model to 🤗 Transformers? </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/add_tensorflow_model">How to convert a 🤗 Transformers model to TensorFlow? </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/add_new_pipeline">How to add a pipeline to 🤗 Transformers? 
</a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/testing">Testing </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pr_checks">Checks on a Pull Request </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Conceptual guides</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/philosophy">Philosophy </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/glossary">Glossary </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/task_summary">What 🤗 Transformers can do </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tasks_explained">How 🤗 Transformers solve tasks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/model_summary">The Transformer model family </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tokenizer_summary">Summary of the tokenizers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/attention">Attention mechanisms </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pad_truncation">Padding and truncation </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/bertology">BERTology </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/perplexity">Perplexity of fixed-length models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pipeline_webserver">Pipelines for webserver inference </a><a 
data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/model_memory_anatomy">Model training anatomy </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>API</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Main Classes</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/agent">Agents and Tools </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/model_doc/auto">Auto Classes </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/callback">Callbacks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/configuration">Configuration </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/data_collator">Data Collator </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/keras_callbacks">Keras callbacks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/logging">Logging </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/model">Models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/text_generation">Text Generation </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/onnx">ONNX </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 
first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/optimizer_schedules">Optimization </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/output">Model outputs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/pipelines">Pipelines </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/processors">Processors </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/quantization">Quantization </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/tokenizer">Tokenizer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/trainer">Trainer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/deepspeed">DeepSpeed Integration </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/feature_extractor">Feature Extractor </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/image_processor">Image Processor </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Models</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Text models</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/albert">ALBERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px 
hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bart">BART </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/barthez">BARThez </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bartpho">BARTpho </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bert">BERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bert-generation">BertGeneration </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bert-japanese">BertJapanese </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bertweet">Bertweet </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/big_bird">BigBird </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bigbird_pegasus">BigBirdPegasus </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/biogpt">BioGpt </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/blenderbot">Blenderbot </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/blenderbot-small">Blenderbot Small </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bloom">BLOOM </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bort">BORT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/byt5">ByT5 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" 
href="/docs/transformers/v4.34.0/en/model_doc/camembert">CamemBERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/canine">CANINE </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/codegen">CodeGen </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/code_llama">CodeLlama </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/convbert">ConvBERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/cpm">CPM </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/cpmant">CPMANT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/ctrl">CTRL </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/deberta">DeBERTa </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/deberta-v2">DeBERTa-v2 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/dialogpt">DialoGPT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/distilbert">DistilBERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/dpr">DPR </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/electra">ELECTRA </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/encoder-decoder">Encoder Decoder Models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/ernie">ERNIE </a><a 
data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/ernie_m">ErnieM </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/esm">ESM </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/falcon">Falcon </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/flan-t5">FLAN-T5 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/flan-ul2">FLAN-UL2 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/flaubert">FlauBERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/fnet">FNet </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/fsmt">FSMT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/funnel">Funnel Transformer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/openai-gpt">GPT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gpt_neo">GPT Neo </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gpt_neox">GPT NeoX </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese">GPT NeoX Japanese </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gptj">GPT-J </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gpt2">GPT2 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 
last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode">GPTBigCode </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese">GPTSAN Japanese </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gpt-sw3">GPTSw3 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/herbert">HerBERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/ibert">I-BERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/jukebox">Jukebox </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/led">LED </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/llama">LLaMA </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/llama2">Llama2 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/longformer">Longformer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/longt5">LongT5 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/luke">LUKE </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/m2m_100">M2M100 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/marian">MarianMT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/markuplm">MarkupLM </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" 
href="/docs/transformers/v4.34.0/en/model_doc/mbart">MBart and MBart-50 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mega">MEGA </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/megatron-bert">MegatronBERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2">MegatronGPT2 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mistral">Mistral </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mluke">mLUKE </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mobilebert">MobileBERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mpnet">MPNet </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mpt">MPT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mra">MRA </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mt5">MT5 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mvp">MVP </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/nezha">NEZHA </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/nllb">NLLB </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/nllb-moe">NLLB-MoE </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/nystromformer">Nyströmformer </a><a data-sveltekit-reload="" 
class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/open-llama">Open-Llama </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/opt">OPT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/pegasus">Pegasus </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/pegasus_x">PEGASUS-X </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/persimmon">Persimmon </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/phobert">PhoBERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/plbart">PLBart </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/prophetnet">ProphetNet </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/qdqbert">QDQBert </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/rag">RAG </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/realm">REALM </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/reformer">Reformer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/rembert">RemBERT </a><a data-sveltekit-reload="" class="rounded-xl bg-gradient-to-br from-black to-gray-900 py-1 pr-2 pl-2 text-white first:mt-1 last:mb-4 dark:from-gray-800 dark:to-gray-900 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/retribert">RetriBERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/roberta">RoBERTa </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 
hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm">RoBERTa-PreLayerNorm </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/roc_bert">RoCBert </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/roformer">RoFormer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/rwkv">RWKV </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/splinter">Splinter </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/squeezebert">SqueezeBERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/switch_transformers">SwitchTransformers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/t5">T5 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/t5v1.1">T5v1.1 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/tapex">TAPEX </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/transfo-xl">Transformer XL </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/ul2">UL2 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/umt5">UMT5 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xmod">X-MOD </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xglm">XGLM </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 
ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm">XLM </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet">XLM-ProphetNet </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta">XLM-RoBERTa </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl">XLM-RoBERTa-XL </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm-v">XLM-V </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlnet">XLNet </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/yoso">YOSO </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Vision models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Reinforcement learning models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Time series models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 
after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Graph models</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Internal Helpers</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/modeling_utils">Custom Layers and Utilities </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/pipelines_utils">Utilities for pipelines </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/tokenization_utils">Utilities for Tokenizers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/trainer_utils">Utilities for Trainer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/generation_utils">Utilities for Generation </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/image_processing_utils">Utilities for Image Processors </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/audio_utils">Utilities for Audio processing </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/file_utils">General Utilities </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/time_series_utils">Utilities for Time Series </a> </div> </div></nav></div></div></div> <div class="z-1 min-w-0 flex-1"> <div class="px-6 pt-6 md:px-12 md:pt-16 md:pb-16"><div class="max-w-4xl mx-auto mb-10"><div class="relative overflow-hidden rounded-xl bg-gradient-to-br from-orange-300/10 py-5 px-4 ring-1 ring-orange-100/70 md:px-6 md:py-8"><img alt="Hugging Face's logo" class="absolute -right-6 -bottom-6 w-28 -rotate-45 md:hidden" src="/front/assets/huggingface_logo-noborder.svg"> <div class="mb-2 text-2xl font-bold dark:text-gray-200 md:mb-0">Join the Hugging Face community</div> <p class="mb-4 text-lg text-gray-400 dark:text-gray-300 md:mb-8">and get access to the augmented documentation 
experience </p> <div class="mb-8 hidden space-y-4 md:block xl:flex xl:space-y-0 xl:space-x-6"><div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-indigo-100 to-indigo-100/20 dark:to-indigo-100"><svg class="text-indigo-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Collaborate on models, datasets and Spaces </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-orange-100 to-orange-100/20 dark:to-orange-50"><svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" class="text-xl text-yellow-400" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M11 15H6l7-14v8h5l-7 14v-8z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Faster examples with accelerated inference </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-gray-500/10 to-gray-500/5"><svg class="text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M14.9804 3C14.9217 3.0002 14.8631 3.00555 14.8054 3.016C11.622 3.58252 8.76073 5.30669 6.77248 7.85653C4.78422 10.4064 3.80955 13.6016 4.03612 16.8271C4.26268 20.0525 5.67447 23.0801 7.99967 25.327C10.3249 27.5738 13.3991 28.8811 16.6304 28.997C16.7944 29.003 16.9584 28.997 17.1204 28.997C19.2193 28.9984 21.2877 28.4943 23.1507 27.5274C25.0137 26.5605 26.6164 25.1592 27.8234 23.442C27.9212 23.294 27.9783 23.1229 27.9889 22.9458C27.9995 22.7687 27.9633 22.592 27.884 22.4333C27.8046 22.2747 27.6848 22.1397 27.5367 22.0421C27.3887 21.9444 27.2175 21.8875 27.0404 21.877C25.0426 21.7017 23.112 21.0693 21.3976 20.0288C19.6832 18.9884 18.231 17.5676 17.1533 15.8764C16.0756 14.1852 15.4011 12.2688 15.1822 10.2754C14.9632 8.28193 15.2055 6.26484 15.8904 4.38C15.9486 4.22913 15.97 4.06652 15.9527 3.90572C15.9354 3.74492 15.8799 3.59059 15.7909 3.45557C15.7019 3.32055 15.5819 3.20877 15.4409 3.12952C15.2999 3.05028 15.142 3.00587 14.9804 3Z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Switch between documentation themes </div></div></div> <div class="flex items-center space-x-2.5"><a href="/join"><button class="rounded-lg bg-white 
# RetriBERT

This model is in maintenance mode only, so we won't accept any new PRs changing its code.

If you run into any issues running this model, please reinstall the last version that supported it: v4.30.0. You can do so by running the following command: `pip install -U transformers==4.30.0`.

## Overview

The RetriBERT model was proposed in the blog post [Explain Anything Like I'm Five: A Model for Open Domain Long Form Question Answering](https://yjernite.github.io/lfqa.html). RetriBERT is a small model that uses either a single BERT encoder or a pair of BERT encoders, together with a lower-dimension projection, for dense semantic indexing of text.

This model was contributed by [yjernite](https://huggingface.co/yjernite). Code to train and use the model can be found [here](https://github.com/huggingface/transformers/tree/main/examples/research-projects/distillation).
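To give a sense of how the checkpoint is typically used for dense retrieval, the sketch below embeds a question and two passages into the shared projection space and ranks the passages by inner product. This is a minimal sketch, not an official recipe: it assumes a `transformers` version that still ships RetriBERT (v4.30.0, per the note above) and relies on the `embed_questions` / `embed_answers` helpers from the original research implementation.

```python
import torch
from transformers import RetriBertModel, RetriBertTokenizer

tokenizer = RetriBertTokenizer.from_pretrained("yjernite/retribert-base-uncased")
model = RetriBertModel.from_pretrained("yjernite/retribert-base-uncased")

question = "Why is the sky blue?"
passages = [
    "Rayleigh scattering causes shorter wavelengths of sunlight to scatter more strongly.",
    "The Great Wall of China is visible from low Earth orbit under good conditions.",
]

q_inputs = tokenizer(question, return_tensors="pt")
p_inputs = tokenizer(passages, padding=True, return_tensors="pt")

with torch.no_grad():
    # Both helpers project the pooled encoder output down to projection_dim (128 by default).
    q_emb = model.embed_questions(q_inputs["input_ids"], q_inputs["attention_mask"])  # shape (1, 128)
    p_emb = model.embed_answers(p_inputs["input_ids"], p_inputs["attention_mask"])    # shape (2, 128)

# Rank passages by inner product with the question embedding.
scores = q_emb @ p_emb.T
best_passage = passages[scores.argmax(dim=-1).item()]
```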
## RetriBertConfig

### class transformers.RetriBertConfig

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/deprecated/retribert/configuration_retribert.py#L31)

( vocab_size = 30522, hidden_size = 768, num_hidden_layers = 8, num_attention_heads = 12, intermediate_size = 3072, hidden_act = 'gelu', hidden_dropout_prob = 0.1, attention_probs_dropout_prob = 0.1, max_position_embeddings = 512, type_vocab_size = 2, initializer_range = 0.02, layer_norm_eps = 1e-12, share_encoders = True, projection_dim = 128, pad_token_id = 0, **kwargs )

**Parameters**

- **vocab_size** (`int`, *optional*, defaults to 30522) — Vocabulary size of the RetriBERT model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [RetriBertModel](/docs/transformers/v4.34.0/en/model_doc/retribert#transformers.RetriBertModel).
- **hidden_size** (`int`, *optional*, defaults to 768) — Dimensionality of the encoder layers and the pooler layer.
- **num_hidden_layers** (`int`, *optional*, defaults to 8) — Number of hidden layers in the Transformer encoder.
- **num_attention_heads** (`int`, *optional*, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder.
- **intermediate_size** (`int`, *optional*, defaults to 3072) — Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
- **hidden_act** (`str` or `function`, *optional*, defaults to `"gelu"`) — The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"silu"` and `"gelu_new"` are supported.
- **hidden_dropout_prob** (`float`, *optional*, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
- **attention_probs_dropout_prob** (`float`, *optional*, defaults to 0.1) — The dropout ratio for the attention probabilities.
- **max_position_embeddings** (`int`, *optional*, defaults to 512) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512, 1024 or 2048).
- **type_vocab_size** (`int`, *optional*, defaults to 2) — The vocabulary size of the *token_type_ids* passed into [BertModel](/docs/transformers/v4.34.0/en/model_doc/bert#transformers.BertModel).
- **initializer_range** (`float`, *optional*, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- **layer_norm_eps** (`float`, *optional*, defaults to 1e-12) — The epsilon used by the layer normalization layers.
- **share_encoders** (`bool`, *optional*, defaults to `True`) — Whether or not to use the same BERT-type encoder for the queries and the documents.
- **projection_dim** (`int`, *optional*, defaults to 128) — Final dimension of the query and document representations after projection.

This is the configuration class to store the configuration of a [RetriBertModel](/docs/transformers/v4.34.0/en/model_doc/retribert#transformers.RetriBertModel). It is used to instantiate a RetriBertModel model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the RetriBERT [yjernite/retribert-base-uncased](https://huggingface.co/yjernite/retribert-base-uncased) architecture.

Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information.
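As a quick illustration of the configuration-to-model relationship described above, the sketch below instantiates a model from the default configuration (a shared encoder with a 128-dimensional projection, matching the defaults listed in the parameter table). Like the earlier example, it assumes a `transformers` version that still includes RetriBERT.

```python
from transformers import RetriBertConfig, RetriBertModel

# Initializing a configuration with the defaults listed above
# (similar to the yjernite/retribert-base-uncased architecture).
configuration = RetriBertConfig()

# Initializing a model (with random weights) from that configuration.
model = RetriBertModel(configuration)

# Accessing the model configuration.
configuration = model.config
```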
## RetriBertTokenizer

### class transformers.RetriBertTokenizer

( vocab_file, do_lower_case = True, do_basic_tokenize = True, never_split = None, unk_token = '[UNK]', sep_token = '[SEP]', pad_token = '[PAD]', cls_token = '[CLS]', mask_token = '[MASK]', tokenize_chinese_chars = True, strip_accents = None, **kwargs )

Parameters:

- **vocab_file** (`str`) — File containing the vocabulary.
- **do_lower_case** (`bool`, *optional*, defaults to `True`) — Whether or not to lowercase the input when tokenizing.
- **do_basic_tokenize** (`bool`, *optional*, defaults to `True`) — Whether or not to do basic tokenization before WordPiece.
- **never_split** (`Iterable`, *optional*) — Collection of tokens which will never be split during tokenization. Only has an effect when `do_basic_tokenize=True`.
- **unk_token** (`str`, *optional*, defaults to `"[UNK]"`) — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
- **sep_token** (`str`, *optional*, defaults to `"[SEP]"`) — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.
- **pad_token** (`str`, *optional*, defaults to `"[PAD]"`) — The token used for padding, for example when batching sequences of different lengths.
- **cls_token** (`str`, *optional*, defaults to `"[CLS]"`) — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.
- **mask_token** (`str`, *optional*, defaults to `"[MASK]"`) — The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.
- **tokenize_chinese_chars** (`bool`, *optional*, defaults to `True`) — Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see this [issue](https://github.com/huggingface/transformers/issues/328)).
- **strip_accents** (`bool`, *optional*) — Whether or not to strip all accents. If this option is not specified, then it will be determined by the value for `lowercase` (as in the original BERT).

Constructs a RetriBERT tokenizer.

[RetriBertTokenizer](/docs/transformers/v4.34.0/en/model_doc/retribert#transformers.RetriBertTokenizer) is identical to [BertTokenizer](/docs/transformers/v4.34.0/en/model_doc/bert#transformers.BertTokenizer) and runs end-to-end tokenization: punctuation splitting and wordpiece.

This tokenizer inherits from [PreTrainedTokenizer](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer) which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
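Since the tokenizer is BERT-based, it can be used like any other slow tokenizer in the library. A minimal sketch, assuming the [yjernite/retribert-base-uncased](https://huggingface.co/yjernite/retribert-base-uncased) checkpoint referenced above ships the tokenizer files:

```python
from transformers import RetriBertTokenizer

# Load the vocabulary from the reference RetriBERT checkpoint
tokenizer = RetriBertTokenizer.from_pretrained("yjernite/retribert-base-uncased")

# End-to-end tokenization: punctuation splitting + WordPiece, with special tokens added
encoding = tokenizer("How many people live in Paris?")
print(encoding["input_ids"])
print(tokenizer.convert_ids_to_tokens(encoding["input_ids"]))
```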
#### build_inputs_with_special_tokens

( token_ids_0: List[int], token_ids_1: Optional[List[int]] = None ) → `List[int]`

Parameters:

- **token_ids_0** (`List[int]`) — List of IDs to which the special tokens will be added.
- **token_ids_1** (`List[int]`, *optional*) — Optional second list of IDs for sequence pairs.

Returns: `List[int]` — List of [input IDs](../glossary#input-ids) with the appropriate special tokens.

Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. A BERT sequence has the following format:

- single sequence: `[CLS] X [SEP]`
- pair of sequences: `[CLS] A [SEP] B [SEP]`
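For illustration, a hedged sketch of how these formats are produced from already-tokenized ids (the example sentences are arbitrary):

```python
from transformers import RetriBertTokenizer

tokenizer = RetriBertTokenizer.from_pretrained("yjernite/retribert-base-uncased")

ids_a = tokenizer.encode("What is RetriBERT?", add_special_tokens=False)
ids_b = tokenizer.encode("A small retrieval model.", add_special_tokens=False)

# Single sequence: [CLS] A [SEP]
single = tokenizer.build_inputs_with_special_tokens(ids_a)

# Pair of sequences: [CLS] A [SEP] B [SEP]
pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)

print(tokenizer.convert_ids_to_tokens(single))
print(tokenizer.convert_ids_to_tokens(pair))
```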
#### convert_tokens_to_string

( tokens )

Converts a sequence of tokens (string) into a single string.

#### create_token_type_ids_from_sequences

( token_ids_0: List[int], token_ids_1: Optional[List[int]] = None ) → `List[int]`

Parameters:

- **token_ids_0** (`List[int]`) — List of IDs.
- **token_ids_1** (`List[int]`, *optional*) — Optional second list of IDs for sequence pairs.

Returns: `List[int]` — List of [token type IDs](../glossary#token-type-ids) according to the given sequence(s).

Create a mask from the two sequences passed to be used in a sequence-pair classification task. A BERT sequence pair mask has the following format:

```
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence    | second sequence |
```

If `token_ids_1` is `None`, this method only returns the first portion of the mask (0s).
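A hedged sketch of what the token type mask looks like for a pair of sequences, using the same assumed checkpoint as above:

```python
from transformers import RetriBertTokenizer

tokenizer = RetriBertTokenizer.from_pretrained("yjernite/retribert-base-uncased")

ids_a = tokenizer.encode("first sequence", add_special_tokens=False)
ids_b = tokenizer.encode("second sequence", add_special_tokens=False)

# 0s cover [CLS] + first sequence + [SEP], 1s cover second sequence + [SEP]
token_type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)
print(token_type_ids)
```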
#### get_special_tokens_mask

( token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False ) → `List[int]`

Parameters:

- **token_ids_0** (`List[int]`) — List of IDs.
- **token_ids_1** (`List[int]`, *optional*) — Optional second list of IDs for sequence pairs.
- **already_has_special_tokens** (`bool`, *optional*, defaults to `False`) — Whether or not the token list is already formatted with special tokens for the model.

Returns: `List[int]` — A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.

Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer `prepare_for_model` method.
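A hedged sketch showing which positions the mask marks as special tokens; the `already_has_special_tokens=True` form is used when the ids already include `[CLS]`/`[SEP]`:

```python
from transformers import RetriBertTokenizer

tokenizer = RetriBertTokenizer.from_pretrained("yjernite/retribert-base-uncased")

ids = tokenizer.encode("mask the special tokens", add_special_tokens=True)

# 1 marks the [CLS]/[SEP] positions, 0 marks ordinary sequence tokens
mask = tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True)
print(mask)  # e.g. [1, 0, 0, 0, 0, 1]
```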
## RetriBertTokenizerFast

### class transformers.RetriBertTokenizerFast

( vocab_file = None, tokenizer_file = None, do_lower_case = True, unk_token = '[UNK]', sep_token = '[SEP]', pad_token = '[PAD]', cls_token = '[CLS]', mask_token = '[MASK]', tokenize_chinese_chars = True, strip_accents = None, **kwargs )

Parameters:

- **vocab_file** (`str`) — File containing the vocabulary.
- **do_lower_case** (`bool`, *optional*, defaults to `True`) — Whether or not to lowercase the input when tokenizing.
- **unk_token** (`str`, *optional*, defaults to `"[UNK]"`) — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
- **sep_token** (`str`, *optional*, defaults to `"[SEP]"`) — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering.
It is also used as the last token of a sequence built with special tokens.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RetriBertTokenizerFast.pad_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RetriBertTokenizerFast.pad_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>pad_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"[PAD]"</code>) — The token used for padding, for example when batching sequences of different lengths.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RetriBertTokenizerFast.cls_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RetriBertTokenizerFast.cls_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>cls_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"[CLS]"</code>) — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). 
It is the first token of the sequence when built with special tokens.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RetriBertTokenizerFast.mask_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RetriBertTokenizerFast.mask_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>mask_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"[MASK]"</code>) — The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RetriBertTokenizerFast.clean_text" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RetriBertTokenizerFast.clean_text"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>clean_text</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether or not to clean the text before tokenization by removing any control characters and replacing all whitespaces by the classic one.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RetriBertTokenizerFast.tokenize_chinese_chars" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RetriBertTokenizerFast.tokenize_chinese_chars"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 
256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>tokenize_chinese_chars</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see <a href="https://github.com/huggingface/transformers/issues/328" rel="nofollow">this issue</a>).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RetriBertTokenizerFast.strip_accents" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RetriBertTokenizerFast.strip_accents"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>strip_accents</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to strip all accents. 
If this option is not specified, then it will be determined by the value for <code>lowercase</code> (as in the original BERT).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RetriBertTokenizerFast.wordpieces_prefix" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RetriBertTokenizerFast.wordpieces_prefix"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>wordpieces_prefix</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"##"</code>) — The prefix for subwords.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1g6s8hh">Construct a “fast” RetriBERT tokenizer (backed by HuggingFace’s <em>tokenizers</em> library).</p> <p data-svelte-h="svelte-1m3vpe8"><a href="/docs/transformers/v4.34.0/en/model_doc/retribert#transformers.RetriBertTokenizerFast">RetriBertTokenizerFast</a> is identical to <a href="/docs/transformers/v4.34.0/en/model_doc/bert#transformers.BertTokenizerFast">BertTokenizerFast</a> and runs end-to-end tokenization: punctuation splitting and wordpiece.</p> <p data-svelte-h="svelte-ttxvs6">This tokenizer inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast">PreTrainedTokenizerFast</a> which contains most of the main methods. 
Users should refer to this superclass for more information regarding those methods.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.RetriBertTokenizerFast.build_inputs_with_special_tokens"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>build_inputs_with_special_tokens</span></h4> <a id="transformers.RetriBertTokenizerFast.build_inputs_with_special_tokens" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.RetriBertTokenizerFast.build_inputs_with_special_tokens"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/deprecated/retribert/tokenization_retribert_fast.py#L148" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_0<span class="opacity-60"></span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black 
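Since the fast tokenizer behaves exactly like `BertTokenizerFast`, basic usage follows the usual `from_pretrained` pattern. This is a minimal sketch; the checkpoint name `yjernite/retribert-base-uncased` is an assumption for the reference RetriBERT checkpoint and should be replaced with whichever checkpoint you actually use.

```python
from transformers import RetriBertTokenizerFast

# Assumed checkpoint name; substitute the RetriBERT checkpoint you actually use.
tokenizer = RetriBertTokenizerFast.from_pretrained("yjernite/retribert-base-uncased")

# Lowercasing, punctuation splitting and WordPiece, as with BertTokenizerFast.
encoding = tokenizer("How do solar panels work?", return_tensors="pt")
print(encoding["input_ids"].shape, encoding["attention_mask"].shape)
```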
#### build_inputs_with_special_tokens

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/deprecated/retribert/tokenization_retribert_fast.py#L148)

`( token_ids_0, token_ids_1 = None ) → List[int]`

Parameters:

- **token_ids_0** (`List[int]`) — List of IDs to which the special tokens will be added.
- **token_ids_1** (`List[int]`, *optional*) — Optional second list of IDs for sequence pairs.

Returns: `List[int]` — List of [input IDs](../glossary#input-ids) with the appropriate special tokens.

Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. A BERT sequence has the following format:

- single sequence: `[CLS] X [SEP]`
- pair of sequences: `[CLS] A [SEP] B [SEP]`
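The layout above can be checked directly on a tokenizer instance. A short continuation of the earlier sketch, with hypothetical token lists chosen only for illustration:

```python
# Continuing with the `tokenizer` from the snippet above.
ids_a = tokenizer.convert_tokens_to_ids(["hello", "world"])
ids_b = tokenizer.convert_tokens_to_ids(["again"])

single = tokenizer.build_inputs_with_special_tokens(ids_a)
pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)

print(tokenizer.convert_ids_to_tokens(single))  # ['[CLS]', 'hello', 'world', '[SEP]']
print(tokenizer.convert_ids_to_tokens(pair))    # ['[CLS]', 'hello', 'world', '[SEP]', 'again', '[SEP]']
```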
data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_0<span class="opacity-60">: typing.List[int]</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_1<span class="opacity-60">: typing.Optional[typing.List[int]] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><code>List[int]</code></span></span></p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RetriBertTokenizerFast.create_token_type_ids_from_sequences.token_ids_0" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RetriBertTokenizerFast.create_token_type_ids_from_sequences.token_ids_0"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_ids_0</strong> (<code>List[int]</code>) — List of IDs.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RetriBertTokenizerFast.create_token_type_ids_from_sequences.token_ids_1" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RetriBertTokenizerFast.create_token_type_ids_from_sequences.token_ids_1"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 
0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_ids_1</strong> (<code>List[int]</code>, <em>optional</em>) — Optional second list of IDs for sequence pairs.</span></span> </li></ul> <div id="transformers.RetriBertTokenizerFast.create_token_type_ids_from_sequences.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><code>List[int]</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>List of <a href="../glossary#token-type-ids">token type IDs</a> according to the given sequence(s).</p> </p> </div></div> <p data-svelte-h="svelte-gn6wi7">Create a mask from the two sequences passed to be used in a sequence-pair classification task. A BERT sequence</p> <div class="relative group rounded-md"><a id="transformers.RetriBertTokenizerFast.create_token_type_ids_from_sequences.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RetriBertTokenizerFast.create_token_type_ids_from_sequences.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-qjgeij">pair mask has the following format:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class="">0<span class="hljs-number"> 0 </span>0<span class="hljs-number"> 0 </span>0<span class="hljs-number"> 0 </span>0<span 
class="hljs-number"> 0 </span>0<span class="hljs-number"> 0 </span>0<span class="hljs-number"> 1 </span>1<span class="hljs-number"> 1 </span>1<span class="hljs-number"> 1 </span>1<span class="hljs-number"> 1 </span>1 1 | first sequence | second sequence |</pre></div></div> <p data-svelte-h="svelte-owoxgn">If <code>token_ids_1</code> is <code>None</code>, this method only returns the first portion of the mask (0s).</p></div></div> <h2 class="relative group"><a id="transformers.RetriBertModel" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RetriBertModel"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-160emrj">RetriBertModel</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.RetriBertModel"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">RetriBertModel</span></span></h3> <a id="transformers.RetriBertModel" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.RetriBertModel"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 
## RetriBertModel

### class transformers.RetriBertModel

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/deprecated/retribert/modeling_retribert.py#L88)

`( config: RetriBertConfig )`

Parameters:

- **config** ([RetriBertConfig](https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/retribert#transformers.RetriBertConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](https://huggingface.co/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

Bert-based model to embed queries or documents for document retrieval.

This model inherits from [PreTrainedModel](https://huggingface.co/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
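Instantiating the model with pretrained weights follows the standard `from_pretrained` pattern. As above, the checkpoint name is an assumption, not part of this documentation:

```python
from transformers import RetriBertModel

# Assumed checkpoint name for the pretrained RetriBERT weights.
model = RetriBertModel.from_pretrained("yjernite/retribert-base-uncased")
model.eval()
```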
<span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids_query<span class="opacity-60">: LongTensor</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask_query<span class="opacity-60">: typing.Optional[torch.FloatTensor]</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_ids_doc<span class="opacity-60">: LongTensor</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask_doc<span class="opacity-60">: typing.Optional[torch.FloatTensor]</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">checkpoint_batch_size<span class="opacity-60">: int = -1</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span>`torch.FloatTensor“</span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 5 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RetriBertModel.forward.input_ids_query" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RetriBertModel.forward.input_ids_query"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_ids_query</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>) — Indices of input sequence tokens in the vocabulary for the queries in a batch.<p></p> <p>Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. 
See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p> <p><a href="../glossary#input-ids">What are input IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RetriBertModel.forward.attention_mask_query" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RetriBertModel.forward.attention_mask_query"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask_query</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RetriBertModel.forward.input_ids_doc" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RetriBertModel.forward.input_ids_doc"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_ids_doc</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>) — Indices of input sequence tokens in the vocabulary for the documents in a batch.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a 
id="transformers.RetriBertModel.forward.attention_mask_doc" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RetriBertModel.forward.attention_mask_doc"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask_doc</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on documents padding token indices.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.RetriBertModel.forward.checkpoint_batch_size" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.RetriBertModel.forward.checkpoint_batch_size"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>checkpoint_batch_size</strong> (<code>int</code>, <em>optional</em>, defaults to <code>-1</code>) — If greater than 0, uses gradient checkpointing to only compute sequence representation on <code>checkpoint_batch_size</code> examples at a time on the GPU. 
All query representations are still compared to all document representations in the batch.</span></span> </li></ul> <div id="transformers.RetriBertModel.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p>`torch.FloatTensor“</p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>The bidirectional cross-entropy loss obtained while trying to match each query to its corresponding document and each document to its corresponding query in the batch</p> </p> </div></div></div></div> <p></p> <div id="svelte-announcer" aria-live="assertive" aria-atomic="true" style="position: absolute; left: 0px; top: 0px; clip: rect(0px, 0px, 0px, 0px); clip-path: inset(50%); overflow: hidden; white-space: nowrap; width: 1px; height: 1px;"></div></div> <div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/rembert" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>RemBERT</a> <a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/roberta" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">RoBERTa<span class="ml-2 translate-y-px">→</span></a></div></div></div> <div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" data-props="{&quot;chapter&quot;:{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;retribert&quot;,&quot;url&quot;:&quot;#retribert&quot;,&quot;sections&quot;:[{&quot;title&quot;:&quot;Overview&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;overview&quot;,&quot;url&quot;:&quot;#overview&quot;},{&quot;title&quot;:&quot;RetriBertConfig&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.RetriBertConfig&quot;,&quot;url&quot;:&quot;#transformers.RetriBertConfig&quot;},{&quot;title&quot;:&quot;RetriBertTokenizer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.RetriBertTokenizer&quot;,&quot;url&quot;:&quot;#transformers.RetriBertTokenizer&quot;},{&quot;title&quot;:&quot;RetriBertTokenizerFast&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.RetriBertTokenizerFast&quot;,&quot;url&quot;:&quot;#transformers.RetriBertTokenizerFast&quot;},{&quot;title&quot;:&quot;RetriBertModel&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.RetriBertModel&quot;,&quot;url&quot;:&quot;#transformers.RetriBertModel&quot;}]}}" data-target="SubSideMenu"><nav class="hidden h-screen w-[270px] flex-none flex-col space-y-3 overflow-y-auto break-words border-l pt-24 pl-6 pr-10 pb-16 text-sm lg:flex 2xl:w-[305px]"><a href="#retribert" class=" text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-retribert"><wbr>RetriBERT</a> <a href="#overview" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-overview"><wbr>Overview</a> <a href="#transformers.RetriBertConfig" class="pl-4 text-gray-700 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.RetriBertConfig"><wbr>Retri<wbr>Bert<wbr>Config</a> <a href="#transformers.RetriBertTokenizer" class="pl-4 
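Putting the pieces together, a training-style forward pass compares every query against every document in the batch and returns the in-batch bidirectional loss. A minimal sketch, reusing the `tokenizer` and `model` objects from the snippets above (texts are illustrative only):

```python
queries = ["what is a retriever model?", "how are documents scored?"]
docs = [
    "A retriever embeds queries and documents in a shared space.",
    "Documents are scored by the similarity of their embeddings to the query embedding.",
]

q = tokenizer(queries, padding=True, return_tensors="pt")
d = tokenizer(docs, padding=True, return_tensors="pt")

loss = model(
    input_ids_query=q["input_ids"],
    attention_mask_query=q["attention_mask"],
    input_ids_doc=d["input_ids"],
    attention_mask_doc=d["attention_mask"],
    checkpoint_batch_size=-1,  # no gradient checkpointing
)
print(loss)  # scalar bidirectional cross-entropy loss over the in-batch query/document pairs
```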
text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.RetriBertTokenizer"><wbr>Retri<wbr>Bert<wbr>Tokenizer</a> <a href="#transformers.RetriBertTokenizerFast" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.RetriBertTokenizerFast"><wbr>Retri<wbr>Bert<wbr>Tokenizer<wbr>Fast</a> <a href="#transformers.RetriBertModel" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.RetriBertModel"><wbr>Retri<wbr>Bert<wbr>Model</a> </nav></div></div></div> <div id="doc-footer"></div></main> </div> <script> import("/front/build/kube-5e23f38/index.js"); window.moonSha = "kube-5e23f38/"; window.hubConfig = JSON.parse(`{"features":{"signupDisabled":false},"sshGitUrl":"git@hf.co","moonHttpUrl":"https://huggingface.co","captchaApiKey":"bd5f2066-93dc-4bdd-a64b-a24646ca3859","stripePublicKey":"pk_live_x2tdjFXBCvXo2FFmMybezpeM00J6gPCAAc","environment":"production","userAgent":"HuggingFace (production)"}`); </script> <!-- Stripe --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://js.stripe.com/v3/"; script.async = true; document.head.appendChild(script); } </script> <!-- Google analytics v4 --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL"; script.async = true; document.head.appendChild(script); window.dataLayer = window.dataLayer || []; function gtag() { if (window.dataLayer !== undefined) { window.dataLayer.push(arguments); } } gtag("js", new Date()); gtag("config", "G-8Q63TH4CSL", { page_path: "/docs/transformers/v4.34.0/en/model_doc/retribert" }); /// ^ See https://developers.google.com/analytics/devguides/collection/gtagjs/pages gtag("consent", "default", { ad_storage: "denied", analytics_storage: "denied" }); /// ^ See https://developers.google.com/tag-platform/gtagjs/reference#consent /// TODO: ask the user for their consent and update this with gtag('consent', 'update') } </script> <!-- Google Analytics v3 (deprecated) --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { (function (i, s, o, g, r, a, m) { i["GoogleAnalyticsObject"] = r; (i[r] = i[r] || function () { (i[r].q = i[r].q || []).push(arguments); }), (i[r].l = 1 * new Date()); (a = s.createElement(o)), (m = s.getElementsByTagName(o)[0]); a.async = 1; a.src = g; m.parentNode.insertBefore(a, m); })(window, document, "script", "https://www.google-analytics.com/analytics.js", "ganalytics"); ganalytics("create", "UA-83738774-2", "auto"); ganalytics("send", "pageview", "/docs/transformers/v4.34.0/en/model_doc/retribert"); } </script> <iframe name="__privateStripeMetricsController7890" frameborder="0" allowtransparency="true" scrolling="no" role="presentation" allow="payment *" src="https://js.stripe.com/v3/m-outer-27c67c0d52761104439bb051c7856ab1.html#url=https%3A%2F%2Fhuggingface.co%2Fdocs%2Ftransformers%2Fv4.34.0%2Fen%2Fmodel_doc%2Fretribert%23transformers.RetriBertConfig&amp;title=RetriBERT&amp;referrer=&amp;muid=38397bf3-d1df-433f-a1ab-3a999964eeba83e258&amp;sid=7a2cecc6-6b9a-4e4a-88b4-4bd8a189a43fe6315f&amp;version=6&amp;preview=false" aria-hidden="true" tabindex="-1" style="border: none !important; margin: 0px !important; padding: 0px !important; width: 1px 
!important; min-width: 100% !important; overflow: hidden !important; display: block !important; visibility: hidden !important; position: fixed !important; height: 1px !important; pointer-events: none !important; user-select: none !important;"></iframe></body></html>
2023-10-05T13:33:50.328Z
SEW-D
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/sew-d#transformers.SEWDConfig
# SEW-D ## Overview SEW-D (Squeezed and Efficient Wav2Vec with Disentangled attention) was proposed in [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. The abstract from the paper is the following: _This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes._ Tips: - SEW-D is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. - SEWDForCTC is fine-tuned using connectionist temporal classification (CTC) so the model output has to be decoded using [Wav2Vec2CTCTokenizer](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2CTCTokenizer). This model was contributed by [anton-l](https://huggingface.co/anton-l). ## Documentation resources - [Audio classification task guide](../tasks/audio_classification) - [Automatic speech recognition task guide](../tasks/asr) ## SEWDConfig ### class transformers.SEWDConfig [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/sew_d/configuration_sew_d.py#L32) ( vocab\_size = 32 hidden\_size = 768 num\_hidden\_layers = 12 num\_attention\_heads = 12 intermediate\_size = 3072 squeeze\_factor = 2 max\_position\_embeddings = 512 position\_buckets = 256 share\_att\_key = True relative\_attention = True pos\_att\_type = ('p2c', 'c2p') norm\_rel\_ebd = 'layer\_norm' hidden\_act = 'gelu\_python' hidden\_dropout = 0.1 activation\_dropout = 0.1 attention\_dropout = 0.1 feat\_proj\_dropout = 0.0 final\_dropout = 0.1 initializer\_range = 0.02 layer\_norm\_eps = 1e-07 feature\_layer\_norm\_eps = 1e-05 feat\_extract\_norm = 'group' feat\_extract\_activation = 'gelu' conv\_dim = (64, 128, 128, 128, 128, 256, 256, 256, 256, 512, 512, 512, 512) conv\_stride = (5, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1) conv\_kernel = (10, 3, 1, 3, 1, 3, 1, 3, 1, 2, 1, 2, 1) conv\_bias = False num\_conv\_pos\_embeddings = 128 num\_conv\_pos\_embedding\_groups = 16 apply\_spec\_augment = True mask\_time\_prob = 0.05 mask\_time\_length = 10 mask\_time\_min\_masks = 2 mask\_feature\_prob = 0.0 mask\_feature\_length = 10 mask\_feature\_min\_masks = 0 ctc\_loss\_reduction = 'mean' ctc\_zero\_infinity = False use\_weighted\_layer\_sum = False classifier\_proj\_size = 256 pad\_token\_id = 0 bos\_token\_id = 1 eos\_token\_id = 2 \*\*kwargs ) Parameters - **vocab\_size** (`int`, _optional_, defaults to 32) — Vocabulary size of the SEW-D model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling `SEWD`. - **hidden\_size** (`int`, _optional_, defaults to 768) — Dimensionality of the encoder layers and the pooler layer. 
- **num\_hidden\_layers** (`int`, _optional_, defaults to 12) — Number of hidden layers in the Transformer encoder. - **num\_attention\_heads** (`int`, _optional_, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder. - **intermediate\_size** (`int`, _optional_, defaults to 3072) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder. - **squeeze\_factor** (`int`, _optional_, defaults to 2) — Sequence length downsampling factor after the encoder and upsampling factor after the transformer. - **max\_position\_embeddings** (`int`, _optional_, defaults to 512) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). - **position\_buckets** (`int`, _optional_, defaults to 256) — The maximum size of relative position embeddings. - **share\_att\_key** (`bool`, _optional_, defaults to `True`) — Whether to share the attention key with c2p and p2c. - **relative\_attention** (`bool`, _optional_, defaults to `True`) — Whether to use relative position encoding. - **pos\_att\_type** (`Tuple[str]`, _optional_, defaults to `("p2c", "c2p")`) — The type of relative position attention; it can be any combination of `"p2c"` and `"c2p"`, e.g. `("p2c",)` or `("p2c", "c2p")`. - **norm\_rel\_ebd** (`str`, _optional_, defaults to `"layer_norm"`) — Whether to use layer norm in the relative embedding (`"layer_norm"` if yes). - **hidden\_act** (`str` or `function`, _optional_, defaults to `"gelu_python"`) — The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"`, `"gelu_python"` and `"gelu_new"` are supported. - **hidden\_dropout** (`float`, _optional_, defaults to 0.1) — Deprecated. Not used by the model and will be removed in a future version. - **activation\_dropout** (`float`, _optional_, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. - **attention\_dropout** (`float`, _optional_, defaults to 0.1) — The dropout ratio for the attention probabilities. - **final\_dropout** (`float`, _optional_, defaults to 0.1) — The dropout probability for the final projection layer of [SEWDForCTC](/docs/transformers/v4.34.0/en/model_doc/sew-d#transformers.SEWDForCTC). - **initializer\_range** (`float`, _optional_, defaults to 0.02) — The standard deviation of the truncated\_normal\_initializer for initializing all weight matrices. - **layer\_norm\_eps** (`float`, _optional_, defaults to 1e-7) — The epsilon used by the layer normalization layers in the transformer encoder. - **feature\_layer\_norm\_eps** (`float`, _optional_, defaults to 1e-5) — The epsilon used by the layer normalization after the feature encoder. - **feat\_extract\_norm** (`str`, _optional_, defaults to `"group"`) — The norm to be applied to 1D convolutional layers in the feature encoder. One of `"group"` for group normalization of only the first 1D convolutional layer or `"layer"` for layer normalization of all 1D convolutional layers. - **feat\_proj\_dropout** (`float`, _optional_, defaults to 0.0) — The dropout probability for the output of the feature encoder. - **feat\_extract\_activation** (`str`, _optional_, defaults to `"gelu"`) — The non-linear activation function (function or string) in the 1D convolutional layers of the feature extractor. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. 
- **conv\_dim** (`Tuple[int]` or `List[int]`, _optional_, defaults to `(64, 128, 128, 128, 128, 256, 256, 256, 256, 512, 512, 512, 512)`) — A tuple of integers defining the number of input and output channels of each 1D convolutional layer in the feature encoder. The length of _conv\_dim_ defines the number of 1D convolutional layers. - **conv\_stride** (`Tuple[int]` or `List[int]`, _optional_, defaults to `(5, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1)`) — A tuple of integers defining the stride of each 1D convolutional layer in the feature encoder. The length of _conv\_stride_ defines the number of convolutional layers and has to match the length of _conv\_dim_. - **conv\_kernel** (`Tuple[int]` or `List[int]`, _optional_, defaults to `(10, 3, 1, 3, 1, 3, 1, 3, 1, 2, 1, 2, 1)`) — A tuple of integers defining the kernel size of each 1D convolutional layer in the feature encoder. The length of _conv\_kernel_ defines the number of convolutional layers and has to match the length of _conv\_dim_. - **conv\_bias** (`bool`, _optional_, defaults to `False`) — Whether the 1D convolutional layers have a bias. - **num\_conv\_pos\_embeddings** (`int`, _optional_, defaults to 128) — Number of convolutional positional embeddings. Defines the kernel size of the 1D convolutional positional embeddings layer. - **num\_conv\_pos\_embedding\_groups** (`int`, _optional_, defaults to 16) — Number of groups of the 1D convolutional positional embeddings layer. - **apply\_spec\_augment** (`bool`, _optional_, defaults to `True`) — Whether to apply _SpecAugment_ data augmentation to the outputs of the feature encoder. For reference see [SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition](https://arxiv.org/abs/1904.08779). - **mask\_time\_prob** (`float`, _optional_, defaults to 0.05) — Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking procedure generates `mask_time_prob * len(time_axis) / mask_time_length` independent masks over the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector span to be masked, _mask\_time\_prob_ should be `prob_vector_start * mask_time_length`. Note that overlap may decrease the actual percentage of masked vectors. This is only relevant if `apply_spec_augment` is `True`. - **mask\_time\_length** (`int`, _optional_, defaults to 10) — Length of vector span along the time axis. - **mask\_time\_min\_masks** (`int`, _optional_, defaults to 2) — The minimum number of masks of length `mask_time_length` generated along the time axis, each time step, irrespective of `mask_time_prob`. Only relevant if `mask_time_prob * len(time_axis) / mask_time_length < mask_time_min_masks`. - **mask\_feature\_prob** (`float`, _optional_, defaults to 0.0) — Percentage (between 0 and 1) of all feature vectors along the feature axis which will be masked. The masking procedure generates `mask_feature_prob * len(feature_axis) / mask_feature_length` independent masks over the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector span to be masked, _mask\_feature\_prob_ should be `prob_vector_start * mask_feature_length`. Note that overlap may decrease the actual percentage of masked vectors. This is only relevant if `apply_spec_augment` is `True`. - **mask\_feature\_length** (`int`, _optional_, defaults to 10) — Length of vector span along the feature axis. 
- **mask\_feature\_min\_masks** (`int`, _optional_, defaults to 0) — The minimum number of masks of length `mask_feature_length` generated along the feature axis, each time step, irrespective of `mask_feature_prob`. Only relevant if `mask_feature_prob * len(feature_axis) / mask_feature_length < mask_feature_min_masks`. - **diversity\_loss\_weight** (`float`, _optional_, defaults to 0.1) — The weight of the codebook diversity loss component. - **ctc\_loss\_reduction** (`str`, _optional_, defaults to `"mean"`) — Specifies the reduction to apply to the output of `torch.nn.CTCLoss`. Only relevant when training an instance of [SEWDForCTC](/docs/transformers/v4.34.0/en/model_doc/sew-d#transformers.SEWDForCTC). - **ctc\_zero\_infinity** (`bool`, _optional_, defaults to `False`) — Whether to zero infinite losses and the associated gradients of `torch.nn.CTCLoss`. Infinite losses mainly occur when the inputs are too short to be aligned to the targets. Only relevant when training an instance of [SEWDForCTC](/docs/transformers/v4.34.0/en/model_doc/sew-d#transformers.SEWDForCTC). - **use\_weighted\_layer\_sum** (`bool`, _optional_, defaults to `False`) — Whether to use a weighted average of layer outputs with learned weights. Only relevant when using an instance of [SEWDForSequenceClassification](/docs/transformers/v4.34.0/en/model_doc/sew-d#transformers.SEWDForSequenceClassification). - **classifier\_proj\_size** (`int`, _optional_, defaults to 256) — Dimensionality of the projection before token mean-pooling for classification. This is the configuration class to store the configuration of a [SEWDModel](/docs/transformers/v4.34.0/en/model_doc/sew-d#transformers.SEWDModel). It is used to instantiate a SEW-D model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the SEW-D [asapp/sew-d-tiny-100k](https://huggingface.co/asapp/sew-d-tiny-100k) architecture. Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information. Example: ``` >>> from transformers import SEWDConfig, SEWDModel >>> # Initializing a SEW-D asapp/sew-d-tiny-100k style configuration >>> configuration = SEWDConfig() >>> # Initializing a model (with random weights) from the asapp/sew-d-tiny-100k style configuration >>> model = SEWDModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ``` #### to\_dict Serializes this instance to a Python dictionary. ## SEWDModel ### class transformers.SEWDModel [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/sew_d/modeling_sew_d.py#L1381) ( config: SEWDConfig ) Parameters - **config** ([SEWDConfig](/docs/transformers/v4.34.0/en/model_doc/sew-d#transformers.SEWDConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. The bare SEW-D Model transformer outputting raw hidden-states without any specific head on top. SEW-D was proposed in [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. 
This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving etc.). This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/sew_d/modeling_sew_d.py#L1448) ( input\_values: typing.Optional\[torch.Tensor\] attention\_mask: typing.Optional\[torch.Tensor\] = None mask\_time\_indices: typing.Optional\[torch.FloatTensor\] = None output\_attentions: typing.Optional\[bool\] = None output\_hidden\_states: typing.Optional\[bool\] = None return\_dict: typing.Optional\[bool\] = None ) → [transformers.modeling\_outputs.BaseModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutput) or `tuple(torch.FloatTensor)` Parameters - **input\_values** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`) — Float values of input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file into an array of type `List[float]` or a `numpy.ndarray`, _e.g._ via the soundfile library (`pip install soundfile`). To prepare the array into `input_values`, the [AutoProcessor](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoProcessor) should be used for padding and conversion into a tensor of type `torch.FloatTensor`. See [Wav2Vec2Processor.**call**()](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor.__call__) for details. - **attention\_mask** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, _optional_) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - **output\_attentions** (`bool`, _optional_) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. - **output\_hidden\_states** (`bool`, _optional_) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. - **return\_dict** (`bool`, _optional_) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple. A [transformers.modeling\_outputs.BaseModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([SEWDConfig](/docs/transformers/v4.34.0/en/model_doc/sew-d#transformers.SEWDConfig)) and inputs. - **last\_hidden\_state** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`) — Sequence of hidden-states at the output of the last layer of the model. 
- **hidden\_states** (`tuple(torch.FloatTensor)`, _optional_, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. - **attentions** (`tuple(torch.FloatTensor)`, _optional_, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The [SEWDModel](/docs/transformers/v4.34.0/en/model_doc/sew-d#transformers.SEWDModel) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: ``` >>> from transformers import AutoProcessor, SEWDModel >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation") >>> dataset = dataset.sort("id") >>> sampling_rate = dataset.features["audio"].sampling_rate >>> processor = AutoProcessor.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h") >>> model = SEWDModel.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h") >>> >>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt") >>> with torch.no_grad(): ... outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state >>> list(last_hidden_states.shape) [1, 292, 384] ``` ## SEWDForCTC ### class transformers.SEWDForCTC [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/sew_d/modeling_sew_d.py#L1510) ( config target\_lang: typing.Optional\[str\] = None ) Parameters - **config** ([SEWDConfig](/docs/transformers/v4.34.0/en/model_doc/sew-d#transformers.SEWDConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. SEW-D Model with a `language modeling` head on top for Connectionist Temporal Classification (CTC). SEW-D was proposed in [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving etc.). This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
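When training a CTC model from scratch rather than loading a fine-tuned checkpoint, [SEWDForCTC](/docs/transformers/v4.34.0/en/model_doc/sew-d#transformers.SEWDForCTC) can also be instantiated directly from a customised [SEWDConfig](/docs/transformers/v4.34.0/en/model_doc/sew-d#transformers.SEWDConfig) using the CTC and SpecAugment options documented above. The snippet below is a minimal sketch; the values are illustrative rather than recommended settings, and `vocab_size` is assumed to match the vocabulary of the CTC tokenizer used for fine-tuning:

```
>>> from transformers import SEWDConfig, SEWDForCTC

>>> # Illustrative values only: vocab_size must match the CTC tokenizer's vocabulary.
>>> # With mask_time_prob=0.05 and mask_time_length=10, roughly
>>> # 0.05 * sequence_length / 10 time spans are masked during training.
>>> config = SEWDConfig(
...     vocab_size=32,
...     ctc_loss_reduction="mean",
...     ctc_zero_infinity=True,
...     mask_time_prob=0.05,
...     mask_time_length=10,
... )

>>> # Randomly initialised model; use SEWDForCTC.from_pretrained(...) to load trained weights instead
>>> model = SEWDForCTC(config)
```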
#### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/sew_d/modeling_sew_d.py#L1582) ( input\_values: typing.Optional\[torch.Tensor\] attention\_mask: typing.Optional\[torch.Tensor\] = None output\_attentions: typing.Optional\[bool\] = None output\_hidden\_states: typing.Optional\[bool\] = None return\_dict: typing.Optional\[bool\] = None labels: typing.Optional\[torch.Tensor\] = None ) → [transformers.modeling\_outputs.CausalLMOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.CausalLMOutput) or `tuple(torch.FloatTensor)` Parameters - **input\_values** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`) — Float values of input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file into an array of type `List[float]` or a `numpy.ndarray`, _e.g._ via the soundfile library (`pip install soundfile`). To prepare the array into `input_values`, the [AutoProcessor](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoProcessor) should be used for padding and conversion into a tensor of type `torch.FloatTensor`. See [Wav2Vec2Processor.**call**()](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor.__call__) for details. - **attention\_mask** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, _optional_) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - **output\_attentions** (`bool`, _optional_) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. - **output\_hidden\_states** (`bool`, _optional_) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. - **return\_dict** (`bool`, _optional_) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple. - **labels** (`torch.LongTensor` of shape `(batch_size, target_length)`, _optional_) — Labels for connectionist temporal classification. Note that `target_length` has to be smaller or equal to the sequence length of the output logits. Indices are selected in `[-100, 0, ..., config.vocab_size - 1]`. All labels set to `-100` are ignored (masked), the loss is only computed for labels in `[0, ..., config.vocab_size - 1]`. A [transformers.modeling\_outputs.CausalLMOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.CausalLMOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([SEWDConfig](/docs/transformers/v4.34.0/en/model_doc/sew-d#transformers.SEWDConfig)) and inputs. - **loss** (`torch.FloatTensor` of shape `(1,)`, _optional_, returned when `labels` is provided) — Language modeling loss (for next-token prediction). - **logits** (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). 
- **hidden\_states** (`tuple(torch.FloatTensor)`, _optional_, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. - **attentions** (`tuple(torch.FloatTensor)`, _optional_, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The [SEWDForCTC](/docs/transformers/v4.34.0/en/model_doc/sew-d#transformers.SEWDForCTC) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: ``` >>> from transformers import AutoProcessor, SEWDForCTC >>> from datasets import load_dataset >>> import torch >>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation") >>> dataset = dataset.sort("id") >>> sampling_rate = dataset.features["audio"].sampling_rate >>> processor = AutoProcessor.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h") >>> model = SEWDForCTC.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h") >>> >>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_ids = torch.argmax(logits, dim=-1) >>> >>> transcription = processor.batch_decode(predicted_ids) >>> transcription[0] 'MISTER QUILTER IS THE APOSTIL OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL' >>> inputs["labels"] = processor(text=dataset[0]["text"], return_tensors="pt").input_ids >>> >>> loss = model(**inputs).loss >>> round(loss.item(), 2) 0.21 ``` ## SEWDForSequenceClassification ### class transformers.SEWDForSequenceClassification [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/sew_d/modeling_sew_d.py#L1670) ( config ) Parameters - **config** ([SEWDConfig](/docs/transformers/v4.34.0/en/model_doc/sew-d#transformers.SEWDConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. SEWD Model with a sequence classification head on top (a linear layer over the pooled output) for tasks like SUPERB Keyword Spotting. SEW-D was proposed in [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). 
Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving etc.). This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/sew_d/modeling_sew_d.py#L1715) ( input\_values: typing.Optional\[torch.Tensor\] attention\_mask: typing.Optional\[torch.Tensor\] = None output\_attentions: typing.Optional\[bool\] = None output\_hidden\_states: typing.Optional\[bool\] = None return\_dict: typing.Optional\[bool\] = None labels: typing.Optional\[torch.Tensor\] = None ) → [transformers.modeling\_outputs.SequenceClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput) or `tuple(torch.FloatTensor)` Parameters - **input\_values** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`) — Float values of input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file into an array of type `List[float]` or a `numpy.ndarray`, _e.g._ via the soundfile library (`pip install soundfile`). To prepare the array into `input_values`, the [AutoProcessor](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoProcessor) should be used for padding and conversion into a tensor of type `torch.FloatTensor`. See [Wav2Vec2Processor.**call**()](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor.__call__) for details. - **attention\_mask** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, _optional_) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) - **output\_attentions** (`bool`, _optional_) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. - **output\_hidden\_states** (`bool`, _optional_) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. - **return\_dict** (`bool`, _optional_) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple. - **labels** (`torch.LongTensor` of shape `(batch_size,)`, _optional_) — Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If `config.num_labels > 1` a classification loss is computed (Cross-Entropy). A [transformers.modeling\_outputs.SequenceClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([SEWDConfig](/docs/transformers/v4.34.0/en/model_doc/sew-d#transformers.SEWDConfig)) and inputs. - **loss** (`torch.FloatTensor` of shape `(1,)`, _optional_, returned when `labels` is provided) — Classification (or regression if config.num\_labels==1) loss. 
- **logits** (`torch.FloatTensor` of shape `(batch_size, config.num_labels)`) — Classification (or regression if config.num\_labels==1) scores (before SoftMax). - **hidden\_states** (`tuple(torch.FloatTensor)`, _optional_, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. - **attentions** (`tuple(torch.FloatTensor)`, _optional_, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The [SEWDForSequenceClassification](/docs/transformers/v4.34.0/en/model_doc/sew-d#transformers.SEWDForSequenceClassification) forward method, overrides the `__call__` special method. Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: ``` >>> from transformers import AutoFeatureExtractor, SEWDForSequenceClassification >>> from datasets import load_dataset >>> import torch >>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation") >>> dataset = dataset.sort("id") >>> sampling_rate = dataset.features["audio"].sampling_rate >>> feature_extractor = AutoFeatureExtractor.from_pretrained("anton-l/sew-d-mid-400k-ft-keyword-spotting") >>> model = SEWDForSequenceClassification.from_pretrained("anton-l/sew-d-mid-400k-ft-keyword-spotting") >>> >>> inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_class_ids = torch.argmax(logits, dim=-1).item() >>> predicted_label = model.config.id2label[predicted_class_ids] >>> predicted_label '_unknown_' >>> >>> target_label = model.config.id2label[0] >>> inputs["labels"] = torch.tensor([model.config.label2id[target_label]]) >>> loss = model(**inputs).loss >>> round(loss.item(), 2) 3.16 ```
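All of the forward methods above accept an `attention_mask`, so batched inference follows the same pattern as the single-example snippets once the processor pads the inputs. A minimal sketch, reusing the `asapp/sew-d-tiny-100k-ft-ls100h` checkpoint from the examples above (`audio_arrays` is assumed to be a list of 1D float arrays sampled at 16 kHz):

```
>>> from transformers import AutoProcessor, SEWDForCTC
>>> import torch

>>> processor = AutoProcessor.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")
>>> model = SEWDForCTC.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")

>>> # audio_arrays: list of 1D float arrays at 16 kHz, assumed to be loaded beforehand
>>> inputs = processor(audio_arrays, sampling_rate=16_000, padding=True, return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits  # any attention_mask produced by padding is forwarded to the model

>>> predicted_ids = torch.argmax(logits, dim=-1)
>>> transcriptions = processor.batch_decode(predicted_ids)
```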
Transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_new_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_new_model&quot;},{&quot;title&quot;:&quot;How to convert a 🤗 Transformers model to TensorFlow?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_tensorflow_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_tensorflow_model&quot;},{&quot;title&quot;:&quot;How to add a pipeline to 🤗 Transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_new_pipeline&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_new_pipeline&quot;},{&quot;title&quot;:&quot;Testing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;testing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/testing&quot;},{&quot;title&quot;:&quot;Checks on a Pull Request&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pr_checks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pr_checks&quot;}]},{&quot;title&quot;:&quot;Conceptual guides&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Philosophy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;philosophy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/philosophy&quot;},{&quot;title&quot;:&quot;Glossary&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;glossary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/glossary&quot;},{&quot;title&quot;:&quot;What 🤗 Transformers can do&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;task_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/task_summary&quot;},{&quot;title&quot;:&quot;How 🤗 Transformers solve tasks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tasks_explained&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks_explained&quot;},{&quot;title&quot;:&quot;The Transformer model family&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_summary&quot;},{&quot;title&quot;:&quot;Summary of the tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tokenizer_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tokenizer_summary&quot;},{&quot;title&quot;:&quot;Attention mechanisms&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;attention&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/attention&quot;},{&quot;title&quot;:&quot;Padding and truncation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pad_truncation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pad_truncation&quot;},{&quot;title&quot;:&quot;BERTology&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;bertology&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/bertology&quot;},{&quot;title&quot;:&quot;Perplexity of fixed-length models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perplexity&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perplexity&quot;},{&quot;title&quot;:&quot;Pipelines for webserver inference&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pipeline_webserver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pipeline_webserver&quot;},{&quot;title&quot;:&quot;Model training anatomy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_memory_anatomy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_memory_anatomy&quot;}]},{&quot;title&quot;:&quot;API&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Main 
Classes&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Agents and Tools&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/agent&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/agent&quot;},{&quot;title&quot;:&quot;Auto Classes&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/auto&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/auto&quot;},{&quot;title&quot;:&quot;Callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/callback&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/callback&quot;},{&quot;title&quot;:&quot;Configuration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/configuration&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/configuration&quot;},{&quot;title&quot;:&quot;Data Collator&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/data_collator&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/data_collator&quot;},{&quot;title&quot;:&quot;Keras callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/keras_callbacks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/keras_callbacks&quot;},{&quot;title&quot;:&quot;Logging&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/logging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/logging&quot;},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/model&quot;},{&quot;title&quot;:&quot;Text Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/text_generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/text_generation&quot;},{&quot;title&quot;:&quot;ONNX&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/onnx&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/onnx&quot;},{&quot;title&quot;:&quot;Optimization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/optimizer_schedules&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/optimizer_schedules&quot;},{&quot;title&quot;:&quot;Model outputs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/output&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/output&quot;},{&quot;title&quot;:&quot;Pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/pipelines&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/pipelines&quot;},{&quot;title&quot;:&quot;Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/processors&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/processors&quot;},{&quot;title&quot;:&quot;Quantization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/quantization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/quantization&quot;},{&quot;title&quot;:&quot;Tokenizer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/tokenizer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/tokenizer&quot;},{&quot;title&quot;:&quot;Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/trainer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/trainer&quot;},{&quot;title&quot;:&quot;DeepSpeed 
Integration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/deepspeed&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/deepspeed&quot;},{&quot;title&quot;:&quot;Feature Extractor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/feature_extractor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/feature_extractor&quot;},{&quot;title&quot;:&quot;Image Processor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/image_processor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/image_processor&quot;}]},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Text models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALBERT&quot;,&quot;id&quot;:&quot;model_doc/albert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/albert&quot;},{&quot;title&quot;:&quot;BART&quot;,&quot;id&quot;:&quot;model_doc/bart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bart&quot;},{&quot;title&quot;:&quot;BARThez&quot;,&quot;id&quot;:&quot;model_doc/barthez&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/barthez&quot;},{&quot;title&quot;:&quot;BARTpho&quot;,&quot;id&quot;:&quot;model_doc/bartpho&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bartpho&quot;},{&quot;title&quot;:&quot;BERT&quot;,&quot;id&quot;:&quot;model_doc/bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert&quot;},{&quot;title&quot;:&quot;BertGeneration&quot;,&quot;id&quot;:&quot;model_doc/bert-generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-generation&quot;},{&quot;title&quot;:&quot;BertJapanese&quot;,&quot;id&quot;:&quot;model_doc/bert-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-japanese&quot;},{&quot;title&quot;:&quot;Bertweet&quot;,&quot;id&quot;:&quot;model_doc/bertweet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bertweet&quot;},{&quot;title&quot;:&quot;BigBird&quot;,&quot;id&quot;:&quot;model_doc/big_bird&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/big_bird&quot;},{&quot;title&quot;:&quot;BigBirdPegasus&quot;,&quot;id&quot;:&quot;model_doc/bigbird_pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bigbird_pegasus&quot;},{&quot;title&quot;:&quot;BioGpt&quot;,&quot;id&quot;:&quot;model_doc/biogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/biogpt&quot;},{&quot;title&quot;:&quot;Blenderbot&quot;,&quot;id&quot;:&quot;model_doc/blenderbot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot&quot;},{&quot;title&quot;:&quot;Blenderbot 
Small&quot;,&quot;id&quot;:&quot;model_doc/blenderbot-small&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot-small&quot;},{&quot;title&quot;:&quot;BLOOM&quot;,&quot;id&quot;:&quot;model_doc/bloom&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bloom&quot;},{&quot;title&quot;:&quot;BORT&quot;,&quot;id&quot;:&quot;model_doc/bort&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bort&quot;},{&quot;title&quot;:&quot;ByT5&quot;,&quot;id&quot;:&quot;model_doc/byt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/byt5&quot;},{&quot;title&quot;:&quot;CamemBERT&quot;,&quot;id&quot;:&quot;model_doc/camembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/camembert&quot;},{&quot;title&quot;:&quot;CANINE&quot;,&quot;id&quot;:&quot;model_doc/canine&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/canine&quot;},{&quot;title&quot;:&quot;CodeGen&quot;,&quot;id&quot;:&quot;model_doc/codegen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/codegen&quot;},{&quot;title&quot;:&quot;CodeLlama&quot;,&quot;id&quot;:&quot;model_doc/code_llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/code_llama&quot;},{&quot;title&quot;:&quot;ConvBERT&quot;,&quot;id&quot;:&quot;model_doc/convbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convbert&quot;},{&quot;title&quot;:&quot;CPM&quot;,&quot;id&quot;:&quot;model_doc/cpm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpm&quot;},{&quot;title&quot;:&quot;CPMANT&quot;,&quot;id&quot;:&quot;model_doc/cpmant&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpmant&quot;},{&quot;title&quot;:&quot;CTRL&quot;,&quot;id&quot;:&quot;model_doc/ctrl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ctrl&quot;},{&quot;title&quot;:&quot;DeBERTa&quot;,&quot;id&quot;:&quot;model_doc/deberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta&quot;},{&quot;title&quot;:&quot;DeBERTa-v2&quot;,&quot;id&quot;:&quot;model_doc/deberta-v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta-v2&quot;},{&quot;title&quot;:&quot;DialoGPT&quot;,&quot;id&quot;:&quot;model_doc/dialogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dialogpt&quot;},{&quot;title&quot;:&quot;DistilBERT&quot;,&quot;id&quot;:&quot;model_doc/distilbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/distilbert&quot;},{&quot;title&quot;:&quot;DPR&quot;,&quot;id&quot;:&quot;model_doc/dpr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpr&quot;},{&quot;title&quot;:&quot;ELECTRA&quot;,&quot;id&quot;:&quot;model_doc/electra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/electra&quot;},{&quot;title&quot;:&quot;Encoder Decoder 
Models&quot;,&quot;id&quot;:&quot;model_doc/encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encoder-decoder&quot;},{&quot;title&quot;:&quot;ERNIE&quot;,&quot;id&quot;:&quot;model_doc/ernie&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie&quot;},{&quot;title&quot;:&quot;ErnieM&quot;,&quot;id&quot;:&quot;model_doc/ernie_m&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie_m&quot;},{&quot;title&quot;:&quot;ESM&quot;,&quot;id&quot;:&quot;model_doc/esm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/esm&quot;},{&quot;title&quot;:&quot;Falcon&quot;,&quot;id&quot;:&quot;model_doc/falcon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/falcon&quot;},{&quot;title&quot;:&quot;FLAN-T5&quot;,&quot;id&quot;:&quot;model_doc/flan-t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-t5&quot;},{&quot;title&quot;:&quot;FLAN-UL2&quot;,&quot;id&quot;:&quot;model_doc/flan-ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-ul2&quot;},{&quot;title&quot;:&quot;FlauBERT&quot;,&quot;id&quot;:&quot;model_doc/flaubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flaubert&quot;},{&quot;title&quot;:&quot;FNet&quot;,&quot;id&quot;:&quot;model_doc/fnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fnet&quot;},{&quot;title&quot;:&quot;FSMT&quot;,&quot;id&quot;:&quot;model_doc/fsmt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fsmt&quot;},{&quot;title&quot;:&quot;Funnel Transformer&quot;,&quot;id&quot;:&quot;model_doc/funnel&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/funnel&quot;},{&quot;title&quot;:&quot;GPT&quot;,&quot;id&quot;:&quot;model_doc/openai-gpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/openai-gpt&quot;},{&quot;title&quot;:&quot;GPT Neo&quot;,&quot;id&quot;:&quot;model_doc/gpt_neo&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neo&quot;},{&quot;title&quot;:&quot;GPT NeoX&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox&quot;},{&quot;title&quot;:&quot;GPT NeoX Japanese&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox_japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese&quot;},{&quot;title&quot;:&quot;GPT-J&quot;,&quot;id&quot;:&quot;model_doc/gptj&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptj&quot;},{&quot;title&quot;:&quot;GPT2&quot;,&quot;id&quot;:&quot;model_doc/gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt2&quot;},{&quot;title&quot;:&quot;GPTBigCode&quot;,&quot;id&quot;:&quot;model_doc/gpt_bigcode&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode&quot;},{&quot;title&quot;:&quot;GPTSAN 
Japanese&quot;,&quot;id&quot;:&quot;model_doc/gptsan-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese&quot;},{&quot;title&quot;:&quot;GPTSw3&quot;,&quot;id&quot;:&quot;model_doc/gpt-sw3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt-sw3&quot;},{&quot;title&quot;:&quot;HerBERT&quot;,&quot;id&quot;:&quot;model_doc/herbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/herbert&quot;},{&quot;title&quot;:&quot;I-BERT&quot;,&quot;id&quot;:&quot;model_doc/ibert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ibert&quot;},{&quot;title&quot;:&quot;Jukebox&quot;,&quot;id&quot;:&quot;model_doc/jukebox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/jukebox&quot;},{&quot;title&quot;:&quot;LED&quot;,&quot;id&quot;:&quot;model_doc/led&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/led&quot;},{&quot;title&quot;:&quot;LLaMA&quot;,&quot;id&quot;:&quot;model_doc/llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama&quot;},{&quot;title&quot;:&quot;Llama2&quot;,&quot;id&quot;:&quot;model_doc/llama2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama2&quot;},{&quot;title&quot;:&quot;Longformer&quot;,&quot;id&quot;:&quot;model_doc/longformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longformer&quot;},{&quot;title&quot;:&quot;LongT5&quot;,&quot;id&quot;:&quot;model_doc/longt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longt5&quot;},{&quot;title&quot;:&quot;LUKE&quot;,&quot;id&quot;:&quot;model_doc/luke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/luke&quot;},{&quot;title&quot;:&quot;M2M100&quot;,&quot;id&quot;:&quot;model_doc/m2m_100&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/m2m_100&quot;},{&quot;title&quot;:&quot;MarianMT&quot;,&quot;id&quot;:&quot;model_doc/marian&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/marian&quot;},{&quot;title&quot;:&quot;MarkupLM&quot;,&quot;id&quot;:&quot;model_doc/markuplm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/markuplm&quot;},{&quot;title&quot;:&quot;MBart and 
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot;PLBart&quot;,&quot;id&quot;
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/do
cs/transformers/v4.34.0/en/model_doc/wavlm&quot;},{&quot;title&quot;:&quot;Whisper&quot;,&quot;id&quot;:&quot;model_doc/whisper&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/whisper&quot;},{&quot;title&quot;:&quot;XLS-R&quot;,&quot;id&quot;:&quot;model_doc/xls_r&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xls_r&quot;},{&quot;title&quot;:&quot;XLSR-Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/xlsr_wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2&quot;}]},{&quot;title&quot;:&quot;Multimodal models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALIGN&quot;,&quot;id&quot;:&quot;model_doc/align&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/align&quot;},{&quot;title&quot;:&quot;AltCLIP&quot;,&quot;id&quot;:&quot;model_doc/altclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/altclip&quot;},{&quot;title&quot;:&quot;BLIP&quot;,&quot;id&quot;:&quot;model_doc/blip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip&quot;},{&quot;title&quot;:&quot;BLIP-2&quot;,&quot;id&quot;:&quot;model_doc/blip-2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip-2&quot;},{&quot;title&quot;:&quot;BridgeTower&quot;,&quot;id&quot;:&quot;model_doc/bridgetower&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bridgetower&quot;},{&quot;title&quot;:&quot;BROS&quot;,&quot;id&quot;:&quot;model_doc/bros&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bros&quot;},{&quot;title&quot;:&quot;Chinese-CLIP&quot;,&quot;id&quot;:&quot;model_doc/chinese_clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/chinese_clip&quot;},{&quot;title&quot;:&quot;CLIP&quot;,&quot;id&quot;:&quot;model_doc/clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clip&quot;},{&quot;title&quot;:&quot;CLIPSeg&quot;,&quot;id&quot;:&quot;model_doc/clipseg&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clipseg&quot;},{&quot;title&quot;:&quot;Data2Vec&quot;,&quot;id&quot;:&quot;model_doc/data2vec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/data2vec&quot;},{&quot;title&quot;:&quot;DePlot&quot;,&quot;id&quot;:&quot;model_doc/deplot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deplot&quot;},{&quot;title&quot;:&quot;Donut&quot;,&quot;id&quot;:&quot;model_doc/donut&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/donut&quot;},{&quot;title&quot;:&quot;FLAVA&quot;,&quot;id&quot;:&quot;model_doc/flava&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flava&quot;},{&quot;title&quot;:&quot;GIT&quot;,&quot;id&quot;:&quot;model_doc/git&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/git&quot;},{&quot;title&quot;:&quot;GroupViT&quot;,&quot;id&quot;:&quot;model_doc/groupvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/groupvit&quot;},{&quot;title&quot;:&quot;IDEFICS&quot;,&quot;id&quot;:&quot;model_doc/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/idefics&quot;},{&quot;title&quot;:&quot;InstructBLIP&quot;,&quot;id&quot;:&quot;model_doc/instructblip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/instructblip&quot;},{&quot;title&quot;:&quot;LayoutLM&quot;,&quot;id&quot;:&quot;model_doc/layoutlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlm&quot;},{&quot;title&quot;:&qu
ot;LayoutLMV2&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv2&quot;},{&quot;title&quot;:&quot;LayoutLMV3&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv3&quot;},{&quot;title&quot;:&quot;LayoutXLM&quot;,&quot;id&quot;:&quot;model_doc/layoutxlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutxlm&quot;},{&quot;title&quot;:&quot;LiLT&quot;,&quot;id&quot;:&quot;model_doc/lilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lilt&quot;},{&quot;title&quot;:&quot;LXMERT&quot;,&quot;id&quot;:&quot;model_doc/lxmert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lxmert&quot;},{&quot;title&quot;:&quot;MatCha&quot;,&quot;id&quot;:&quot;model_doc/matcha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/matcha&quot;},{&quot;title&quot;:&quot;MGP-STR&quot;,&quot;id&quot;:&quot;model_doc/mgp-str&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mgp-str&quot;},{&quot;title&quot;:&quot;Nougat&quot;,&quot;id&quot;:&quot;model_doc/nougat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nougat&quot;},{&quot;title&quot;:&quot;OneFormer&quot;,&quot;id&quot;:&quot;model_doc/oneformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/oneformer&quot;},{&quot;title&quot;:&quot;OWL-ViT&quot;,&quot;id&quot;:&quot;model_doc/owlvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/owlvit&quot;},{&quot;title&quot;:&quot;Perceiver&quot;,&quot;id&quot;:&quot;model_doc/perceiver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/perceiver&quot;},{&quot;title&quot;:&quot;Pix2Struct&quot;,&quot;id&quot;:&quot;model_doc/pix2struct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pix2struct&quot;},{&quot;title&quot;:&quot;Segment Anything&quot;,&quot;id&quot;:&quot;model_doc/sam&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sam&quot;},{&quot;title&quot;:&quot;Speech Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/speech-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder&quot;},{&quot;title&quot;:&quot;TAPAS&quot;,&quot;id&quot;:&quot;model_doc/tapas&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapas&quot;},{&quot;title&quot;:&quot;TrOCR&quot;,&quot;id&quot;:&quot;model_doc/trocr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trocr&quot;},{&quot;title&quot;:&quot;TVLT&quot;,&quot;id&quot;:&quot;model_doc/tvlt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tvlt&quot;},{&quot;title&quot;:&quot;ViLT&quot;,&quot;id&quot;:&quot;model_doc/vilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vilt&quot;},{&quot;title&quot;:&quot;Vision Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/vision-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder&quot;},{&quot;title&quot;:&quot;Vision Text Dual 
Encoder&quot;,&quot;id&quot;:&quot;model_doc/vision-text-dual-encoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder&quot;},{&quot;title&quot;:&quot;VisualBERT&quot;,&quot;id&quot;:&quot;model_doc/visual_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/visual_bert&quot;},{&quot;title&quot;:&quot;X-CLIP&quot;,&quot;id&quot;:&quot;model_doc/xclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xclip&quot;}]},{&quot;title&quot;:&quot;Reinforcement learning models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Decision Transformer&quot;,&quot;id&quot;:&quot;model_doc/decision_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/decision_transformer&quot;},{&quot;title&quot;:&quot;Trajectory Transformer&quot;,&quot;id&quot;:&quot;model_doc/trajectory_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer&quot;}]},{&quot;title&quot;:&quot;Time series models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Autoformer&quot;,&quot;id&quot;:&quot;model_doc/autoformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/autoformer&quot;},{&quot;title&quot;:&quot;Informer&quot;,&quot;id&quot;:&quot;model_doc/informer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/informer&quot;},{&quot;title&quot;:&quot;Time Series Transformer&quot;,&quot;id&quot;:&quot;model_doc/time_series_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/time_series_transformer&quot;}]},{&quot;title&quot;:&quot;Graph models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Graphormer&quot;,&quot;id&quot;:&quot;model_doc/graphormer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/graphormer&quot;}]}]},{&quot;title&quot;:&quot;Internal Helpers&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Custom Layers and Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/modeling_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/modeling_utils&quot;},{&quot;title&quot;:&quot;Utilities for pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/pipelines_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/pipelines_utils&quot;},{&quot;title&quot;:&quot;Utilities for Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/tokenization_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/tokenization_utils&quot;},{&quot;title&quot;:&quot;Utilities for Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/trainer_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/trainer_utils&quot;},{&quot;title&quot;:&quot;Utilities for Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/generation_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/generation_utils&quot;},{&quot;title&quot;:&quot;Utilities for Image Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/image_processing_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/image_processing_utils&quot;},{&quot;title&quot;:&quot;Utilities for Audio 
href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/Docstring.4e7352e2.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/globals.7f7f1b26.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/IconCopyLink.bedaa44d.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/CodeBlock.73e038be.js"> <link rel="modulepreload" href="/docs/transformers/v4.34.0/en/_app/immutable/chunks/ExampleCodeBlock.872b014d.js"><!-- HEAD_svelte-1phssyn_START --><meta name="hf:doc:metadata" content="{&quot;local&quot;:&quot;sewd&quot;,&quot;sections&quot;:[{&quot;local&quot;:&quot;overview&quot;,&quot;title&quot;:&quot;Overview&quot;},{&quot;local&quot;:&quot;documentation-resources&quot;,&quot;title&quot;:&quot;Documentation resources&quot;},{&quot;local&quot;:&quot;transformers.SEWDConfig&quot;,&quot;title&quot;:&quot;SEWDConfig&quot;},{&quot;local&quot;:&quot;transformers.SEWDModel&quot;,&quot;title&quot;:&quot;SEWDModel&quot;},{&quot;local&quot;:&quot;transformers.SEWDForCTC&quot;,&quot;title&quot;:&quot;SEWDForCTC&quot;},{&quot;local&quot;:&quot;transformers.SEWDForSequenceClassification&quot;,&quot;title&quot;:&quot;SEWDForSequenceClassification&quot;}],&quot;title&quot;:&quot;SEW-D&quot;}"><!-- HEAD_svelte-1phssyn_END --> <p></p> <h1 class="relative group"><a id="sewd" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#sewd"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1q5mph7">SEW-D</span></h1> <h2 class="relative group"><a id="overview" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#overview"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1jsw1pg">Overview</span></h2> <p data-svelte-h="svelte-cpeez1">SEW-D (Squeezed and Efficient Wav2Vec with Disentangled attention) 
was proposed in <a href="https://arxiv.org/abs/2109.06870" rel="nofollow">Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition</a> by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.</p> <p data-svelte-h="svelte-vfdo9a">The abstract from the paper is the following:</p> <p data-svelte-h="svelte-119whrz"><em>This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.</em></p> <p data-svelte-h="svelte-axv494">Tips:</p> <ul data-svelte-h="svelte-6ftb0k"><li>SEW-D is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.</li> <li>SEWDForCTC is fine-tuned using connectionist temporal classification (CTC) so the model output has to be decoded using <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2CTCTokenizer">Wav2Vec2CTCTokenizer</a>.</li></ul> <p data-svelte-h="svelte-1txcwhb">This model was contributed by <a href="https://huggingface.co/anton-l" rel="nofollow">anton-l</a>.</p> <h2 class="relative group"><a id="documentation-resources" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#documentation-resources"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-n3f0j0">Documentation resources</span></h2> <ul data-svelte-h="svelte-11qmliz"><li><a href="../tasks/audio_classification">Audio classification task guide</a></li> <li><a href="../tasks/asr">Automatic speech recognition task guide</a></li></ul> <h2 class="relative group"><a id="transformers.SEWDConfig" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDConfig"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 
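As noted in the tips above, a CTC fine-tuned SEW-D checkpoint is run on the raw waveform and its output is decoded with the Wav2Vec2 tokenizer. The following is a minimal inference sketch rather than an official recipe: the checkpoint name is only an example of a CTC fine-tuned SEW-D model (any such checkpoint with processor files can be substituted), and the one-second silent waveform stands in for real 16 kHz audio.

```python
import torch
from transformers import Wav2Vec2Processor, SEWDForCTC

# Example checkpoint; substitute any SEW-D model fine-tuned with CTC.
checkpoint = "asapp/sew-d-tiny-100k-ft-ls100h"
processor = Wav2Vec2Processor.from_pretrained(checkpoint)
model = SEWDForCTC.from_pretrained(checkpoint)

# SEW-D expects the raw 16 kHz waveform as a float array (here: one second of silence).
raw_speech = [0.0] * 16000
inputs = processor(raw_speech, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# CTC decoding: take the argmax over the vocabulary, then let the tokenizer
# collapse repeated tokens and strip the blank token.
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
print(transcription)
```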
## SEWDConfig

### class transformers.SEWDConfig

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/sew_d/configuration_sew_d.py#L32)

( vocab_size = 32, hidden_size = 768, num_hidden_layers = 12, num_attention_heads = 12, intermediate_size = 3072, squeeze_factor = 2, max_position_embeddings = 512, position_buckets = 256, share_att_key = True, relative_attention = True, pos_att_type = ('p2c', 'c2p'), norm_rel_ebd = 'layer_norm', hidden_act = 'gelu_python', hidden_dropout = 0.1, activation_dropout = 0.1, attention_dropout = 0.1, feat_proj_dropout = 0.0, final_dropout = 0.1, initializer_range = 0.02, layer_norm_eps = 1e-07, feature_layer_norm_eps = 1e-05, feat_extract_norm = 'group', feat_extract_activation = 'gelu', conv_dim = (64, 128, 128, 128, 128, 256, 256, 256, 256, 512, 512, 512, 512), conv_stride = (5, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1), conv_kernel = (10, 3, 1, 3, 1, 3, 1, 3, 1, 2, 1, 2, 1), conv_bias = False, num_conv_pos_embeddings = 128, num_conv_pos_embedding_groups = 16, apply_spec_augment = True, mask_time_prob = 0.05, mask_time_length = 10, mask_time_min_masks = 2, mask_feature_prob = 0.0, mask_feature_length = 10, mask_feature_min_masks = 0, ctc_loss_reduction = 'mean', ctc_zero_infinity = False, use_weighted_layer_sum = False, classifier_proj_size = 256, pad_token_id = 0, bos_token_id = 1, eos_token_id = 2, **kwargs )
Parameters

- **vocab_size** (`int`, *optional*, defaults to 32) — Vocabulary size of the SEW-D model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling `SEWD`.
- **hidden_size** (`int`, *optional*, defaults to 768) — Dimensionality of the encoder layers and the pooler layer.
- **num_hidden_layers** (`int`, *optional*, defaults to 12) — Number of hidden layers in the Transformer encoder.
- **num_attention_heads** (`int`, *optional*, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder.
- **intermediate_size** (`int`, *optional*, defaults to 3072) — Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
- **squeeze_factor** (`int`, *optional*, defaults to 2) — Sequence length downsampling factor after the encoder and upsampling factor after the transformer.
- **max_position_embeddings** (`int`, *optional*, defaults to 512) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
- **position_buckets** (`int`, *optional*, defaults to 256) — The maximum size of relative position embeddings.
- **share_att_key** (`bool`, *optional*, defaults to `True`) — Whether to share attention key with c2p and p2c.
- **relative_attention** (`bool`, *optional*, defaults to `True`) — Whether to use relative position encoding.
- **pos_att_type** (`Tuple[str]`, *optional*, defaults to `("p2c", "c2p")`) — The type of relative position attention; it can be a combination of `("p2c", "c2p")`, e.g. `("p2c")` or `("p2c", "c2p")`.
- **norm_rel_ebd** (`str`, *optional*, defaults to `"layer_norm"`) — Whether to use layer norm in the relative embedding (`"layer_norm"` if yes).
- **hidden_act** (`str` or `function`, *optional*, defaults to `"gelu_python"`) — The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"`, `"gelu_python"` and `"gelu_new"` are supported.
- **hidden_dropout** (`float`, *optional*, defaults to 0.1) — Deprecated. Not used by the model and will be removed in a future version.
- **activation_dropout** (`float`, *optional*, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
- **attention_dropout** (`float`, *optional*, defaults to 0.1) — The dropout ratio for the attention probabilities.
- **final_dropout** (`float`, *optional*, defaults to 0.1) — The dropout probability for the final projection layer of [SEWDForCTC](/docs/transformers/v4.34.0/en/model_doc/sew-d#transformers.SEWDForCTC).
- **initializer_range** (`float`, *optional*, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- **layer_norm_eps** (`float`, *optional*, defaults to 1e-7) — The epsilon used by the layer normalization layers in the transformer encoder.
- **feature_layer_norm_eps** (`float`, *optional*, defaults to 1e-5) — The epsilon used by the layer normalization after the feature encoder.
- **feat_extract_norm** (`str`, *optional*, defaults to `"group"`) — The norm to be applied to 1D convolutional layers in the feature encoder. One of `"group"` for group normalization of only the first 1D convolutional layer or `"layer"` for layer normalization of all 1D convolutional layers.
- **feat_proj_dropout** (`float`, *optional*, defaults to 0.0) — The dropout probability for the output of the feature encoder.
- **feat_extract_activation** (`str`, *optional*, defaults to `"gelu"`) — The non-linear activation function (function or string) in the 1D convolutional layers of the feature extractor. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported.
- **conv_dim** (`Tuple[int]` or `List[int]`, *optional*, defaults to `(64, 128, 128, 128, 128, 256, 256, 256, 256, 512, 512, 512, 512)`) — A tuple of integers defining the number of input and output channels of each 1D convolutional layer in the feature encoder. The length of *conv_dim* defines the number of 1D convolutional layers.
- **conv_stride** (`Tuple[int]` or `List[int]`, *optional*, defaults to `(5, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1)`) — A tuple of integers defining the stride of each 1D convolutional layer in the feature encoder. The length of *conv_stride* defines the number of convolutional layers and has to match the length of *conv_dim*.
- **conv_kernel** (`Tuple[int]` or `List[int]`, *optional*, defaults to `(10, 3, 1, 3, 1, 3, 1, 3, 1, 2, 1, 2, 1)`) — A tuple of integers defining the kernel size of each 1D convolutional layer in the feature encoder. The length of *conv_kernel* defines the number of convolutional layers and has to match the length of *conv_dim*.
- **conv_bias** (`bool`, *optional*, defaults to `False`) — Whether the 1D convolutional layers have a bias.
- **num_conv_pos_embeddings** (`int`, *optional*, defaults to 128) — Number of convolutional positional embeddings. Defines the kernel size of the 1D convolutional positional embeddings layer.
- **num_conv_pos_embedding_groups** (`int`, *optional*, defaults to 16) — Number of groups of the 1D convolutional positional embeddings layer.
- **apply_spec_augment** (`bool`, *optional*, defaults to `True`) — Whether to apply *SpecAugment* data augmentation to the outputs of the feature encoder. For reference see [SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition](https://arxiv.org/abs/1904.08779).
- **mask_time_prob** (`float`, *optional*, defaults to 0.05) — Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking procedure generates `mask_time_prob * len(time_axis) / mask_time_length` independent masks over the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector span to be masked, *mask_time_prob* should be `prob_vector_start * mask_time_length`. Note that overlap may decrease the actual percentage of masked vectors. This is only relevant if `apply_spec_augment is True`.
- **mask_time_length** (`int`, *optional*, defaults to 10) — Length of vector span along the time axis.
- **mask_time_min_masks** (`int`, *optional*, defaults to 2) — The minimum number of masks of length `mask_time_length` generated along the time axis at each time step, irrespective of `mask_time_prob`. Only relevant if `mask_time_prob * len(time_axis) / mask_time_length < mask_time_min_masks`.
- **mask_feature_prob** (`float`, *optional*, defaults to 0.0) — Percentage (between 0 and 1) of all feature vectors along the feature axis which will be masked. The masking procedure generates `mask_feature_prob * len(feature_axis) / mask_feature_length` independent masks over the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector span to be masked, *mask_feature_prob* should be `prob_vector_start * mask_feature_length`. Note that overlap may decrease the actual percentage of masked vectors. This is only relevant if `apply_spec_augment is True`.
- **mask_feature_length** (`int`, *optional*, defaults to 10) — Length of vector span along the feature axis.
- **mask_feature_min_masks** (`int`, *optional*, defaults to 0) — The minimum number of masks of length `mask_feature_length` generated along the feature axis at each time step, irrespective of `mask_feature_prob`. Only relevant if `mask_feature_prob * len(feature_axis) / mask_feature_length < mask_feature_min_masks`.
- **diversity_loss_weight** (`float`, *optional*, defaults to 0.1) — The weight of the codebook diversity loss component.
- **ctc_loss_reduction** (`str`, *optional*, defaults to `"sum"`) — Specifies the reduction to apply to the output of `torch.nn.CTCLoss`. Only relevant when training an instance of [SEWDForCTC](/docs/transformers/v4.34.0/en/model_doc/sew-d#transformers.SEWDForCTC).
- **ctc_zero_infinity** (`bool`, *optional*, defaults to `False`) — Whether to zero infinite losses and the associated gradients of `torch.nn.CTCLoss`. Infinite losses mainly occur when the inputs are too short to be aligned to the targets. Only relevant when training an instance of [SEWDForCTC](/docs/transformers/v4.34.0/en/model_doc/sew-d#transformers.SEWDForCTC).
- **use_weighted_layer_sum** (`bool`, *optional*, defaults to `False`) — Whether to use a weighted average of layer outputs with learned weights. Only relevant when using an instance of [Wav2Vec2ForSequenceClassification](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2ForSequenceClassification).
- **classifier_proj_size** (`int`, *optional*, defaults to 256) — Dimensionality of the projection before token mean-pooling for classification.

This is the configuration class to store the configuration of a [SEWDModel](/docs/transformers/v4.34.0/en/model_doc/sew-d#transformers.SEWDModel). It is used to instantiate a SEW-D model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the SEW-D [asapp/sew-d-tiny-100k](https://huggingface.co/asapp/sew-d-tiny-100k) architecture.

Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs.
Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information.

Example:

```python
>>> from transformers import SEWDConfig, SEWDModel

>>> # Initializing a SEW-D asapp/sew-d-tiny-100k style configuration
>>> configuration = SEWDConfig()

>>> # Initializing a model (with random weights) from the asapp/sew-d-tiny-100k style configuration
>>> model = SEWDModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
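Individual arguments can also be overridden when the configuration is created, for example to adjust the feature encoder or the SpecAugment masking described above. The following is a minimal sketch; the values are illustrative only, not recommended settings:

```python
>>> from transformers import SEWDConfig, SEWDModel

>>> # Illustrative values only; tune the masking probabilities to your dataset
>>> configuration = SEWDConfig(
...     apply_spec_augment=True,
...     mask_time_prob=0.1,  # mask roughly 10% of time steps during training
...     mask_time_length=10,
...     conv_bias=True,
... )
>>> model = SEWDModel(configuration)
```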
#### to_dict

Serializes this instance to a Python dictionary.
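For instance, the serialized dictionary can be inspected directly. A minimal sketch, assuming the default configuration (where `mask_time_prob` defaults to 0.05 as documented above):

```python
>>> from transformers import SEWDConfig

>>> # to_dict() returns a plain Python dictionary of all configuration fields
>>> config_dict = SEWDConfig().to_dict()
>>> config_dict["mask_time_prob"]
0.05
```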
## SEWDModel

### class transformers.SEWDModel

( config: SEWDConfig )

Parameters

- **config** ([SEWDConfig](/docs/transformers/v4.34.0/en/model_doc/sew-d#transformers.SEWDConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

The bare SEW-D Model transformer outputting raw hidden-states without any specific head on top. SEW-D was proposed in [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).

This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

#### forward

( input_values: typing.Optional[torch.Tensor], attention_mask: typing.Optional[torch.Tensor] = None, mask_time_indices: typing.Optional[torch.FloatTensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → [transformers.modeling_outputs.BaseModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutput) or `tuple(torch.FloatTensor)`
Parameters

- **input_values** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`) — Float values of input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip install soundfile`). To prepare the array into `input_values`, the [AutoProcessor](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoProcessor) should be used for padding and conversion into a tensor of type `torch.FloatTensor`. See [`Wav2Vec2Processor.__call__()`](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor.__call__) for details.
- **attention_mask** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.

  [What are attention masks?](../glossary#attention-mask)
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.

Returns

[transformers.modeling_outputs.BaseModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutput) or `tuple(torch.FloatTensor)`
A [transformers.modeling_outputs.BaseModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.BaseModelOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([SEWDConfig](/docs/transformers/v4.34.0/en/model_doc/sew-d#transformers.SEWDConfig)) and inputs.

- **last_hidden_state** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`) — Sequence of hidden-states at the output of the last layer of the model.
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The [SEWDModel](/docs/transformers/v4.34.0/en/model_doc/sew-d#transformers.SEWDModel) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:

```python
>>> from transformers import AutoProcessor, SEWDModel
>>> import torch
>>> from datasets import load_dataset

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> processor = AutoProcessor.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")
>>> model = SEWDModel.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")

>>> # audio file is decoded on the fly
>>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> last_hidden_states = outputs.last_hidden_state
>>> list(last_hidden_states.shape)
[1, 292, 384]
```
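As noted in the `input_values` description above, the processor can also prepare inputs from a local audio file via the soundfile library. The following is a minimal sketch, assuming a 16 kHz mono recording named `sample.wav` exists (the file name is hypothetical):

```python
>>> import soundfile as sf
>>> import torch
>>> from transformers import AutoProcessor, SEWDModel

>>> processor = AutoProcessor.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")
>>> model = SEWDModel.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")

>>> # load the raw waveform as a 1D float array together with its sampling rate
>>> speech, sampling_rate = sf.read("sample.wav")
>>> inputs = processor(speech, sampling_rate=sampling_rate, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> last_hidden_state = outputs.last_hidden_state  # shape (batch_size, sequence_length, hidden_size)
```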
## SEWDForCTC

### class transformers.SEWDForCTC
[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/sew_d/modeling_sew_d.py#L1510)

`( config, target_lang: typing.Optional[str] = None )`

Parameters

- **config** ([SEWDConfig](/docs/transformers/v4.34.0/en/model_doc/sew-d#transformers.SEWDConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.
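As noted in the `config` description, building the model from a configuration object initializes random weights, while pretrained weights are loaded with `from_pretrained()`. A minimal sketch of the two paths (the checkpoint name is the one used in the examples below):

```python
>>> from transformers import SEWDConfig, SEWDForCTC

>>> # instantiating from a config gives a randomly initialized model (no weights are loaded)
>>> config = SEWDConfig()
>>> random_model = SEWDForCTC(config)

>>> # to load pretrained weights, instantiate from a checkpoint instead
>>> pretrained_model = SEWDForCTC.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")
```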
SEW-D Model with a `language modeling` head on top for Connectionist Temporal Classification (CTC).

SEW-D was proposed in [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).

This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/sew_d/modeling_sew_d.py#L1582" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_values<span class="opacity-60">: typing.Optional[torch.Tensor]</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span> </span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">labels<span class="opacity-60">: typing.Optional[torch.Tensor] = None</span></span> </span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><!-- HTML_TAG_START --><span><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.CausalLMOutput">transformers.modeling_outputs.CausalLMOutput</a> or <code>tuple(torch.FloatTensor)</code></span><!-- HTML_TAG_END --></span></p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SEWDForCTC.forward.input_values" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDForCTC.forward.input_values"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 
- **input_values** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`) — Float values of the input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip install soundfile`). To prepare the array into `input_values`, the [AutoProcessor](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoProcessor) should be used for padding and conversion into a tensor of type `torch.FloatTensor`. See [Wav2Vec2Processor.__call__()](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor.__call__) for details.
- **attention_mask** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) A padding sketch is shown after this parameter list.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.
- **labels** (`torch.LongTensor` of shape `(batch_size, target_length)`, *optional*) — Labels for connectionist temporal classification. Note that `target_length` has to be smaller or equal to the sequence length of the output logits. Indices are selected in `[-100, 0, ..., config.vocab_size - 1]`. All labels set to `-100` are ignored (masked); the loss is only computed for labels in `[0, ..., config.vocab_size - 1]`.
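The padding sketch referenced in the `attention_mask` description above: a minimal, hypothetical example of batching two clips of different lengths, where the processor pads the shorter clip and returns the matching mask. The random arrays merely stand in for real 16 kHz audio.

```python
>>> import numpy as np
>>> from transformers import AutoProcessor

>>> processor = AutoProcessor.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")

>>> # two mono clips of different lengths; dummy data standing in for real audio at 16 kHz
>>> short_clip = np.random.randn(8_000).astype(np.float32)
>>> long_clip = np.random.randn(16_000).astype(np.float32)

>>> batch = processor(
...     [short_clip, long_clip],
...     sampling_rate=16_000,
...     padding=True,
...     return_attention_mask=True,
...     return_tensors="pt",
... )

>>> # input_values is padded to the longest clip; attention_mask marks real samples (1) vs. padding (0)
>>> list(batch["input_values"].shape), list(batch["attention_mask"].shape)
([2, 16000], [2, 16000])
```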
Returns: [transformers.modeling_outputs.CausalLMOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.CausalLMOutput) or `tuple(torch.FloatTensor)`

A [transformers.modeling_outputs.CausalLMOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.CausalLMOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([SEWDConfig](/docs/transformers/v4.34.0/en/model_doc/sew-d#transformers.SEWDConfig)) and inputs.

- **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) — Language modeling loss (for next-token prediction).
- **logits** (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The [SEWDForCTC](/docs/transformers/v4.34.0/en/model_doc/sew-d#transformers.SEWDForCTC) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
rounded-md"><a id="transformers.SEWDForCTC.forward.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDForCTC.forward.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-11lpom8">Example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoProcessor, SEWDForCTC <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> datasets <span class="hljs-keyword">import</span> load_dataset <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span>dataset = load_dataset(<span class="hljs-string">"hf-internal-testing/librispeech_asr_demo"</span>, <span class="hljs-string">"clean"</span>, split=<span class="hljs-string">"validation"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>dataset = dataset.sort(<span class="hljs-string">"id"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>sampling_rate = dataset.features[<span class="hljs-string">"audio"</span>].sampling_rate <span class="hljs-meta">&gt;&gt;&gt; </span>processor = AutoProcessor.from_pretrained(<span class="hljs-string">"asapp/sew-d-tiny-100k-ft-ls100h"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = SEWDForCTC.from_pretrained(<span class="hljs-string">"asapp/sew-d-tiny-100k-ft-ls100h"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span 
class="hljs-comment"># audio file is decoded on the fly</span> <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = processor(dataset[<span class="hljs-number">0</span>][<span class="hljs-string">"audio"</span>][<span class="hljs-string">"array"</span>], sampling_rate=sampling_rate, return_tensors=<span class="hljs-string">"pt"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">with</span> torch.no_grad(): <span class="hljs-meta">... </span> logits = model(**inputs).logits <span class="hljs-meta">&gt;&gt;&gt; </span>predicted_ids = torch.argmax(logits, dim=-<span class="hljs-number">1</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># transcribe speech</span> <span class="hljs-meta">&gt;&gt;&gt; </span>transcription = processor.batch_decode(predicted_ids) <span class="hljs-meta">&gt;&gt;&gt; </span>transcription[<span class="hljs-number">0</span>] <span class="hljs-string">'MISTER QUILTER IS THE APOSTIL OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL'</span> <span class="hljs-meta">&gt;&gt;&gt; </span>inputs[<span class="hljs-string">"labels"</span>] = processor(text=dataset[<span class="hljs-number">0</span>][<span class="hljs-string">"text"</span>], return_tensors=<span class="hljs-string">"pt"</span>).input_ids <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># compute loss</span> <span class="hljs-meta">&gt;&gt;&gt; </span>loss = model(**inputs).loss <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-built_in">round</span>(loss.item(), <span class="hljs-number">2</span>) <span class="hljs-number">0.21</span><!-- HTML_TAG_END --></pre></div></div></div></div> <h2 class="relative group"><a id="transformers.SEWDForSequenceClassification" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SEWDForSequenceClassification"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1gy1jow">SEWDForSequenceClassification</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.SEWDForSequenceClassification"><!-- HTML_TAG_START --><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" 
[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/sew_d/modeling_sew_d.py#L1670)

`( config )`

Parameters
- **config** ([SEWDConfig](/docs/transformers/v4.34.0/en/model_doc/sew-d#transformers.SEWDConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

SEW-D Model with a sequence classification head on top (a linear layer over the pooled output) for tasks like SUPERB Keyword Spotting.

SEW-D was proposed in [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).

This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
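The classification head produces one logit per class in the checkpoint's label mapping, which is exposed on the model configuration. A small sketch of inspecting that mapping, using the keyword-spotting checkpoint from the example further below (the exact label names and count depend on the checkpoint):

```python
>>> from transformers import SEWDForSequenceClassification

>>> model = SEWDForSequenceClassification.from_pretrained("anton-l/sew-d-mid-400k-ft-keyword-spotting")

>>> # the classification head has one logit per entry in the checkpoint's label mapping
>>> model.config.num_labels == len(model.config.id2label)
True

>>> # id2label maps class indices to the keyword names predicted by this checkpoint
>>> labels = list(model.config.id2label.values())
```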
#### forward

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/sew_d/modeling_sew_d.py#L1715)
`( input_values: typing.Optional[torch.Tensor], attention_mask: typing.Optional[torch.Tensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None, labels: typing.Optional[torch.Tensor] = None )` → [transformers.modeling_outputs.SequenceClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput) or `tuple(torch.FloatTensor)`

Parameters
- **input_values** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`) — Float values of the input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip install soundfile`). To prepare the array into `input_values`, the [AutoProcessor](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoProcessor) should be used for padding and conversion into a tensor of type `torch.FloatTensor`. See [Wav2Vec2Processor.__call__()](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor.__call__) for details.
- **attention_mask** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask)
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.
- **labels** (`torch.LongTensor` of shape `(batch_size,)`, *optional*) — Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss); if `config.num_labels > 1` a classification loss is computed (Cross-Entropy).

Returns: [transformers.modeling_outputs.SequenceClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput) or `tuple(torch.FloatTensor)`

A [transformers.modeling_outputs.SequenceClassifierOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([SEWDConfig](/docs/transformers/v4.34.0/en/model_doc/sew-d#transformers.SEWDConfig)) and inputs.

- **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) — Classification (or regression if `config.num_labels==1`) loss.
- **logits** (`torch.FloatTensor` of shape `(batch_size, config.num_labels)`) — Classification (or regression if `config.num_labels==1`) scores (before SoftMax).
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The [SEWDForSequenceClassification](/docs/transformers/v4.34.0/en/model_doc/sew-d#transformers.SEWDForSequenceClassification) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:
class="hljs-keyword">import</span> AutoFeatureExtractor, SEWDForSequenceClassification <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> datasets <span class="hljs-keyword">import</span> load_dataset <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span>dataset = load_dataset(<span class="hljs-string">"hf-internal-testing/librispeech_asr_demo"</span>, <span class="hljs-string">"clean"</span>, split=<span class="hljs-string">"validation"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>dataset = dataset.sort(<span class="hljs-string">"id"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>sampling_rate = dataset.features[<span class="hljs-string">"audio"</span>].sampling_rate <span class="hljs-meta">&gt;&gt;&gt; </span>feature_extractor = AutoFeatureExtractor.from_pretrained(<span class="hljs-string">"anton-l/sew-d-mid-400k-ft-keyword-spotting"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = SEWDForSequenceClassification.from_pretrained(<span class="hljs-string">"anton-l/sew-d-mid-400k-ft-keyword-spotting"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># audio file is decoded on the fly</span> <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = feature_extractor(dataset[<span class="hljs-number">0</span>][<span class="hljs-string">"audio"</span>][<span class="hljs-string">"array"</span>], sampling_rate=sampling_rate, return_tensors=<span class="hljs-string">"pt"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">with</span> torch.no_grad(): <span class="hljs-meta">... </span> logits = model(**inputs).logits <span class="hljs-meta">&gt;&gt;&gt; </span>predicted_class_ids = torch.argmax(logits, dim=-<span class="hljs-number">1</span>).item() <span class="hljs-meta">&gt;&gt;&gt; </span>predicted_label = model.config.id2label[predicted_class_ids] <span class="hljs-meta">&gt;&gt;&gt; </span>predicted_label <span class="hljs-string">'_unknown_'</span> <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># compute loss - target_label is e.g. 
"down"</span> <span class="hljs-meta">&gt;&gt;&gt; </span>target_label = model.config.id2label[<span class="hljs-number">0</span>] <span class="hljs-meta">&gt;&gt;&gt; </span>inputs[<span class="hljs-string">"labels"</span>] = torch.tensor([model.config.label2id[target_label]]) <span class="hljs-meta">&gt;&gt;&gt; </span>loss = model(**inputs).loss <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-built_in">round</span>(loss.item(), <span class="hljs-number">2</span>) <span class="hljs-number">3.16</span><!-- HTML_TAG_END --></pre></div></div></div></div> <p></p> <script> { __sveltekit_1yybmhh = { assets: "/docs/transformers/v4.34.0/en", base: "/docs/transformers/v4.34.0/en", env: {} }; const element = document.currentScript.parentElement; const data = [null,null]; Promise.all([ import("/docs/transformers/v4.34.0/en/_app/immutable/entry/start.c2db227a.js"), import("/docs/transformers/v4.34.0/en/_app/immutable/entry/app.879d9b87.js") ]).then(([kit, app]) => { kit.start(app, element, { node_ids: [0, 228], data, form: null, error: null }); }); } </script> <!-- HTML_TAG_END --></div> <div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/sew" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>SEW</a> <a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/speech_to_text" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">Speech2Text<span class="ml-2 translate-y-px">→</span></a></div></div></div> <div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" data-props="{&quot;chapter&quot;:{&quot;title&quot;:&quot;SEW-D&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;sewd&quot;,&quot;url&quot;:&quot;#sewd&quot;,&quot;sections&quot;:[{&quot;title&quot;:&quot;Overview&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;overview&quot;,&quot;url&quot;:&quot;#overview&quot;},{&quot;title&quot;:&quot;Documentation resources&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;documentation-resources&quot;,&quot;url&quot;:&quot;#documentation-resources&quot;},{&quot;title&quot;:&quot;SEWDConfig&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.SEWDConfig&quot;,&quot;url&quot;:&quot;#transformers.SEWDConfig&quot;},{&quot;title&quot;:&quot;SEWDModel&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.SEWDModel&quot;,&quot;url&quot;:&quot;#transformers.SEWDModel&quot;},{&quot;title&quot;:&quot;SEWDForCTC&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.SEWDForCTC&quot;,&quot;url&quot;:&quot;#transformers.SEWDForCTC&quot;},{&quot;title&quot;:&quot;SEWDForSequenceClassification&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.SEWDForSequenceClassification&quot;,&quot;url&quot;:&quot;#transformers.SEWDForSequenceClassification&quot;}]}}" data-target="SubSideMenu"><nav class="hidden h-screen w-[270px] flex-none flex-col space-y-3 overflow-y-auto break-words border-l pt-24 pl-6 pr-10 pb-16 text-sm lg:flex 2xl:w-[305px]"><a href="#sewd" class=" text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-sewd">SE<wbr>W-D</a> <a href="#overview" class="pl-4 text-gray-400 transform hover:translate-x-px 
2023-10-05T13:33:50.629Z
Speech Encoder Decoder Models
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderConfig
# Speech Encoder Decoder Models

The [SpeechEncoderDecoderModel](/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderModel) can be used to initialize a speech-to-text model with any pretrained speech autoencoding model as the encoder (_e.g._ [Wav2Vec2](wav2vec2), [Hubert](hubert)) and any pretrained autoregressive model as the decoder.

The effectiveness of initializing speech-sequence-to-text-sequence models with pretrained checkpoints for speech recognition and speech translation has been demonstrated, for example, in [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau.

An example of how to use a [SpeechEncoderDecoderModel](/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderModel) for inference can be seen in [Speech2Text2](speech_to_text_2).

## Randomly initializing `SpeechEncoderDecoderModel` from model configurations.

[SpeechEncoderDecoderModel](/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderModel) can be randomly initialized from an encoder and a decoder config. In the following example, we show how to do this using the default [Wav2Vec2Model](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Model) configuration for the encoder and the default `BertForCausalLM` configuration for the decoder.

```
>>> from transformers import BertConfig, Wav2Vec2Config, SpeechEncoderDecoderConfig, SpeechEncoderDecoderModel

>>> config_encoder = Wav2Vec2Config()
>>> config_decoder = BertConfig()

>>> config = SpeechEncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)
>>> model = SpeechEncoderDecoderModel(config=config)
```

## Initializing `SpeechEncoderDecoderModel` from a pretrained encoder and a pretrained decoder.

[SpeechEncoderDecoderModel](/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderModel) can be initialized from a pretrained encoder checkpoint and a pretrained decoder checkpoint. Note that any pretrained Transformer-based speech model, _e.g._ [Wav2Vec2](wav2vec2) or [Hubert](hubert), can serve as the encoder, while pretrained auto-encoding models (_e.g._ BERT), pretrained causal language models (_e.g._ GPT2), and the pretrained decoder part of sequence-to-sequence models (_e.g._ the decoder of BART) can all be used as the decoder. Depending on which architecture you choose as the decoder, the cross-attention layers might be randomly initialized.

Initializing [SpeechEncoderDecoderModel](/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderModel) from a pretrained encoder and decoder checkpoint requires the model to be fine-tuned on a downstream task, as has been shown in [the _Warm-starting-encoder-decoder blog post_](https://huggingface.co/blog/warm-starting-encoder-decoder). To do so, the `SpeechEncoderDecoderModel` class provides a [SpeechEncoderDecoderModel.from\_encoder\_decoder\_pretrained()](/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderModel.from_encoder_decoder_pretrained) method.

```
>>> from transformers import SpeechEncoderDecoderModel

>>> model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(
...     "facebook/hubert-large-ll60k", "bert-base-uncased"
... )
```
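For decoders such as BERT or GPT2, the newly added cross-attention layers are the randomly initialized part, which is why fine-tuning is required. As a quick sanity check, the following minimal sketch (continuing from the example above) inspects the decoder configuration: `from_encoder_decoder_pretrained()` flags the decoder as a decoder and enables cross-attention.

```
>>> # sketch: the decoder config is updated by from_encoder_decoder_pretrained();
>>> # the added cross-attention layers start from randomly initialized weights
>>> print(model.decoder.config.is_decoder, model.decoder.config.add_cross_attention)
True True
```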
## Loading an existing `SpeechEncoderDecoderModel` checkpoint and performing inference.

To load fine-tuned checkpoints of the `SpeechEncoderDecoderModel` class, [SpeechEncoderDecoderModel](/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderModel) provides the `from_pretrained(...)` method just like any other model architecture in Transformers.

To perform inference, one uses the `generate` method, which autoregressively generates text. This method supports various forms of decoding, such as greedy, beam search and multinomial sampling.

```
>>> from transformers import Wav2Vec2Processor, SpeechEncoderDecoderModel
>>> from datasets import load_dataset
>>> import torch

>>> # load a fine-tuned speech translation model and corresponding processor
>>> model = SpeechEncoderDecoderModel.from_pretrained("facebook/wav2vec2-xls-r-300m-en-to-15")
>>> processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-xls-r-300m-en-to-15")

>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> input_values = processor(ds[0]["audio"]["array"], return_tensors="pt").input_values

>>> generated_ids = model.generate(input_values)
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
>>> print(generated_text)
Mr. Quilter ist der Apostel der Mittelschicht und wir freuen uns, sein Evangelium willkommen heißen zu können.
```

## Training

Once the model is created, it can be fine-tuned similarly to BART, T5 or any other encoder-decoder model on a dataset of (speech, text) pairs. As you can see, only two inputs are required for the model to compute a loss: `input_values` (the speech inputs) and `labels` (the `input_ids` of the encoded target sequence).

```
>>> from transformers import AutoTokenizer, AutoFeatureExtractor, SpeechEncoderDecoderModel
>>> from datasets import load_dataset

>>> encoder_id = "facebook/wav2vec2-base-960h"  # speech encoder
>>> decoder_id = "bert-base-uncased"  # text decoder

>>> feature_extractor = AutoFeatureExtractor.from_pretrained(encoder_id)
>>> tokenizer = AutoTokenizer.from_pretrained(decoder_id)

>>> model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(encoder_id, decoder_id)
>>> model.config.decoder_start_token_id = tokenizer.cls_token_id
>>> model.config.pad_token_id = tokenizer.pad_token_id

>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> input_values = feature_extractor(ds[0]["audio"]["array"], return_tensors="pt").input_values

>>> labels = tokenizer(ds[0]["text"], return_tensors="pt").input_ids

>>> loss = model(input_values=input_values, labels=labels).loss
>>> loss.backward()
```
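When training on batches rather than single examples, the padded positions in `labels` should not contribute to the loss. A common pattern, shown below as a minimal sketch (not part of the original example), is to replace padding token ids in the labels with `-100`, the index that the model's underlying cross-entropy loss ignores.

```
>>> # sketch: tokenize a batch of transcriptions with padding and mask the padded
>>> # label positions with -100 so they are ignored when the loss is computed
>>> batch = tokenizer(
...     [ds[0]["text"], ds[1]["text"]], padding=True, return_tensors="pt"
... )
>>> labels = batch.input_ids.masked_fill(batch.attention_mask.eq(0), -100)
```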
## SpeechEncoderDecoderConfig

### class transformers.SpeechEncoderDecoderConfig

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/speech_encoder_decoder/configuration_speech_encoder_decoder.py#L26)

( \*\*kwargs )

Parameters

- **kwargs** (_optional_) — Dictionary of keyword arguments. Notably:
  - **encoder** ([PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig), _optional_) — An instance of a configuration object that defines the encoder config.
  - **decoder** ([PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig), _optional_) — An instance of a configuration object that defines the decoder config.

[SpeechEncoderDecoderConfig](/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderConfig) is the configuration class to store the configuration of a [SpeechEncoderDecoderModel](/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderModel). It is used to instantiate an Encoder Decoder model according to the specified arguments, defining the encoder and decoder configs.

Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information.

Examples:

```
>>> from transformers import BertConfig, Wav2Vec2Config, SpeechEncoderDecoderConfig, SpeechEncoderDecoderModel

>>> # build a composite configuration from an encoder and a decoder config
>>> config_encoder = Wav2Vec2Config()
>>> config_decoder = BertConfig()
>>> config = SpeechEncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)

>>> model = SpeechEncoderDecoderModel(config=config)

>>> # access the sub-configurations
>>> config_encoder = model.config.encoder
>>> config_decoder = model.config.decoder

>>> # mark the decoder config as a causal LM with cross-attention
>>> config_decoder.is_decoder = True
>>> config_decoder.add_cross_attention = True

>>> # save the model, including its configuration
>>> model.save_pretrained("my-model")

>>> # load the config and model back from the saved folder
>>> encoder_decoder_config = SpeechEncoderDecoderConfig.from_pretrained("my-model")
>>> model = SpeechEncoderDecoderModel.from_pretrained("my-model", config=encoder_decoder_config)
```

#### from\_encoder\_decoder\_configs

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/speech_encoder_decoder/configuration_speech_encoder_decoder.py#L92)

( encoder\_config: PretrainedConfig, decoder\_config: PretrainedConfig, \*\*kwargs ) → [SpeechEncoderDecoderConfig](/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderConfig): an instance of a configuration object

Instantiate a [SpeechEncoderDecoderConfig](/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderConfig) (or a derived class) from a pre-trained encoder model configuration and decoder model configuration.

## SpeechEncoderDecoderModel

### class transformers.SpeechEncoderDecoderModel

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/speech_encoder_decoder/modeling_speech_encoder_decoder.py#L173)

( config: typing.Optional\[transformers.configuration\_utils.PretrainedConfig\] = None, encoder: typing.Optional\[transformers.modeling\_utils.PreTrainedModel\] = None, decoder: typing.Optional\[transformers.modeling\_utils.PreTrainedModel\] = None )

Parameters

- **config** ([SpeechEncoderDecoderConfig](/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

This class can be used to initialize a speech-sequence-to-text-sequence model with any pretrained speech autoencoding model as the encoder and any pretrained text autoregressive model as the decoder.
The encoder is loaded via the [from\_pretrained()](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) function and the decoder is loaded via the [from\_pretrained()](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) function. Cross-attention layers are automatically added to the decoder and should be fine-tuned on a downstream generative task, like summarization.

The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation tasks was shown in [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, and Aliaksei Severyn. Additionally, [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) shows how leveraging large pretrained speech models for speech translation yields a significant performance improvement.

After such a Speech-Encoder-Decoder model has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples for more information).

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

[SpeechEncoderDecoderModel](/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderModel) is a generic model class that will be instantiated as a transformer architecture with one of the base model classes of the library as the encoder and another one as the decoder, when created with the `AutoModel.from_pretrained()` class method for the encoder and the `AutoModelForCausalLM.from_pretrained()` class method for the decoder.
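As the signature above shows, the constructor also accepts already-instantiated `encoder` and `decoder` models as an alternative to `from_encoder_decoder_pretrained()`. The following is a minimal sketch of that path (randomly initialized weights, illustrative model choices); the decoder config should be flagged as a decoder with cross-attention before it is wrapped.

```
>>> from transformers import BertConfig, BertLMHeadModel, SpeechEncoderDecoderModel, Wav2Vec2Config, Wav2Vec2Model

>>> # speech encoder with a randomly initialized default configuration
>>> encoder = Wav2Vec2Model(Wav2Vec2Config())

>>> # text decoder: a causal LM with cross-attention layers enabled
>>> decoder = BertLMHeadModel(BertConfig(is_decoder=True, add_cross_attention=True))

>>> # wrap both sub-models into a single encoder-decoder model
>>> model = SpeechEncoderDecoderModel(encoder=encoder, decoder=decoder)
```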
#### forward

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/speech_encoder_decoder/modeling_speech_encoder_decoder.py#L442)

( inputs: typing.Optional\[torch.FloatTensor\] = None, attention\_mask: typing.Optional\[torch.FloatTensor\] = None, decoder\_input\_ids: typing.Optional\[torch.LongTensor\] = None, decoder\_attention\_mask: typing.Optional\[torch.BoolTensor\] = None, encoder\_outputs: typing.Optional\[typing.Tuple\[torch.FloatTensor\]\] = None, past\_key\_values: typing.Optional\[typing.Tuple\[typing.Tuple\[torch.FloatTensor\]\]\] = None, decoder\_inputs\_embeds: typing.Optional\[torch.FloatTensor\] = None, labels: typing.Optional\[torch.LongTensor\] = None, use\_cache: typing.Optional\[bool\] = None, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, input\_values: typing.Optional\[torch.FloatTensor\] = None, input\_features: typing.Optional\[torch.FloatTensor\] = None, return\_dict: typing.Optional\[bool\] = None, \*\*kwargs ) → [transformers.modeling\_outputs.Seq2SeqLMOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.Seq2SeqLMOutput) or `tuple(torch.FloatTensor)`

The [SpeechEncoderDecoderModel](/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderModel) forward method, overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Examples:

```
>>> from transformers import SpeechEncoderDecoderModel, AutoProcessor
>>> from datasets import load_dataset
>>> import torch

>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-xls-r-300m-en-to-15")
>>> model = SpeechEncoderDecoderModel.from_pretrained("facebook/wav2vec2-xls-r-300m-en-to-15")

>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> input_values = processor(ds[0]["audio"]["array"], return_tensors="pt").input_values

>>> # inference: generate the output text autoregressively
>>> generated = model.generate(input_values)
>>> decoded = processor.batch_decode(generated, skip_special_tokens=True)[0]
>>> decoded
'Mr. Quilter ist der Apostel der Mittelschicht und wir freuen uns, sein Evangelium willkommen heißen zu können.'

>>> # training: compute the loss from speech inputs and text labels
>>> labels = processor(text=ds[0]["text"], return_tensors="pt").input_ids
>>> loss = model(input_values, labels=labels).loss
>>> loss.backward()
```

#### from\_encoder\_decoder\_pretrained

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/speech_encoder_decoder/modeling_speech_encoder_decoder.py#L287)

( encoder\_pretrained\_model\_name\_or\_path: str = None, decoder\_pretrained\_model\_name\_or\_path: str = None, \*model\_args, \*\*kwargs )

Instantiate an encoder and a decoder from one or two base classes of the library from pretrained model checkpoints.

The model is set in evaluation mode by default using `model.eval()` (Dropout modules are deactivated). To train the model, you need to first set it back in training mode with `model.train()`.

Example:
) >>> >>> model.save_pretrained("./wav2vec2bert") >>> >>> model = SpeechEncoderDecoderModel.from_pretrained("./wav2vec2bert") ``` ## FlaxSpeechEncoderDecoderModel ### class transformers.FlaxSpeechEncoderDecoderModel [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/speech_encoder_decoder/modeling_flax_speech_encoder_decoder.py#L329) ( config: SpeechEncoderDecoderConfiginput\_shape: typing.Optional\[typing.Tuple\] = Noneseed: int = 0dtype: dtype = <class 'jax.numpy.float32'>\_do\_init: bool = True\*\*kwargs ) Parameters - **config** ([SpeechEncoderDecoderConfig](/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained) method to load the model weights. - **dtype** (`jax.numpy.dtype`, _optional_, defaults to `jax.numpy.float32`) — The data type of the computation. Can be one of `jax.numpy.float32`, `jax.numpy.float16` (on GPUs) and `jax.numpy.bfloat16` (on TPUs). This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given `dtype`. **Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.** If you wish to change the dtype of the model parameters, see [to\_fp16()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_fp16) and [to\_bf16()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_bf16). This class can be used to initialize a speech-sequence-to-text-sequence model with any pretrained speech autoencoding model as the encoder and any pretrained text autoregressive model as the decoder. The encoder is loaded via [from\_pretrained()](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) function and the decoder is loaded via [from\_pretrained()](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) function. Cross-attention layers are automatically added to the decoder and should be fine-tuned on a downstream generative task, like summarization. The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation tasks was shown in [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. Additionally, in [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) it is shown how leveraging large pretrained speech models for speech translation yields a significant performance improvement. After such an Speech-Encoder Decoder model has been trained/fine-tuned, it can be saved/loaded just like any other models (see the examples for more information). This model inherits from [FlaxPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel). 
#### \_\_call\_\_

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/speech_encoder_decoder/modeling_flax_speech_encoder_decoder.py#L660)

( inputs: Array, attention\_mask: typing.Optional\[jax.Array\] = None, decoder\_input\_ids: typing.Optional\[jax.Array\] = None, decoder\_attention\_mask: typing.Optional\[jax.Array\] = None, decoder\_position\_ids: typing.Optional\[jax.Array\] = None, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None, train: bool = False, freeze\_feature\_encoder: bool = False, params: dict = None, dropout\_rng: PRNGKey = None ) → [transformers.modeling\_flax\_outputs.FlaxSeq2SeqLMOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput) or `tuple(torch.FloatTensor)`

The [FlaxSpeechEncoderDecoderModel](/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder#transformers.FlaxSpeechEncoderDecoderModel) forward method, overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Examples:

```
>>> from transformers import FlaxSpeechEncoderDecoderModel, AutoTokenizer
>>> import jax.numpy as jnp

>>> # load a fine-tuned wav2vec2-2-bart model and the output tokenizer
>>> model = FlaxSpeechEncoderDecoderModel.from_pretrained("patrickvonplaten/wav2vec2-2-bart-large")
>>> tokenizer_output = AutoTokenizer.from_pretrained("facebook/bart-large")

>>> inputs = jnp.ones((2, 5000), dtype=jnp.float32)

>>> # use BART's special bos, pad and eos tokens for generation
>>> model.config.decoder_start_token_id = model.decoder.config.bos_token_id
>>> model.config.pad_token_id = model.decoder.config.pad_token_id
>>> model.config.eos_token_id = model.decoder.config.eos_token_id

>>> outputs = model.generate(inputs)
```

#### from\_encoder\_decoder\_pretrained

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/speech_encoder_decoder/modeling_flax_speech_encoder_decoder.py#L782)

( encoder\_pretrained\_model\_name\_or\_path: typing.Union\[str, os.PathLike, NoneType\] = None, decoder\_pretrained\_model\_name\_or\_path: typing.Union\[str, os.PathLike, NoneType\] = None, \*model\_args, \*\*kwargs )

Instantiate an encoder and a decoder from one or two base classes of the library from pretrained model checkpoints.
Example:

```
>>> from transformers import FlaxSpeechEncoderDecoderModel

>>> model = FlaxSpeechEncoderDecoderModel.from_encoder_decoder_pretrained(
...     "facebook/wav2vec2-large-lv60", "facebook/bart-large"
... )

>>> # save and reload the composite model
>>> model.save_pretrained("./wav2vec2-2-bart-large")
>>> model = FlaxSpeechEncoderDecoderModel.from_pretrained("./wav2vec2-2-bart-large")
```
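PyTorch and Flax weights of the composite model can generally be exchanged through the `from_pt`/`from_flax` flags of `from_pretrained()`. The following is a hedged sketch, assuming both frameworks implement the chosen encoder and decoder architectures (the case for the Wav2Vec2 + BART model saved above):

```
>>> from transformers import FlaxSpeechEncoderDecoderModel, SpeechEncoderDecoderModel

>>> # load the Flax checkpoint saved above into the PyTorch class ...
>>> pt_model = SpeechEncoderDecoderModel.from_pretrained("./wav2vec2-2-bart-large", from_flax=True)
>>> pt_model.save_pretrained("./wav2vec2-2-bart-large-pt")

>>> # ... and load a PyTorch checkpoint back into the Flax class
>>> flax_model = FlaxSpeechEncoderDecoderModel.from_pretrained("./wav2vec2-2-bart-large-pt", from_pt=True)
```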
family&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_summary&quot;},{&quot;title&quot;:&quot;Summary of the tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tokenizer_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tokenizer_summary&quot;},{&quot;title&quot;:&quot;Attention mechanisms&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;attention&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/attention&quot;},{&quot;title&quot;:&quot;Padding and truncation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pad_truncation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pad_truncation&quot;},{&quot;title&quot;:&quot;BERTology&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;bertology&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/bertology&quot;},{&quot;title&quot;:&quot;Perplexity of fixed-length models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perplexity&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perplexity&quot;},{&quot;title&quot;:&quot;Pipelines for webserver inference&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pipeline_webserver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pipeline_webserver&quot;},{&quot;title&quot;:&quot;Model training anatomy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_memory_anatomy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_memory_anatomy&quot;}]},{&quot;title&quot;:&quot;API&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Main Classes&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Agents and Tools&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/agent&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/agent&quot;},{&quot;title&quot;:&quot;Auto Classes&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/auto&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/auto&quot;},{&quot;title&quot;:&quot;Callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/callback&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/callback&quot;},{&quot;title&quot;:&quot;Configuration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/configuration&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/configuration&quot;},{&quot;title&quot;:&quot;Data Collator&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/data_collator&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/data_collator&quot;},{&quot;title&quot;:&quot;Keras callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/keras_callbacks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/keras_callbacks&quot;},{&quot;title&quot;:&quot;Logging&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/logging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/logging&quot;},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/model&quot;},{&quot;title&quot;:&quot;Text 
Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/text_generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/text_generation&quot;},{&quot;title&quot;:&quot;ONNX&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/onnx&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/onnx&quot;},{&quot;title&quot;:&quot;Optimization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/optimizer_schedules&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/optimizer_schedules&quot;},{&quot;title&quot;:&quot;Model outputs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/output&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/output&quot;},{&quot;title&quot;:&quot;Pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/pipelines&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/pipelines&quot;},{&quot;title&quot;:&quot;Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/processors&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/processors&quot;},{&quot;title&quot;:&quot;Quantization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/quantization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/quantization&quot;},{&quot;title&quot;:&quot;Tokenizer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/tokenizer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/tokenizer&quot;},{&quot;title&quot;:&quot;Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/trainer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/trainer&quot;},{&quot;title&quot;:&quot;DeepSpeed Integration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/deepspeed&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/deepspeed&quot;},{&quot;title&quot;:&quot;Feature Extractor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/feature_extractor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/feature_extractor&quot;},{&quot;title&quot;:&quot;Image Processor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/image_processor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/image_processor&quot;}]},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Text 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALBERT&quot;,&quot;id&quot;:&quot;model_doc/albert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/albert&quot;},{&quot;title&quot;:&quot;BART&quot;,&quot;id&quot;:&quot;model_doc/bart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bart&quot;},{&quot;title&quot;:&quot;BARThez&quot;,&quot;id&quot;:&quot;model_doc/barthez&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/barthez&quot;},{&quot;title&quot;:&quot;BARTpho&quot;,&quot;id&quot;:&quot;model_doc/bartpho&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bartpho&quot;},{&quot;title&quot;:&quot;BERT&quot;,&quot;id&quot;:&quot;model_doc/bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert&quot;},{&quot;title&quot;:&quot;BertGeneration&quot;,&quot;id&quot;:&quot;model_doc/bert-generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-generation&quot;},{&quot;title&quot;:&quot;BertJapanese&quot;,&quot;id&quot;:&quot;model_doc/bert-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-japanese&quot;},{&quot;title&quot;:&quot;Bertweet&quot;,&quot;id&quot;:&quot;model_doc/bertweet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bertweet&quot;},{&quot;title&quot;:&quot;BigBird&quot;,&quot;id&quot;:&quot;model_doc/big_bird&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/big_bird&quot;},{&quot;title&quot;:&quot;BigBirdPegasus&quot;,&quot;id&quot;:&quot;model_doc/bigbird_pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bigbird_pegasus&quot;},{&quot;title&quot;:&quot;BioGpt&quot;,&quot;id&quot;:&quot;model_doc/biogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/biogpt&quot;},{&quot;title&quot;:&quot;Blenderbot&quot;,&quot;id&quot;:&quot;model_doc/blenderbot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot&quot;},{&quot;title&quot;:&quot;Blenderbot 
Small&quot;,&quot;id&quot;:&quot;model_doc/blenderbot-small&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot-small&quot;},{&quot;title&quot;:&quot;BLOOM&quot;,&quot;id&quot;:&quot;model_doc/bloom&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bloom&quot;},{&quot;title&quot;:&quot;BORT&quot;,&quot;id&quot;:&quot;model_doc/bort&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bort&quot;},{&quot;title&quot;:&quot;ByT5&quot;,&quot;id&quot;:&quot;model_doc/byt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/byt5&quot;},{&quot;title&quot;:&quot;CamemBERT&quot;,&quot;id&quot;:&quot;model_doc/camembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/camembert&quot;},{&quot;title&quot;:&quot;CANINE&quot;,&quot;id&quot;:&quot;model_doc/canine&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/canine&quot;},{&quot;title&quot;:&quot;CodeGen&quot;,&quot;id&quot;:&quot;model_doc/codegen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/codegen&quot;},{&quot;title&quot;:&quot;CodeLlama&quot;,&quot;id&quot;:&quot;model_doc/code_llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/code_llama&quot;},{&quot;title&quot;:&quot;ConvBERT&quot;,&quot;id&quot;:&quot;model_doc/convbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convbert&quot;},{&quot;title&quot;:&quot;CPM&quot;,&quot;id&quot;:&quot;model_doc/cpm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpm&quot;},{&quot;title&quot;:&quot;CPMANT&quot;,&quot;id&quot;:&quot;model_doc/cpmant&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpmant&quot;},{&quot;title&quot;:&quot;CTRL&quot;,&quot;id&quot;:&quot;model_doc/ctrl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ctrl&quot;},{&quot;title&quot;:&quot;DeBERTa&quot;,&quot;id&quot;:&quot;model_doc/deberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta&quot;},{&quot;title&quot;:&quot;DeBERTa-v2&quot;,&quot;id&quot;:&quot;model_doc/deberta-v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta-v2&quot;},{&quot;title&quot;:&quot;DialoGPT&quot;,&quot;id&quot;:&quot;model_doc/dialogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dialogpt&quot;},{&quot;title&quot;:&quot;DistilBERT&quot;,&quot;id&quot;:&quot;model_doc/distilbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/distilbert&quot;},{&quot;title&quot;:&quot;DPR&quot;,&quot;id&quot;:&quot;model_doc/dpr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpr&quot;},{&quot;title&quot;:&quot;ELECTRA&quot;,&quot;id&quot;:&quot;model_doc/electra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/electra&quot;},{&quot;title&quot;:&quot;Encoder Decoder 
Models&quot;,&quot;id&quot;:&quot;model_doc/encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encoder-decoder&quot;},{&quot;title&quot;:&quot;ERNIE&quot;,&quot;id&quot;:&quot;model_doc/ernie&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie&quot;},{&quot;title&quot;:&quot;ErnieM&quot;,&quot;id&quot;:&quot;model_doc/ernie_m&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie_m&quot;},{&quot;title&quot;:&quot;ESM&quot;,&quot;id&quot;:&quot;model_doc/esm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/esm&quot;},{&quot;title&quot;:&quot;Falcon&quot;,&quot;id&quot;:&quot;model_doc/falcon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/falcon&quot;},{&quot;title&quot;:&quot;FLAN-T5&quot;,&quot;id&quot;:&quot;model_doc/flan-t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-t5&quot;},{&quot;title&quot;:&quot;FLAN-UL2&quot;,&quot;id&quot;:&quot;model_doc/flan-ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-ul2&quot;},{&quot;title&quot;:&quot;FlauBERT&quot;,&quot;id&quot;:&quot;model_doc/flaubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flaubert&quot;},{&quot;title&quot;:&quot;FNet&quot;,&quot;id&quot;:&quot;model_doc/fnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fnet&quot;},{&quot;title&quot;:&quot;FSMT&quot;,&quot;id&quot;:&quot;model_doc/fsmt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fsmt&quot;},{&quot;title&quot;:&quot;Funnel Transformer&quot;,&quot;id&quot;:&quot;model_doc/funnel&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/funnel&quot;},{&quot;title&quot;:&quot;GPT&quot;,&quot;id&quot;:&quot;model_doc/openai-gpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/openai-gpt&quot;},{&quot;title&quot;:&quot;GPT Neo&quot;,&quot;id&quot;:&quot;model_doc/gpt_neo&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neo&quot;},{&quot;title&quot;:&quot;GPT NeoX&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox&quot;},{&quot;title&quot;:&quot;GPT NeoX Japanese&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox_japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese&quot;},{&quot;title&quot;:&quot;GPT-J&quot;,&quot;id&quot;:&quot;model_doc/gptj&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptj&quot;},{&quot;title&quot;:&quot;GPT2&quot;,&quot;id&quot;:&quot;model_doc/gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt2&quot;},{&quot;title&quot;:&quot;GPTBigCode&quot;,&quot;id&quot;:&quot;model_doc/gpt_bigcode&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode&quot;},{&quot;title&quot;:&quot;GPTSAN 
Japanese&quot;,&quot;id&quot;:&quot;model_doc/gptsan-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese&quot;},{&quot;title&quot;:&quot;GPTSw3&quot;,&quot;id&quot;:&quot;model_doc/gpt-sw3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt-sw3&quot;},{&quot;title&quot;:&quot;HerBERT&quot;,&quot;id&quot;:&quot;model_doc/herbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/herbert&quot;},{&quot;title&quot;:&quot;I-BERT&quot;,&quot;id&quot;:&quot;model_doc/ibert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ibert&quot;},{&quot;title&quot;:&quot;Jukebox&quot;,&quot;id&quot;:&quot;model_doc/jukebox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/jukebox&quot;},{&quot;title&quot;:&quot;LED&quot;,&quot;id&quot;:&quot;model_doc/led&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/led&quot;},{&quot;title&quot;:&quot;LLaMA&quot;,&quot;id&quot;:&quot;model_doc/llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama&quot;},{&quot;title&quot;:&quot;Llama2&quot;,&quot;id&quot;:&quot;model_doc/llama2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama2&quot;},{&quot;title&quot;:&quot;Longformer&quot;,&quot;id&quot;:&quot;model_doc/longformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longformer&quot;},{&quot;title&quot;:&quot;LongT5&quot;,&quot;id&quot;:&quot;model_doc/longt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longt5&quot;},{&quot;title&quot;:&quot;LUKE&quot;,&quot;id&quot;:&quot;model_doc/luke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/luke&quot;},{&quot;title&quot;:&quot;M2M100&quot;,&quot;id&quot;:&quot;model_doc/m2m_100&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/m2m_100&quot;},{&quot;title&quot;:&quot;MarianMT&quot;,&quot;id&quot;:&quot;model_doc/marian&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/marian&quot;},{&quot;title&quot;:&quot;MarkupLM&quot;,&quot;id&quot;:&quot;model_doc/markuplm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/markuplm&quot;},{&quot;title&quot;:&quot;MBart and 
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot;PLBart&quot;,&quot;id&quot;
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/
model_doc/wavlm&quot;},{&quot;title&quot;:&quot;Whisper&quot;,&quot;id&quot;:&quot;model_doc/whisper&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/whisper&quot;},{&quot;title&quot;:&quot;XLS-R&quot;,&quot;id&quot;:&quot;model_doc/xls_r&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xls_r&quot;},{&quot;title&quot;:&quot;XLSR-Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/xlsr_wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2&quot;}]},{&quot;title&quot;:&quot;Multimodal models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALIGN&quot;,&quot;id&quot;:&quot;model_doc/align&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/align&quot;},{&quot;title&quot;:&quot;AltCLIP&quot;,&quot;id&quot;:&quot;model_doc/altclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/altclip&quot;},{&quot;title&quot;:&quot;BLIP&quot;,&quot;id&quot;:&quot;model_doc/blip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip&quot;},{&quot;title&quot;:&quot;BLIP-2&quot;,&quot;id&quot;:&quot;model_doc/blip-2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip-2&quot;},{&quot;title&quot;:&quot;BridgeTower&quot;,&quot;id&quot;:&quot;model_doc/bridgetower&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bridgetower&quot;},{&quot;title&quot;:&quot;BROS&quot;,&quot;id&quot;:&quot;model_doc/bros&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bros&quot;},{&quot;title&quot;:&quot;Chinese-CLIP&quot;,&quot;id&quot;:&quot;model_doc/chinese_clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/chinese_clip&quot;},{&quot;title&quot;:&quot;CLIP&quot;,&quot;id&quot;:&quot;model_doc/clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clip&quot;},{&quot;title&quot;:&quot;CLIPSeg&quot;,&quot;id&quot;:&quot;model_doc/clipseg&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clipseg&quot;},{&quot;title&quot;:&quot;Data2Vec&quot;,&quot;id&quot;:&quot;model_doc/data2vec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/data2vec&quot;},{&quot;title&quot;:&quot;DePlot&quot;,&quot;id&quot;:&quot;model_doc/deplot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deplot&quot;},{&quot;title&quot;:&quot;Donut&quot;,&quot;id&quot;:&quot;model_doc/donut&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/donut&quot;},{&quot;title&quot;:&quot;FLAVA&quot;,&quot;id&quot;:&quot;model_doc/flava&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flava&quot;},{&quot;title&quot;:&quot;GIT&quot;,&quot;id&quot;:&quot;model_doc/git&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/git&quot;},{&quot;title&quot;:&quot;GroupViT&quot;,&quot;id&quot;:&quot;model_doc/groupvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/groupvit&quot;},{&quot;title&quot;:&quot;IDEFICS&quot;,&quot;id&quot;:&quot;model_doc/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/idefics&quot;},{&quot;title&quot;:&quot;InstructBLIP&quot;,&quot;id&quot;:&quot;model_doc/instructblip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/instructblip&quot;},{&quot;title&quot;:&quot;LayoutLM&quot;,&quot;id&quot;:&quot;model_doc/layoutlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlm&quot;},{&quot;title&quot;:&quot;LayoutLMV2&quot;,&quot;id
&quot;:&quot;model_doc/layoutlmv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv2&quot;},{&quot;title&quot;:&quot;LayoutLMV3&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv3&quot;},{&quot;title&quot;:&quot;LayoutXLM&quot;,&quot;id&quot;:&quot;model_doc/layoutxlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutxlm&quot;},{&quot;title&quot;:&quot;LiLT&quot;,&quot;id&quot;:&quot;model_doc/lilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lilt&quot;},{&quot;title&quot;:&quot;LXMERT&quot;,&quot;id&quot;:&quot;model_doc/lxmert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lxmert&quot;},{&quot;title&quot;:&quot;MatCha&quot;,&quot;id&quot;:&quot;model_doc/matcha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/matcha&quot;},{&quot;title&quot;:&quot;MGP-STR&quot;,&quot;id&quot;:&quot;model_doc/mgp-str&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mgp-str&quot;},{&quot;title&quot;:&quot;Nougat&quot;,&quot;id&quot;:&quot;model_doc/nougat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nougat&quot;},{&quot;title&quot;:&quot;OneFormer&quot;,&quot;id&quot;:&quot;model_doc/oneformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/oneformer&quot;},{&quot;title&quot;:&quot;OWL-ViT&quot;,&quot;id&quot;:&quot;model_doc/owlvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/owlvit&quot;},{&quot;title&quot;:&quot;Perceiver&quot;,&quot;id&quot;:&quot;model_doc/perceiver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/perceiver&quot;},{&quot;title&quot;:&quot;Pix2Struct&quot;,&quot;id&quot;:&quot;model_doc/pix2struct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pix2struct&quot;},{&quot;title&quot;:&quot;Segment Anything&quot;,&quot;id&quot;:&quot;model_doc/sam&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sam&quot;},{&quot;title&quot;:&quot;Speech Encoder Decoder Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/speech-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder&quot;},{&quot;title&quot;:&quot;TAPAS&quot;,&quot;id&quot;:&quot;model_doc/tapas&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapas&quot;},{&quot;title&quot;:&quot;TrOCR&quot;,&quot;id&quot;:&quot;model_doc/trocr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trocr&quot;},{&quot;title&quot;:&quot;TVLT&quot;,&quot;id&quot;:&quot;model_doc/tvlt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tvlt&quot;},{&quot;title&quot;:&quot;ViLT&quot;,&quot;id&quot;:&quot;model_doc/vilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vilt&quot;},{&quot;title&quot;:&quot;Vision Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/vision-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder&quot;},{&quot;title&quot;:&quot;Vision Text Dual 
Encoder&quot;,&quot;id&quot;:&quot;model_doc/vision-text-dual-encoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder&quot;},{&quot;title&quot;:&quot;VisualBERT&quot;,&quot;id&quot;:&quot;model_doc/visual_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/visual_bert&quot;},{&quot;title&quot;:&quot;X-CLIP&quot;,&quot;id&quot;:&quot;model_doc/xclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xclip&quot;}]},{&quot;title&quot;:&quot;Reinforcement learning models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Decision Transformer&quot;,&quot;id&quot;:&quot;model_doc/decision_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/decision_transformer&quot;},{&quot;title&quot;:&quot;Trajectory Transformer&quot;,&quot;id&quot;:&quot;model_doc/trajectory_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer&quot;}]},{&quot;title&quot;:&quot;Time series models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Autoformer&quot;,&quot;id&quot;:&quot;model_doc/autoformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/autoformer&quot;},{&quot;title&quot;:&quot;Informer&quot;,&quot;id&quot;:&quot;model_doc/informer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/informer&quot;},{&quot;title&quot;:&quot;Time Series Transformer&quot;,&quot;id&quot;:&quot;model_doc/time_series_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/time_series_transformer&quot;}]},{&quot;title&quot;:&quot;Graph models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Graphormer&quot;,&quot;id&quot;:&quot;model_doc/graphormer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/graphormer&quot;}]}]},{&quot;title&quot;:&quot;Internal Helpers&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Custom Layers and Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/modeling_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/modeling_utils&quot;},{&quot;title&quot;:&quot;Utilities for pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/pipelines_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/pipelines_utils&quot;},{&quot;title&quot;:&quot;Utilities for Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/tokenization_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/tokenization_utils&quot;},{&quot;title&quot;:&quot;Utilities for Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/trainer_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/trainer_utils&quot;},{&quot;title&quot;:&quot;Utilities for Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/generation_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/generation_utils&quot;},{&quot;title&quot;:&quot;Utilities for Image Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/image_processing_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/image_processing_utils&quot;},{&quot;title&quot;:&quot;Utilities for Audio 
# Speech Encoder Decoder Models

The [SpeechEncoderDecoderModel](/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderModel) can be used to initialize a speech-to-text model with any pretrained speech autoencoding model as the encoder (*e.g.* [Wav2Vec2](wav2vec2), [Hubert](hubert)) and any pretrained autoregressive model as the decoder.

The effectiveness of initializing speech-sequence-to-text-sequence models with pretrained checkpoints for speech recognition and speech translation has *e.g.* been shown in [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau.

An example of how to use a [SpeechEncoderDecoderModel](/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderModel) for inference can be seen in [Speech2Text2](speech_to_text_2).
role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1deaxse">Randomly initializing <code>SpeechEncoderDecoderModel</code> from model configurations.</span></h2> <p data-svelte-h="svelte-1cu1ede"><a href="/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderModel">SpeechEncoderDecoderModel</a> can be randomly initialized from an encoder and a decoder config. In the following example, we show how to do this using the default <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Model">Wav2Vec2Model</a> configuration for the encoder and the default <code>BertForCausalLM</code> configuration for the decoder.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> BertConfig, Wav2Vec2Config, SpeechEncoderDecoderConfig, SpeechEncoderDecoderModel <span class="hljs-meta">&gt;&gt;&gt; </span>config_encoder = Wav2Vec2Config() <span class="hljs-meta">&gt;&gt;&gt; </span>config_decoder = BertConfig() <span class="hljs-meta">&gt;&gt;&gt; </span>config = SpeechEncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder) <span class="hljs-meta">&gt;&gt;&gt; </span>model = SpeechEncoderDecoderModel(config=config)</pre></div> <h2 class="relative group"><a id="initialising-speechencoderdecodermodel-from-a-pretrained-encoder-and-a-pretrained-decoder" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#initialising-speechencoderdecodermodel-from-a-pretrained-encoder-and-a-pretrained-decoder"><span><svg class="" 
xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-ldtcl0">Initialising <code>SpeechEncoderDecoderModel</code> from a pretrained encoder and a pretrained decoder.</span></h2> <p data-svelte-h="svelte-czw2og"><a href="/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderModel">SpeechEncoderDecoderModel</a> can be initialized from a pretrained encoder checkpoint and a pretrained decoder checkpoint. Note that any pretrained Transformer-based speech model, <em>e.g.</em> <a href="wav2vec2">Wav2Vec2</a>, <a href="hubert">Hubert</a> can serve as the encoder and both pretrained auto-encoding models, <em>e.g.</em> BERT, pretrained causal language models, <em>e.g.</em> GPT2, as well as the pretrained decoder part of sequence-to-sequence models, <em>e.g.</em> decoder of BART, can be used as the decoder. Depending on which architecture you choose as the decoder, the cross-attention layers might be randomly initialized. Initializing <a href="/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderModel">SpeechEncoderDecoderModel</a> from a pretrained encoder and decoder checkpoint requires the model to be fine-tuned on a downstream task, as has been shown in <a href="https://huggingface.co/blog/warm-starting-encoder-decoder" rel="nofollow">the <em>Warm-starting-encoder-decoder blog post</em></a>. 
To do so, the <code>SpeechEncoderDecoderModel</code> class provides a <a href="/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderModel.from_encoder_decoder_pretrained">SpeechEncoderDecoderModel.from_encoder_decoder_pretrained()</a> method.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> SpeechEncoderDecoderModel <span class="hljs-meta">&gt;&gt;&gt; </span>model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained( <span class="hljs-meta">... </span> <span class="hljs-string">"facebook/hubert-large-ll60k"</span>, <span class="hljs-string">"bert-base-uncased"</span> <span class="hljs-meta">... 
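Once warm-started this way, the model behaves like any other `PreTrainedModel` and can be serialized and restored. A minimal sketch, where the local directory name is only illustrative:

```python
>>> # save the warm-started encoder-decoder model to a local directory (illustrative path)
>>> model.save_pretrained("./hubert-bert-warm-started")

>>> # reload it later for fine-tuning or inference
>>> model = SpeechEncoderDecoderModel.from_pretrained("./hubert-bert-warm-started")
```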
## Loading an existing `SpeechEncoderDecoderModel` checkpoint and performing inference.

To load fine-tuned checkpoints of the `SpeechEncoderDecoderModel` class, [SpeechEncoderDecoderModel](/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderModel) provides the `from_pretrained(...)` method just like any other model architecture in Transformers.

To perform inference, one uses the `generate` method, which allows text to be generated autoregressively. This method supports various forms of decoding, such as greedy, beam search and multinomial sampling.

```python
>>> from transformers import Wav2Vec2Processor, SpeechEncoderDecoderModel
>>> from datasets import load_dataset
>>> import torch

>>> # load a fine-tuned speech translation model and corresponding processor
>>> model = SpeechEncoderDecoderModel.from_pretrained("facebook/wav2vec2-xls-r-300m-en-to-15")
>>> processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-xls-r-300m-en-to-15")

>>> # let's perform inference on a piece of English speech (which we'll translate to German)
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> input_values = processor(ds[0]["audio"]["array"], return_tensors="pt").input_values

>>> # autoregressively generate transcription (uses greedy decoding by default)
>>> generated_ids = model.generate(input_values)
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
>>> print(generated_text)
Mr. Quilter ist der Apostel der Mittelschicht und wir freuen uns, sein Evangelium willkommen heißen zu können.
```
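Other decoding strategies can be selected through the usual generation arguments. For instance, reusing the inputs from the snippet above, beam search can be requested as follows (shown only as a sketch; the output quality depends on the checkpoint):

```python
>>> # use beam search with 5 beams instead of the default greedy decoding
>>> generated_ids = model.generate(input_values, num_beams=5, max_length=200)
>>> processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
```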
## Training

Once the model is created, it can be fine-tuned similarly to BART, T5 or any other encoder-decoder model on a dataset of (speech, text) pairs. As you can see, only two inputs are required for the model in order to compute a loss: `input_values` (which are the speech inputs) and `labels` (which are the `input_ids` of the encoded target sequence).

```python
>>> from transformers import AutoTokenizer, AutoFeatureExtractor, SpeechEncoderDecoderModel
>>> from datasets import load_dataset

>>> encoder_id = "facebook/wav2vec2-base-960h"  # acoustic model encoder
>>> decoder_id = "bert-base-uncased"  # text decoder

>>> feature_extractor = AutoFeatureExtractor.from_pretrained(encoder_id)
>>> tokenizer = AutoTokenizer.from_pretrained(decoder_id)
>>> # Combine pre-trained encoder and pre-trained decoder to form a Seq2Seq model
>>> model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(encoder_id, decoder_id)

>>> model.config.decoder_start_token_id = tokenizer.cls_token_id
>>> model.config.pad_token_id = tokenizer.pad_token_id

>>> # load an audio input and pre-process (normalise mean/std to 0/1)
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> input_values = feature_extractor(ds[0]["audio"]["array"], return_tensors="pt").input_values

>>> # load its corresponding transcription and tokenize to generate labels
>>> labels = tokenizer(ds[0]["text"], return_tensors="pt").input_ids

>>> # the forward function automatically creates the correct decoder_input_ids
>>> loss = model(input_values=input_values, labels=labels).loss
>>> loss.backward()
```
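The returned loss plugs into a regular PyTorch training loop. A minimal sketch of a single optimization step, assuming the `model`, `input_values` and `labels` from the snippet above:

```python
>>> import torch

>>> optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

>>> # one optimization step: forward pass, backward pass, parameter update
>>> loss = model(input_values=input_values, labels=labels).loss
>>> loss.backward()
>>> optimizer.step()
>>> optimizer.zero_grad()
```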
## SpeechEncoderDecoderConfig
### class transformers.SpeechEncoderDecoderConfig

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/speech_encoder_decoder/configuration_speech_encoder_decoder.py#L26)

`( **kwargs )`

Parameters:

- **kwargs** (*optional*): Dictionary of keyword arguments. Notably:
  - **encoder** ([PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig), *optional*): An instance of a configuration object that defines the encoder config.
  - **decoder** ([PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig), *optional*): An instance of a configuration object that defines the decoder config.

[SpeechEncoderDecoderConfig](/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderConfig) is the configuration class to store the configuration of a [SpeechEncoderDecoderModel](/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderModel). It is used to instantiate an Encoder Decoder model according to the specified arguments, defining the encoder and decoder configs.

Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs.
Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information.

Examples:

```python
>>> from transformers import BertConfig, Wav2Vec2Config, SpeechEncoderDecoderConfig, SpeechEncoderDecoderModel

>>> # Initializing a Wav2Vec2 & BERT style configuration
>>> config_encoder = Wav2Vec2Config()
>>> config_decoder = BertConfig()

>>> config = SpeechEncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)

>>> # Initializing a Wav2Vec2Bert model from a Wav2Vec2 & bert-base-uncased style configurations
>>> model = SpeechEncoderDecoderModel(config=config)

>>> # Accessing the model configuration
>>> config_encoder = model.config.encoder
>>> config_decoder = model.config.decoder
>>> # set decoder config to causal lm
>>> config_decoder.is_decoder = True
>>> config_decoder.add_cross_attention = True

>>> # Saving the model, including its configuration
>>> model.save_pretrained("my-model")

>>> # loading model and config from pretrained folder
>>> encoder_decoder_config = SpeechEncoderDecoderConfig.from_pretrained("my-model")
>>> model = SpeechEncoderDecoderModel.from_pretrained("my-model", config=encoder_decoder_config)
```
#### from_encoder_decoder_configs

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/speech_encoder_decoder/configuration_speech_encoder_decoder.py#L92)

`( encoder_config: PretrainedConfig, decoder_config: PretrainedConfig, **kwargs )` → [SpeechEncoderDecoderConfig](/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderConfig)

Returns: [SpeechEncoderDecoderConfig](/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderConfig): an instance of a configuration object.

Instantiate a [SpeechEncoderDecoderConfig](/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderConfig) (or a derived class) from a pre-trained encoder model configuration and decoder model configuration.
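A short usage sketch for this helper, building on default encoder and decoder configurations (the local directory name is only illustrative):

```python
>>> from transformers import BertConfig, Wav2Vec2Config, SpeechEncoderDecoderConfig

>>> config = SpeechEncoderDecoderConfig.from_encoder_decoder_configs(Wav2Vec2Config(), BertConfig())

>>> # the combined config can be saved and reloaded like any other configuration
>>> config.save_pretrained("./speech-encoder-decoder-config")
>>> config = SpeechEncoderDecoderConfig.from_pretrained("./speech-encoder-decoder-config")
```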
## SpeechEncoderDecoderModel

### class transformers.SpeechEncoderDecoderModel

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/speech_encoder_decoder/modeling_speech_encoder_decoder.py#L173)

`( config: typing.Optional[transformers.configuration_utils.PretrainedConfig] = None, encoder: typing.Optional[transformers.modeling_utils.PreTrainedModel] = None, decoder: typing.Optional[transformers.modeling_utils.PreTrainedModel] = None )`

Parameters:

- **config** ([SpeechEncoderDecoderConfig](/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderConfig)): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

This class can be used to initialize a speech-sequence-to-text-sequence model with any pretrained speech autoencoding model as the encoder and any pretrained text autoregressive model as the decoder. The encoder is loaded via the `from_pretrained()` function and the decoder is loaded via the `from_pretrained()` function. Cross-attention layers are automatically added to the decoder and should be fine-tuned on a downstream generative task, like summarization.

The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation tasks was shown in [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.

Additionally, in [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) it is shown how leveraging large pretrained speech models for speech translation yields a significant performance improvement.

After such a Speech-Encoder-Decoder model has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples for more information).

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

[SpeechEncoderDecoderModel](/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderModel) is a generic model class that will be instantiated as a transformer architecture with one of the base model classes of the library as the encoder and another one as the decoder when created with the `AutoModel.from_pretrained()` class method for the encoder and the `AutoModelForCausalLM.from_pretrained()` class method for the decoder.
xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/speech_encoder_decoder/modeling_speech_encoder_decoder.py#L442" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">inputs<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">attention_mask<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_input_ids<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_attention_mask<span class="opacity-60">: typing.Optional[torch.BoolTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_outputs<span class="opacity-60">: typing.Optional[typing.Tuple[torch.FloatTensor]] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">past_key_values<span class="opacity-60">: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_inputs_embeds<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">labels<span class="opacity-60">: typing.Optional[torch.LongTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">use_cache<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white 
dark:hover:text-black">output_attentions<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">output_hidden_states<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_values<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_features<span class="opacity-60">: typing.Optional[torch.FloatTensor] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">return_dict<span class="opacity-60">: typing.Optional[bool] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.Seq2SeqLMOutput">transformers.modeling_outputs.Seq2SeqLMOutput</a> or <code>tuple(torch.FloatTensor)</code></span></span></p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 16 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SpeechEncoderDecoderModel.forward.inputs" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderModel.forward.inputs"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>inputs</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code> or <code>(batch_size, sequence_length, 
feature_dim)</code>, <em>optional</em>) — Float values of input raw speech waveform or speech features. Values can be obtained by loading a <code>.flac</code> or <code>.wav</code> audio file into an array of type <code>List[float]</code> or a <code>numpy.ndarray</code>, <em>e.g.</em> via the soundfile library (<code>pip install soundfile</code>). To prepare the array into <code>inputs</code>, either the <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor">Wav2Vec2Processor</a> or <a href="/docs/transformers/v4.34.0/en/model_doc/speech_to_text#transformers.Speech2TextProcessor">Speech2TextProcessor</a> should be used for padding and conversion into a tensor of type <code>torch.FloatTensor</code>.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SpeechEncoderDecoderModel.forward.attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderModel.forward.attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>attention_mask</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Mask to avoid performing attention on padding token indices. 
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SpeechEncoderDecoderModel.forward.decoder_input_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderModel.forward.decoder_input_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>decoder_input_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, target_sequence_length)</code>, <em>optional</em>) — Indices of decoder input sequence tokens in the vocabulary.<p></p> <p>Indices can be obtained using <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer">PreTrainedTokenizer</a>. 
See <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode">PreTrainedTokenizer.encode()</a> and <a href="/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__">PreTrainedTokenizer.<strong>call</strong>()</a> for details.</p> <p><a href="../glossary#input-ids">What are input IDs?</a></p> <p>If <code>past_key_values</code> is used, optionally only the last <code>decoder_input_ids</code> have to be input (see <code>past_key_values</code>).</p> <p>For training, <code>decoder_input_ids</code> are automatically created by the model by shifting the <code>labels</code> to the right, replacing -100 by the <code>pad_token_id</code> and prepending them with the <code>decoder_start_token_id</code>.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SpeechEncoderDecoderModel.forward.decoder_attention_mask" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderModel.forward.decoder_attention_mask"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>decoder_attention_mask</strong> (<code>torch.BoolTensor</code> of shape <code>(batch_size, target_sequence_length)</code>, <em>optional</em>) — Default behavior: generate a tensor that ignores pad tokens in <code>decoder_input_ids</code>. 
Causal mask will also be used by default.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SpeechEncoderDecoderModel.forward.encoder_outputs" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderModel.forward.encoder_outputs"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>encoder_outputs</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>) — This tuple must consist of (<code>last_hidden_state</code>, <em>optional</em>: <code>hidden_states</code>, <em>optional</em>: <code>attentions</code>) <code>last_hidden_state</code> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>) is a tensor of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SpeechEncoderDecoderModel.forward.past_key_values" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderModel.forward.past_key_values"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>past_key_values</strong> (<code>tuple(tuple(torch.FloatTensor))</code> of length <code>config.n_layers</code> with each tuple having 4 tensors of shape <code>(batch_size, num_heads, sequence_length - 1, embed_size_per_head)</code>) — Contains precomputed key and value hidden states of the attention blocks. 
Can be used to speed up decoding.<p></p> <p>If <code>past_key_values</code> are used, the user can optionally input only the last <code>decoder_input_ids</code> (those that don’t have their past key value states given to this model) of shape <code>(batch_size, 1)</code> instead of all <code>decoder_input_ids</code> of shape <code>(batch_size, sequence_length)</code>.</p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SpeechEncoderDecoderModel.forward.inputs_embeds" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderModel.forward.inputs_embeds"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>inputs_embeds</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>, <em>optional</em>) — Optionally, instead of passing <code>input_ids</code> you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert <code>input_ids</code> indices into associated vectors than the model’s internal embedding lookup matrix.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SpeechEncoderDecoderModel.forward.decoder_inputs_embeds" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderModel.forward.decoder_inputs_embeds"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>decoder_inputs_embeds</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, target_sequence_length, hidden_size)</code>, <em>optional</em>) — Optionally, instead of passing <code>decoder_input_ids</code> you can choose to directly pass an embedded representation. 
This is useful if you want more control over how to convert <code>decoder_input_ids</code> indices into associated vectors than the model’s internal embedding lookup matrix.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SpeechEncoderDecoderModel.forward.labels" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderModel.forward.labels"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>labels</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Labels for computing the masked language modeling loss for the decoder. Indices should be in <code>[-100, 0, ..., config.vocab_size]</code> (see <code>input_ids</code> docstring) Tokens with indices set to <code>-100</code> are ignored (masked), the loss is only computed for the tokens with labels in <code>[0, ..., config.vocab_size]</code></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SpeechEncoderDecoderModel.forward.use_cache" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderModel.forward.use_cache"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>use_cache</strong> (<code>bool</code>, <em>optional</em>) — If set to <code>True</code>, <code>past_key_values</code> key value states are returned and can be used to speed up decoding (see <code>past_key_values</code>).</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SpeechEncoderDecoderModel.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 
with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderModel.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. See <code>attentions</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SpeechEncoderDecoderModel.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderModel.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. 
See <code>hidden_states</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SpeechEncoderDecoderModel.forward.input_values" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderModel.forward.input_values"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_values</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Float values of input raw speech waveform. Values can be obtained by loading a <em>.flac</em> or <em>.wav</em> audio file into an array of type <em>List[float]</em> or a <em>numpy.ndarray</em>, <em>e.g.</em> via the soundfile library (<em>pip install soundfile</em>). To prepare the array into <em>input_values</em>, the <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor">Wav2Vec2Processor</a> should be used for padding and conversion into a tensor of type <em>torch.FloatTensor</em>. See <a href="/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor.__call__">Wav2Vec2Processor.<strong>call</strong>()</a> for details.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SpeechEncoderDecoderModel.forward.input_features" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderModel.forward.input_features"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>input_features</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, feature_size)</code>, <em>optional</em>) — Float values of fbank features extracted from the raw speech waveform. 
Raw speech waveform can be obtained by loading a <code>.flac</code> or <code>.wav</code> audio file into an array of type <code>List[float]</code> or a <code>numpy.ndarray</code>, <em>e.g.</em> via the soundfile library (<code>pip install soundfile</code>). To prepare the array into <code>input_features</code>, the <a href="/docs/transformers/v4.34.0/en/model_doc/speech_to_text#transformers.Speech2TextFeatureExtractor">Speech2TextFeatureExtractor</a> should be used for extracting the fbank features, padding and conversion into a tensor of type <code>torch.FloatTensor</code>. See <a href="/docs/transformers/v4.34.0/en/model_doc/speech_to_text#transformers.Speech2TextFeatureExtractor.__call__"><strong>call</strong>()</a></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SpeechEncoderDecoderModel.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderModel.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — If set to <code>True</code>, the model will return a <code>~utils.Seq2SeqLMOutput</code> instead of a plain tuple.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SpeechEncoderDecoderModel.forward.kwargs" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderModel.forward.kwargs"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>kwargs</strong> (<em>optional</em>) — Remaining dictionary of keyword arguments. 
Keyword arguments come in two flavors:<p></p> <ul> <li>Without a prefix which will be input as <code>**encoder_kwargs</code> for the encoder forward function.</li> <li>With a <em>decoder_</em> prefix which will be input as <code>**decoder_kwargs</code> for the decoder forward function.</li> </ul></span></span> </li></ul> <div id="transformers.SpeechEncoderDecoderModel.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.Seq2SeqLMOutput">transformers.modeling_outputs.Seq2SeqLMOutput</a> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_outputs.Seq2SeqLMOutput">transformers.modeling_outputs.Seq2SeqLMOutput</a> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when <code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderConfig">SpeechEncoderDecoderConfig</a>) and inputs.</p> <ul> <li> <p><strong>loss</strong> (<code>torch.FloatTensor</code> of shape <code>(1,)</code>, <em>optional</em>, returned when <code>labels</code> is provided) — Language modeling loss.</p> </li> <li> <p><strong>logits</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, config.vocab_size)</code>) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).</p> </li> <li> <p><strong>past_key_values</strong> (<code>tuple(tuple(torch.FloatTensor))</code>, <em>optional</em>, returned when <code>use_cache=True</code> is passed or when <code>config.use_cache=True</code>) — Tuple of <code>tuple(torch.FloatTensor)</code> of length <code>config.n_layers</code>, with each tuple having 2 tensors of shape <code>(batch_size, num_heads, sequence_length, embed_size_per_head)</code>) and 2 additional tensors of shape <code>(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)</code>.</p> <p>Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see <code>past_key_values</code> input) to speed up sequential decoding.</p> </li> <li> <p><strong>decoder_hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>decoder_attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights of the decoder, after the attention softmax, used to compute the 
weighted average in the self-attention heads.</p> </li> <li> <p><strong>cross_attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.</p> </li> <li> <p><strong>encoder_last_hidden_state</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, sequence_length, hidden_size)</code>, <em>optional</em>) — Sequence of hidden-states at the output of the last layer of the encoder of the model.</p> </li> <li> <p><strong>encoder_hidden_states</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_hidden_states=True</code> is passed or when <code>config.output_hidden_states=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.</p> </li> <li> <p><strong>encoder_attentions</strong> (<code>tuple(torch.FloatTensor)</code>, <em>optional</em>, returned when <code>output_attentions=True</code> is passed or when <code>config.output_attentions=True</code>) — Tuple of <code>torch.FloatTensor</code> (one for each layer) of shape <code>(batch_size, num_heads, sequence_length, sequence_length)</code>.</p> <p>Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.</p> </li> </ul> </p> </div></div> <p data-svelte-h="svelte-1grdlxn">The <a href="/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderModel">SpeechEncoderDecoderModel</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.SpeechEncoderDecoderModel.forward.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderModel.forward.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 
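The automatic creation of `decoder_input_ids` from `labels` described in the parameters above amounts to a right shift of the label sequence. Below is a minimal sketch of that shifting step; the helper name and exact implementation are illustrative rather than a verbatim copy of the library code:

```python
import torch


def shift_tokens_right(labels: torch.Tensor, pad_token_id: int, decoder_start_token_id: int) -> torch.Tensor:
    """Illustrative sketch: build decoder_input_ids from labels for teacher-forced training."""
    shifted = labels.new_zeros(labels.shape)
    shifted[:, 1:] = labels[:, :-1].clone()               # shift every label one position to the right
    shifted[:, 0] = decoder_start_token_id                # prepend the decoder start token
    shifted.masked_fill_(shifted == -100, pad_token_id)   # replace ignored positions (-100) by the pad token
    return shifted
```

This is why, in the training snippet below, it is enough to pass `labels` and let the model derive the decoder inputs itself.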
Examples:

```python
>>> from transformers import SpeechEncoderDecoderModel, AutoProcessor
>>> from datasets import load_dataset
>>> import torch

>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-xls-r-300m-en-to-15")
>>> model = SpeechEncoderDecoderModel.from_pretrained("facebook/wav2vec2-xls-r-300m-en-to-15")

>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")

>>> input_values = processor(ds[0]["audio"]["array"], return_tensors="pt").input_values
>>> # Inference: Translate English speech to German
>>> generated = model.generate(input_values)
>>> decoded = processor.batch_decode(generated, skip_special_tokens=True)[0]
>>> decoded
'Mr. Quilter ist der Apostel der Mittelschicht und wir freuen uns, sein Evangelium willkommen heißen zu können.'

>>> # Training: Train model on English transcription
>>> labels = processor(text=ds[0]["text"], return_tensors="pt").input_ids

>>> loss = model(input_values, labels=labels).loss
>>> loss.backward()
```
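As noted in the `kwargs` description above, extra keyword arguments passed to `forward()` are routed by prefix: unprefixed arguments go to the encoder, while `decoder_`-prefixed arguments have the prefix stripped and go to the decoder. A purely illustrative sketch of that splitting rule (the argument names are placeholders, not real model arguments):

```python
# Placeholder argument names; this only illustrates the prefix-based split.
kwargs = {"some_encoder_argument": 1, "decoder_some_decoder_argument": 2}

kwargs_encoder = {k: v for k, v in kwargs.items() if not k.startswith("decoder_")}
kwargs_decoder = {k[len("decoder_"):]: v for k, v in kwargs.items() if k.startswith("decoder_")}

print(kwargs_encoder)  # {'some_encoder_argument': 1}
print(kwargs_decoder)  # {'some_decoder_argument': 2}
```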
data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">encoder_pretrained_model_name_or_path<span class="opacity-60">: str = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">decoder_pretrained_model_name_or_path<span class="opacity-60">: str = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">*model_args<span class="opacity-60"></span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 4 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SpeechEncoderDecoderModel.from_encoder_decoder_pretrained.encoder_pretrained_model_name_or_path" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderModel.from_encoder_decoder_pretrained.encoder_pretrained_model_name_or_path"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>encoder_pretrained_model_name_or_path</strong> (<code>str</code>, <em>optional</em>) — Information necessary to initiate the encoder. Can be either:<p></p> <ul> <li>A string, the <em>model id</em> of a pretrained model hosted inside a model repo on huggingface.co. 
Valid model ids can be located at the root-level, like <code>bert-base-uncased</code>, or namespaced under a user or organization name, like <code>dbmdz/bert-base-german-cased</code>.</li> <li>A path to a <em>directory</em> containing model weights saved using <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.save_pretrained">save_pretrained()</a>, e.g., <code>./my_model_directory/</code>.</li> <li>A path or url to a <em>tensorflow index checkpoint file</em> (e.g, <code>./tf_model/model.ckpt.index</code>). In this case, <code>from_tf</code> should be set to <code>True</code> and a configuration object should be provided as <code>config</code> argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SpeechEncoderDecoderModel.from_encoder_decoder_pretrained.decoder_pretrained_model_name_or_path" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderModel.from_encoder_decoder_pretrained.decoder_pretrained_model_name_or_path"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>decoder_pretrained_model_name_or_path</strong> (<code>str</code>, <em>optional</em>, defaults to <code>None</code>) — Information necessary to initiate the decoder. Can be either:<p></p> <ul> <li>A string, the <em>model id</em> of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like <code>bert-base-uncased</code>, or namespaced under a user or organization name, like <code>dbmdz/bert-base-german-cased</code>.</li> <li>A path to a <em>directory</em> containing model weights saved using <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.save_pretrained">save_pretrained()</a>, e.g., <code>./my_model_directory/</code>.</li> <li>A path or url to a <em>tensorflow index checkpoint file</em> (e.g, <code>./tf_model/model.ckpt.index</code>). In this case, <code>from_tf</code> should be set to <code>True</code> and a configuration object should be provided as <code>config</code> argument. 
This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.</li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SpeechEncoderDecoderModel.from_encoder_decoder_pretrained.model_args" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderModel.from_encoder_decoder_pretrained.model_args"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>model_args</strong> (remaining positional arguments, <em>optional</em>) — All remaning positional arguments will be passed to the underlying model’s <code>__init__</code> method.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.SpeechEncoderDecoderModel.from_encoder_decoder_pretrained.kwargs" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.SpeechEncoderDecoderModel.from_encoder_decoder_pretrained.kwargs"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>kwargs</strong> (remaining dictionary of keyword arguments, <em>optional</em>) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., <code>output_attentions=True</code>).<p></p> <ul> <li>To update the encoder configuration, use the prefix <em>encoder_</em> for each configuration parameter.</li> <li>To update the decoder configuration, use the prefix <em>decoder_</em> for each configuration parameter.</li> <li>To update the parent model configuration, do not use a prefix for each configuration parameter.</li> </ul> <p>Behaves differently depending on whether a <code>config</code> is provided or automatically loaded.</p></span></span> 
Instantiate an encoder and a decoder from one or two base classes of the library from pretrained model checkpoints.

The model is set in evaluation mode by default using `model.eval()` (Dropout modules are deactivated). To train the model, you need to first set it back in training mode with `model.train()`.

Example:

```python
>>> from transformers import SpeechEncoderDecoderModel

>>> # initialize a wav2vec2bert from a pretrained Wav2Vec2 and a pretrained BERT model. Note that the cross-attention layers will be randomly initialized
>>> model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(
...     "facebook/wav2vec2-base-960h", "bert-base-uncased"
... )
>>> # saving model after fine-tuning
>>> model.save_pretrained("./wav2vec2bert")
>>> # load fine-tuned model
>>> model = SpeechEncoderDecoderModel.from_pretrained("./wav2vec2bert")
```
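To make the *encoder_*/*decoder_* prefix convention for `kwargs` concrete, here is a hedged sketch of overriding configuration fields at load time. It assumes the Wav2Vec2 encoder config exposes `feat_proj_dropout` and the BERT decoder config exposes `use_cache`; substitute whatever attributes exist on your own encoder and decoder configs:

```python
>>> from transformers import SpeechEncoderDecoderModel

>>> # encoder_-prefixed kwargs update the encoder config, decoder_-prefixed kwargs update the decoder config
>>> model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(
...     "facebook/wav2vec2-base-960h",
...     "bert-base-uncased",
...     encoder_feat_proj_dropout=0.0,  # assumed Wav2Vec2 config field
...     decoder_use_cache=False,        # assumed BERT config field
... )
>>> # the overrides end up on the composed configuration
>>> model.config.encoder.feat_proj_dropout  # 0.0
>>> model.config.decoder.use_cache  # False
```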
## FlaxSpeechEncoderDecoderModel

### class transformers.FlaxSpeechEncoderDecoderModel

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/speech_encoder_decoder/modeling_flax_speech_encoder_decoder.py#L329)

( config: SpeechEncoderDecoderConfig, input_shape: typing.Optional[typing.Tuple] = None, seed: int = 0, dtype: dtype = jax.numpy.float32, _do_init: bool = True, **kwargs )

Parameters

- **config** ([SpeechEncoderDecoderConfig](/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained) method to load the model weights.
- **dtype** (`jax.numpy.dtype`, _optional_, defaults to `jax.numpy.float32`) — The data type of the computation. Can be one of `jax.numpy.float32`, `jax.numpy.float16` (on GPUs) and `jax.numpy.bfloat16` (on TPUs). This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified, all the computation will be performed with the given `dtype`. **Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.** If you wish to change the dtype of the model parameters, see [to_fp16()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_fp16) and [to_bf16()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_bf16).

This class can be used to initialize a speech-sequence-to-text-sequence model with any pretrained speech autoencoding model as the encoder and any pretrained text autoregressive model as the decoder. The encoder is loaded via [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained) and the decoder is loaded via [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained). Cross-attention layers are automatically added to the decoder and should be fine-tuned on a downstream generative task, like summarization.

The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation tasks was shown in [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.

Additionally, in [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) it is shown how leveraging large pretrained speech models for speech translation yields a significant performance improvement.

After such a Speech-Encoder-Decoder model has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples for more information).

This model inherits from [FlaxPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.).

This model is also a Flax Linen [flax.nn.Module](https://flax.readthedocs.io/en/latest/_autosummary/flax.nn.module.html) subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.

[FlaxSpeechEncoderDecoderModel](/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder#transformers.FlaxSpeechEncoderDecoderModel) is a generic model class that will be instantiated as a transformer architecture with the module (flax.nn.Module) of one of the base model classes of the library as the encoder module and another one as the decoder module when created with the `FlaxAutoModel.from_pretrained()` class method for the encoder and the `FlaxAutoModelForCausalLM.from_pretrained()` class method for the decoder.
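A randomly initialized model can also be created directly from a composite `SpeechEncoderDecoderConfig`. The following is a minimal sketch under the assumption of a Wav2Vec2 encoder config and a GPT2 decoder config, chosen purely for illustration:

```python
>>> from transformers import (
...     FlaxSpeechEncoderDecoderModel,
...     GPT2Config,
...     SpeechEncoderDecoderConfig,
...     Wav2Vec2Config,
... )

>>> # combine an encoder config and a decoder config into one composite config
>>> config = SpeechEncoderDecoderConfig.from_encoder_decoder_configs(Wav2Vec2Config(), GPT2Config())

>>> # randomly initialized Flax model; the weights are created by the constructor
>>> model = FlaxSpeechEncoderDecoderModel(config)
```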
#### __call__

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/speech_encoder_decoder/modeling_flax_speech_encoder_decoder.py#L660)

( inputs: Array, attention_mask: typing.Optional[jax.Array] = None, decoder_input_ids: typing.Optional[jax.Array] = None, decoder_attention_mask: typing.Optional[jax.Array] = None, decoder_position_ids: typing.Optional[jax.Array] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None, train: bool = False, freeze_feature_encoder: bool = False, params: dict = None, dropout_rng: PRNGKey = None ) → [transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput) or `tuple(torch.FloatTensor)`

Parameters

- **inputs** (`jnp.ndarray` of shape `(batch_size, sequence_length)` or `(batch_size, sequence_length, feature_dim)`, _optional_) — Float values of input raw speech waveform or speech features. Values can be obtained by loading a `.flac` or `.wav` audio file into an array of type `List[float]` or a `numpy.ndarray`, _e.g._ via the soundfile library (`pip install soundfile`). To prepare the array into `inputs`, either the [Wav2Vec2Processor](/docs/transformers/v4.34.0/en/model_doc/wav2vec2#transformers.Wav2Vec2Processor) or [Speech2TextProcessor](/docs/transformers/v4.34.0/en/model_doc/speech_to_text#transformers.Speech2TextProcessor) should be used for padding and conversion into a tensor of type `torch.FloatTensor`.
- **attention_mask** (`jnp.ndarray` of shape `(batch_size, sequence_length)`, _optional_) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask)
- **decoder_input_ids** (`jnp.ndarray` of shape `(batch_size, target_sequence_length)`, _optional_) — Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using [PreTrainedTokenizer](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and `PreTrainedTokenizer.__call__()` for details. [What are input IDs?](../glossary#input-ids) If `past_key_values` is used, optionally only the last `decoder_input_ids` have to be input (see `past_key_values`). For sequence-to-sequence training, `decoder_input_ids` should be provided. `decoder_input_ids` should be created outside of the model by shifting the `labels` to the right, replacing -100 by the `pad_token_id` and prepending them with the `decoder_start_token_id`.
- **decoder_attention_mask** (`jnp.ndarray` of shape `(batch_size, target_sequence_length)`, _optional_) — Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default.
- **decoder_position_ids** (`numpy.ndarray` of shape `(batch_size, sequence_length)`, _optional_) — Indices of positions of each decoder input sequence token in the position embeddings. Selected in the range `[0, config.decoder.max_position_embeddings - 1]`.
- **output_hidden_states** (`bool`, _optional_) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, _optional_) — If set to `True`, the model will return a `~utils.FlaxSeq2SeqLMOutput` instead of a plain tuple.

Returns

[transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput) or `tuple(torch.FloatTensor)`

A [transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([SpeechEncoderDecoderConfig](/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderConfig)) and inputs.

- **logits** (`jnp.ndarray` of shape `(batch_size, sequence_length, config.vocab_size)`) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
- **past_key_values** (`tuple(tuple(jnp.ndarray))`, _optional_, returned when `use_cache=True` is passed or when `config.use_cache=True`) — Tuple of `tuple(jnp.ndarray)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)` and 2 additional tensors of shape `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`. Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
- **decoder_hidden_states** (`tuple(jnp.ndarray)`, _optional_, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `jnp.ndarray` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
- **decoder_attentions** (`tuple(jnp.ndarray)`, _optional_, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `jnp.ndarray` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
- **cross_attentions** (`tuple(jnp.ndarray)`, _optional_, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `jnp.ndarray` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the decoder's cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
- **encoder_last_hidden_state** (`jnp.ndarray` of shape `(batch_size, sequence_length, hidden_size)`, _optional_) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
- **encoder_hidden_states** (`tuple(jnp.ndarray)`, _optional_, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `jnp.ndarray` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
- **encoder_attentions** (`tuple(jnp.ndarray)`, _optional_, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `jnp.ndarray` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.

The [FlaxSpeechEncoderDecoderModel](/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder#transformers.FlaxSpeechEncoderDecoderModel) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Examples:

```python
>>> from transformers import FlaxSpeechEncoderDecoderModel, AutoTokenizer
>>> import jax.numpy as jnp

>>> # load a fine-tuned wav2vec2-2-bart model
>>> model = FlaxSpeechEncoderDecoderModel.from_pretrained("patrickvonplaten/wav2vec2-2-bart-large")
>>> # load output tokenizer
>>> tokenizer_output = AutoTokenizer.from_pretrained("facebook/bart-large")

>>> # dummy batch of raw speech (2 waveforms of 5000 samples each)
>>> inputs = jnp.ones((2, 5000), dtype=jnp.float32)

>>> # use bart's special bos, pad and eos tokens
>>> model.config.decoder_start_token_id = model.decoder.config.bos_token_id
>>> model.config.pad_token_id = model.decoder.config.pad_token_id
>>> model.config.eos_token_id = model.decoder.config.eos_token_id

>>> outputs = model.generate(inputs)
```
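As a possible follow-up (not part of the original example), the generated ids can be converted back to text with the BART tokenizer loaded above, assuming the Flax `generate()` output exposes its token ids under `sequences`:

```python
>>> # map generated token ids back to strings, dropping BOS/EOS/PAD tokens
>>> transcription = tokenizer_output.batch_decode(outputs.sequences, skip_special_tokens=True)
```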
#### from_encoder_decoder_pretrained

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/speech_encoder_decoder/modeling_flax_speech_encoder_decoder.py#L782)

( encoder_pretrained_model_name_or_path: typing.Union[str, os.PathLike, NoneType] = None, decoder_pretrained_model_name_or_path: typing.Union[str, os.PathLike, NoneType] = None, *model_args, **kwargs )

Parameters

- **encoder_pretrained_model_name_or_path** (`Union[str, os.PathLike]`, _optional_) — Information necessary to initiate the encoder. Can be either:
  - A string, the _model id_ of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like `bert-base-uncased`, or namespaced under a user or organization name, like `dbmdz/bert-base-german-cased`.
  - A path to a _directory_ containing model weights saved using [save_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.save_pretrained), e.g., `./my_model_directory/`.
- **decoder_pretrained_model_name_or_path** (`Union[str, os.PathLike]`, _optional_, defaults to `None`) — Information necessary to initiate the decoder. Can be either:
  - A string, the _model id_ of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like `bert-base-uncased`, or namespaced under a user or organization name, like `dbmdz/bert-base-german-cased`.
  - A path to a _directory_ containing model weights saved using [save_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.save_pretrained), e.g., `./my_model_directory/`.
- **model_args** (remaining positional arguments, _optional_) — All remaining positional arguments will be passed to the underlying model's `__init__` method.
- **kwargs** (remaining dictionary of keyword arguments, _optional_) — Can be used to update the configuration object (after it has been loaded) and initiate the model (e.g., `output_attentions=True`).
  - To update the encoder configuration, use the prefix `encoder_` for each configuration parameter.
  - To update the decoder configuration, use the prefix `decoder_` for each configuration parameter.
  - To update the parent model configuration, do not use a prefix for each configuration parameter.
  
  Behaves differently depending on whether a `config` is provided or automatically loaded.

Instantiate an encoder and a decoder from one or two base classes of the library from pretrained model checkpoints.

Example:

```python
>>> from transformers import FlaxSpeechEncoderDecoderModel

>>> # initialize a wav2vec2-2-bart from pretrained wav2vec2 and bart models. Note that the cross-attention layers will be randomly initialized
>>> model = FlaxSpeechEncoderDecoderModel.from_encoder_decoder_pretrained(
...     "facebook/wav2vec2-large-lv60", "facebook/bart-large"
... )
>>> # saving model after fine-tuning
>>> model.save_pretrained("./wav2vec2-2-bart-large")
>>> # load fine-tuned model
>>> model = FlaxSpeechEncoderDecoderModel.from_pretrained("./wav2vec2-2-bart-large")
```
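As described for `kwargs` above, configuration overrides for the two sub-models can be passed at load time with the `encoder_` and `decoder_` prefixes. The specific overrides below are an illustrative assumption, not taken from the original docs:

```python
>>> # ask both sub-models to return attention weights by overriding their configs at load time
>>> model = FlaxSpeechEncoderDecoderModel.from_encoder_decoder_pretrained(
...     "facebook/wav2vec2-large-lv60",
...     "facebook/bart-large",
...     encoder_output_attentions=True,
...     decoder_output_attentions=True,
... )
```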
2023-10-05T13:33:50.943Z
VisionTextDualEncoder
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderConfig
# VisionTextDualEncoder ## Overview The [VisionTextDualEncoderModel](/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderModel) can be used to initialize a vision-text dual encoder model with any pretrained vision autoencoding model as the vision encoder (_e.g._ [ViT](vit), [BEiT](beit), [DeiT](deit)) and any pretrained text autoencoding model as the text encoder (_e.g._ [RoBERTa](roberta), [BERT](bert)). Two projection layers are added on top of both the vision and text encoders to project the output embeddings to a shared latent space. The projection layers are randomly initialized, so the model should be fine-tuned on a downstream task. This model can be used to align the vision-text embeddings with CLIP-like contrastive image-text training and can then be used for zero-shot vision tasks such as image classification or retrieval. In [LiT: Zero-Shot Transfer with Locked-image Text Tuning](https://arxiv.org/abs/2111.07991) it is shown how leveraging pre-trained (locked/frozen) image and text models for contrastive learning yields significant improvement on new zero-shot vision tasks such as image classification or retrieval. ## VisionTextDualEncoderConfig ### class transformers.VisionTextDualEncoderConfig [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vision_text_dual_encoder/configuration_vision_text_dual_encoder.py#L27) ( projection\_dim = 512, logit\_scale\_init\_value = 2.6592, \*\*kwargs ) Parameters - **text\_config** (`dict`) — Dictionary of configuration options that defines the text model config. - **vision\_config** (`dict`) — Dictionary of configuration options that defines the vision model config. - **projection\_dim** (`int`, _optional_, defaults to 512) — Dimensionality of the text and vision projection layers. - **logit\_scale\_init\_value** (`float`, _optional_, defaults to 2.6592) — The initial value of the _logit\_scale_ parameter. The default is used as per the original CLIP implementation. - **kwargs** (_optional_) — Dictionary of keyword arguments. [VisionTextDualEncoderConfig](/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderConfig) is the configuration class to store the configuration of a [VisionTextDualEncoderModel](/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderModel). It is used to instantiate a [VisionTextDualEncoderModel](/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderModel) model according to the specified arguments, defining the text model and vision model configs. Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs. Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information. 
Examples:

```
>>> from transformers import ViTConfig, BertConfig, VisionTextDualEncoderConfig, VisionTextDualEncoderModel

>>> # Initializing a ViT and BERT configuration
>>> config_vision = ViTConfig()
>>> config_text = BertConfig()

>>> config = VisionTextDualEncoderConfig.from_vision_text_configs(config_vision, config_text, projection_dim=512)

>>> # Initializing a ViT-BERT dual encoder model (with random projection weights)
>>> model = VisionTextDualEncoderModel(config=config)

>>> # Accessing the model configuration
>>> config_vision = model.config.vision_config
>>> config_text = model.config.text_config

>>> # Saving the model, including its configuration
>>> model.save_pretrained("vit-bert")

>>> # Loading the model and config from the pretrained folder
>>> vision_text_config = VisionTextDualEncoderConfig.from_pretrained("vit-bert")
>>> model = VisionTextDualEncoderModel.from_pretrained("vit-bert", config=vision_text_config)
```

## VisionTextDualEncoderProcessor ### class transformers.VisionTextDualEncoderProcessor [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vision_text_dual_encoder/processing_vision_text_dual_encoder.py#L25) ( image\_processor = None, tokenizer = None, \*\*kwargs ) Parameters - **image\_processor** ([AutoImageProcessor](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoImageProcessor)) — The image processor is a required input. - **tokenizer** ([PreTrainedTokenizer](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer)) — The tokenizer is a required input. Constructs a VisionTextDualEncoder processor which wraps an image processor and a tokenizer into a single processor. [VisionTextDualEncoderProcessor](/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderProcessor) offers all the functionalities of [AutoImageProcessor](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoImageProcessor) and [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See `__call__()` and [decode()](/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderProcessor.decode) for more information. The [batch\_decode()](/docs/transformers/v4.34.0/en/model_doc/speecht5#transformers.SpeechT5Tokenizer.batch_decode) method forwards all its arguments to the underlying tokenizer's `batch_decode()`. Please refer to the docstring of that method for more information. The [decode()](/docs/transformers/v4.34.0/en/model_doc/speecht5#transformers.SpeechT5Tokenizer.decode) method forwards all its arguments to the underlying tokenizer's `decode()`. Please refer to the docstring of that method for more information. ## VisionTextDualEncoderModel ### class transformers.VisionTextDualEncoderModel [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vision_text_dual_encoder/modeling_vision_text_dual_encoder.py#L162) ( config: typing.Optional\[transformers.models.vision\_text\_dual\_encoder.configuration\_vision\_text\_dual\_encoder.VisionTextDualEncoderConfig\] = None, vision\_model: typing.Optional\[transformers.modeling\_utils.PreTrainedModel\] = None, text\_model: typing.Optional\[transformers.modeling\_utils.PreTrainedModel\] = None ) Parameters - **config** ([VisionTextDualEncoderConfig](/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. 
This class can be used to initialize a vision-text dual encoder model with any pretrained vision autoencoding model as the vision encoder and any pretrained text model as the text encoder. The vision and text encoders are loaded via the [from\_pretrained()](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) method. The projection layers are automatically added to the model and should be fine-tuned on a downstream task, like contrastive image-text modeling. In [LiT: Zero-Shot Transfer with Locked-image Text Tuning](https://arxiv.org/abs/2111.07991) it is shown how leveraging pre-trained (locked/frozen) image and text models for contrastive learning yields significant improvement on new zero-shot vision tasks such as image classification or retrieval. After such a Vision-Text-Dual-Encoder model has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples for more information). This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.). This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. #### forward [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vision_text_dual_encoder/modeling_vision_text_dual_encoder.py#L293) ( input\_ids: typing.Optional\[torch.LongTensor\] = None, pixel\_values: typing.Optional\[torch.FloatTensor\] = None, attention\_mask: typing.Optional\[torch.Tensor\] = None, position\_ids: typing.Optional\[torch.LongTensor\] = None, return\_loss: typing.Optional\[bool\] = None, token\_type\_ids: typing.Optional\[torch.LongTensor\] = None, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None ) → `transformers.models.clip.modeling_clip.CLIPOutput` or `tuple(torch.FloatTensor)` The [VisionTextDualEncoderModel](/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderModel) forward method overrides the `__call__` special method. Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them. Examples:

```
>>> from PIL import Image
>>> import requests
>>> from transformers import (
...     VisionTextDualEncoderModel,
...     VisionTextDualEncoderProcessor,
...     AutoImageProcessor,
...     AutoTokenizer,
... )

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
>>> processor = VisionTextDualEncoderProcessor(image_processor, tokenizer)
>>> model = VisionTextDualEncoderModel.from_vision_text_pretrained(
...     "google/vit-base-patch16-224", "bert-base-uncased"
... )

>>> # contrastive training
>>> urls = [
...     "http://images.cocodataset.org/val2017/000000039769.jpg",
...     "https://farm3.staticflickr.com/2674/5850229113_4fe05d5265_z.jpg",
... ]
>>> images = [Image.open(requests.get(url, stream=True).raw) for url in urls]
>>> inputs = processor(
...     text=["a photo of a cat", "a photo of a dog"], images=images, return_tensors="pt", padding=True
... )
>>> outputs = model(
...     input_ids=inputs.input_ids,
...     attention_mask=inputs.attention_mask,
...     pixel_values=inputs.pixel_values,
...     return_loss=True,
... )
>>> loss, logits_per_image = outputs.loss, outputs.logits_per_image

>>> # save and load from pretrained
>>> model.save_pretrained("vit-bert")
>>> model = VisionTextDualEncoderModel.from_pretrained("vit-bert")

>>> # inference
>>> outputs = model(**inputs)
>>> logits_per_image = outputs.logits_per_image  # image-text similarity scores
>>> probs = logits_per_image.softmax(dim=1)
```

## FlaxVisionTextDualEncoderModel ### class transformers.FlaxVisionTextDualEncoderModel [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vision_text_dual_encoder/modeling_flax_vision_text_dual_encoder.py#L219) ( config: VisionTextDualEncoderConfig, input\_shape: typing.Optional\[typing.Tuple\] = None, seed: int = 0, dtype: dtype = <class 'jax.numpy.float32'>, \_do\_init: bool = True, \*\*kwargs ) Parameters - **config** ([VisionTextDualEncoderConfig](/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained) method to load the model weights. - **dtype** (`jax.numpy.dtype`, _optional_, defaults to `jax.numpy.float32`) — The data type of the computation. Can be one of `jax.numpy.float32`, `jax.numpy.float16` (on GPUs) and `jax.numpy.bfloat16` (on TPUs). This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified, all the computation will be performed with the given `dtype`. **Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.** If you wish to change the dtype of the model parameters, see [to\_fp16()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_fp16) and [to\_bf16()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_bf16). This class can be used to initialize a vision-text dual encoder model with any pretrained vision autoencoding model as the vision encoder and any pretrained text model as the text encoder. The vision and text encoders are loaded via the [from\_pretrained()](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) method. The projection layers are automatically added to the model and should be fine-tuned on a downstream task, like contrastive image-text modeling. In [LiT: Zero-Shot Transfer with Locked-image Text Tuning](https://arxiv.org/abs/2111.07991) it is shown how leveraging pre-trained (locked/frozen) image and text models for contrastive learning yields significant improvement on new zero-shot vision tasks such as image classification or retrieval. After such a Vision-Text-Dual-Encoder model has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples for more information). This model inherits from [FlaxPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel). 
Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.). This model is also a Flax Linen [flax.linen.Module](https://flax.readthedocs.io/en/latest/flax.linen.html#module) subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to general usage and behavior. Finally, this model supports inherent JAX features such as: - [Just-In-Time (JIT) compilation](https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit) - [Automatic Differentiation](https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation) - [Vectorization](https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap) - [Parallelization](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap) #### \_\_call\_\_ [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vision_text_dual_encoder/modeling_flax_vision_text_dual_encoder.py#L269) ( input\_ids, pixel\_values, attention\_mask = None, position\_ids = None, token\_type\_ids = None, params: dict = None, dropout\_rng: PRNGKey = None, train: bool = False, output\_attentions: typing.Optional\[bool\] = None, output\_hidden\_states: typing.Optional\[bool\] = None, return\_dict: typing.Optional\[bool\] = None ) → `transformers.models.clip.modeling_flax_clip.FlaxCLIPOutput` or `tuple(torch.FloatTensor)` The [FlaxVisionTextDualEncoderModel](/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder#transformers.FlaxVisionTextDualEncoderModel) forward method overrides the `__call__` special method. Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them. Examples:

```
>>> from PIL import Image
>>> import requests
>>> import jax
>>> from transformers import (
...     FlaxVisionTextDualEncoderModel,
...     VisionTextDualEncoderProcessor,
...     AutoImageProcessor,
...     AutoTokenizer,
... )

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
>>> processor = VisionTextDualEncoderProcessor(image_processor, tokenizer)
>>> model = FlaxVisionTextDualEncoderModel.from_vision_text_pretrained(
...     "google/vit-base-patch16-224", "bert-base-uncased"
... )

>>> # process a batch of image-text pairs
>>> urls = [
...     "http://images.cocodataset.org/val2017/000000039769.jpg",
...     "https://farm3.staticflickr.com/2674/5850229113_4fe05d5265_z.jpg",
... ]
>>> images = [Image.open(requests.get(url, stream=True).raw) for url in urls]
>>> inputs = processor(
...     text=["a photo of a cat", "a photo of a dog"], images=images, return_tensors="np", padding=True
... )
>>> outputs = model(
...     input_ids=inputs.input_ids,
...     attention_mask=inputs.attention_mask,
...     pixel_values=inputs.pixel_values,
... )
>>> logits_per_image = outputs.logits_per_image

>>> # save and load from pretrained
>>> model.save_pretrained("vit-bert")
>>> model = FlaxVisionTextDualEncoderModel.from_pretrained("vit-bert")

>>> # inference
>>> outputs = model(**inputs)
>>> logits_per_image = outputs.logits_per_image  # image-text similarity scores
>>> probs = jax.nn.softmax(logits_per_image, axis=1)
```

## TFVisionTextDualEncoderModel ### class transformers.TFVisionTextDualEncoderModel [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vision_text_dual_encoder/modeling_tf_vision_text_dual_encoder.py#L176) ( \*args, \*\*kwargs ) Parameters - **config** ([VisionTextDualEncoderConfig](/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from\_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel.from_pretrained) method to load the model weights. This class can be used to initialize a vision-text dual encoder model with any pretrained vision autoencoding model as the vision encoder and any pretrained text model as the text encoder. The vision and text encoders are loaded via the [from\_pretrained()](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) method. The projection layers are automatically added to the model and should be fine-tuned on a downstream task, like contrastive image-text modeling. In [LiT: Zero-Shot Transfer with Locked-image Text Tuning](https://arxiv.org/abs/2111.07991) it is shown how leveraging pre-trained (locked/frozen) image and text models for contrastive learning yields significant improvement on new zero-shot vision tasks such as image classification or retrieval. After such a Vision-Text-Dual-Encoder model has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples for more information). This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.). This model is also a Keras [Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular Keras Model and refer to the TF documentation for all matters related to general usage and behavior. #### call [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vision_text_dual_encoder/modeling_tf_vision_text_dual_encoder.py#L341) ( input\_ids: tf.Tensor | None = None, pixel\_values: tf.Tensor | None = None, attention\_mask: tf.Tensor | None = None, position\_ids: tf.Tensor | None = None, return\_loss: Optional\[bool\] = None, token\_type\_ids: tf.Tensor | None = None, output\_attentions: Optional\[bool\] = None, output\_hidden\_states: Optional\[bool\] = None, return\_dict: Optional\[bool\] = None, training: bool = False ) → `transformers.models.clip.modeling_tf_clip.TFCLIPOutput` or `tuple(tf.Tensor)` The [TFVisionTextDualEncoderModel](/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder#transformers.TFVisionTextDualEncoderModel) forward method overrides the `__call__` special method. 
Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them. Examples:

```
>>> from PIL import Image
>>> import requests
>>> import tensorflow as tf
>>> from transformers import (
...     TFVisionTextDualEncoderModel,
...     VisionTextDualEncoderProcessor,
...     AutoImageProcessor,
...     AutoTokenizer,
... )

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
>>> processor = VisionTextDualEncoderProcessor(image_processor, tokenizer)
>>> model = TFVisionTextDualEncoderModel.from_vision_text_pretrained(
...     "google/vit-base-patch16-224", "bert-base-uncased"
... )

>>> # contrastive training
>>> urls = [
...     "http://images.cocodataset.org/val2017/000000039769.jpg",
...     "https://farm3.staticflickr.com/2674/5850229113_4fe05d5265_z.jpg",
... ]
>>> images = [Image.open(requests.get(url, stream=True).raw) for url in urls]
>>> inputs = processor(
...     text=["a photo of a cat", "a photo of a dog"], images=images, return_tensors="np", padding=True
... )
>>> outputs = model(
...     input_ids=inputs.input_ids,
...     attention_mask=inputs.attention_mask,
...     pixel_values=inputs.pixel_values,
...     return_loss=True,
... )
>>> loss, logits_per_image = outputs.loss, outputs.logits_per_image

>>> # save and load from pretrained
>>> model.save_pretrained("vit-bert")
>>> model = TFVisionTextDualEncoderModel.from_pretrained("vit-bert")

>>> # inference
>>> outputs = model(**inputs)
>>> logits_per_image = outputs.logits_per_image  # image-text similarity scores
>>> probs = tf.nn.softmax(logits_per_image, axis=1)
```
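As noted in the overview, the projection layers of a newly assembled dual encoder are randomly initialized, so the model has to be fine-tuned with CLIP-like contrastive training before it is useful for zero-shot classification or retrieval. The following is a minimal sketch of a single contrastive training step in PyTorch, reusing the toy image/caption pairs from the examples above and showing how the processor prepares a joint text-image batch; the optimizer choice and learning rate are illustrative assumptions rather than a recommended recipe.

```
import requests
import torch
from PIL import Image
from transformers import (
    AutoImageProcessor,
    AutoTokenizer,
    VisionTextDualEncoderModel,
    VisionTextDualEncoderProcessor,
)

# Assemble a fresh dual encoder from a pretrained ViT and BERT;
# only the two projection layers start from random weights.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
processor = VisionTextDualEncoderProcessor(image_processor, tokenizer)
model = VisionTextDualEncoderModel.from_vision_text_pretrained(
    "google/vit-base-patch16-224", "bert-base-uncased"
)

# A toy batch of image/caption pairs; in practice this comes from your dataset.
urls = [
    "http://images.cocodataset.org/val2017/000000039769.jpg",
    "https://farm3.staticflickr.com/2674/5850229113_4fe05d5265_z.jpg",
]
images = [Image.open(requests.get(url, stream=True).raw) for url in urls]
captions = ["a photo of a cat", "a photo of a dog"]
inputs = processor(text=captions, images=images, return_tensors="pt", padding=True)

# One contrastive training step: `return_loss=True` makes the forward pass
# return the CLIP-style symmetric image-text contrastive loss.
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)  # illustrative hyperparameters
model.train()
outputs = model(**inputs, return_loss=True)
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
```

In practice this step would be wrapped in a loop over a data loader, with `save_pretrained()` called periodically to checkpoint the fine-tuned weights, as in the save/load examples above.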
Small&quot;,&quot;id&quot;:&quot;model_doc/blenderbot-small&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot-small&quot;},{&quot;title&quot;:&quot;BLOOM&quot;,&quot;id&quot;:&quot;model_doc/bloom&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bloom&quot;},{&quot;title&quot;:&quot;BORT&quot;,&quot;id&quot;:&quot;model_doc/bort&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bort&quot;},{&quot;title&quot;:&quot;ByT5&quot;,&quot;id&quot;:&quot;model_doc/byt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/byt5&quot;},{&quot;title&quot;:&quot;CamemBERT&quot;,&quot;id&quot;:&quot;model_doc/camembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/camembert&quot;},{&quot;title&quot;:&quot;CANINE&quot;,&quot;id&quot;:&quot;model_doc/canine&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/canine&quot;},{&quot;title&quot;:&quot;CodeGen&quot;,&quot;id&quot;:&quot;model_doc/codegen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/codegen&quot;},{&quot;title&quot;:&quot;CodeLlama&quot;,&quot;id&quot;:&quot;model_doc/code_llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/code_llama&quot;},{&quot;title&quot;:&quot;ConvBERT&quot;,&quot;id&quot;:&quot;model_doc/convbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convbert&quot;},{&quot;title&quot;:&quot;CPM&quot;,&quot;id&quot;:&quot;model_doc/cpm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpm&quot;},{&quot;title&quot;:&quot;CPMANT&quot;,&quot;id&quot;:&quot;model_doc/cpmant&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpmant&quot;},{&quot;title&quot;:&quot;CTRL&quot;,&quot;id&quot;:&quot;model_doc/ctrl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ctrl&quot;},{&quot;title&quot;:&quot;DeBERTa&quot;,&quot;id&quot;:&quot;model_doc/deberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta&quot;},{&quot;title&quot;:&quot;DeBERTa-v2&quot;,&quot;id&quot;:&quot;model_doc/deberta-v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta-v2&quot;},{&quot;title&quot;:&quot;DialoGPT&quot;,&quot;id&quot;:&quot;model_doc/dialogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dialogpt&quot;},{&quot;title&quot;:&quot;DistilBERT&quot;,&quot;id&quot;:&quot;model_doc/distilbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/distilbert&quot;},{&quot;title&quot;:&quot;DPR&quot;,&quot;id&quot;:&quot;model_doc/dpr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpr&quot;},{&quot;title&quot;:&quot;ELECTRA&quot;,&quot;id&quot;:&quot;model_doc/electra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/electra&quot;},{&quot;title&quot;:&quot;Encoder Decoder 
Models&quot;,&quot;id&quot;:&quot;model_doc/encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encoder-decoder&quot;},{&quot;title&quot;:&quot;ERNIE&quot;,&quot;id&quot;:&quot;model_doc/ernie&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie&quot;},{&quot;title&quot;:&quot;ErnieM&quot;,&quot;id&quot;:&quot;model_doc/ernie_m&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie_m&quot;},{&quot;title&quot;:&quot;ESM&quot;,&quot;id&quot;:&quot;model_doc/esm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/esm&quot;},{&quot;title&quot;:&quot;Falcon&quot;,&quot;id&quot;:&quot;model_doc/falcon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/falcon&quot;},{&quot;title&quot;:&quot;FLAN-T5&quot;,&quot;id&quot;:&quot;model_doc/flan-t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-t5&quot;},{&quot;title&quot;:&quot;FLAN-UL2&quot;,&quot;id&quot;:&quot;model_doc/flan-ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-ul2&quot;},{&quot;title&quot;:&quot;FlauBERT&quot;,&quot;id&quot;:&quot;model_doc/flaubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flaubert&quot;},{&quot;title&quot;:&quot;FNet&quot;,&quot;id&quot;:&quot;model_doc/fnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fnet&quot;},{&quot;title&quot;:&quot;FSMT&quot;,&quot;id&quot;:&quot;model_doc/fsmt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fsmt&quot;},{&quot;title&quot;:&quot;Funnel Transformer&quot;,&quot;id&quot;:&quot;model_doc/funnel&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/funnel&quot;},{&quot;title&quot;:&quot;GPT&quot;,&quot;id&quot;:&quot;model_doc/openai-gpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/openai-gpt&quot;},{&quot;title&quot;:&quot;GPT Neo&quot;,&quot;id&quot;:&quot;model_doc/gpt_neo&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neo&quot;},{&quot;title&quot;:&quot;GPT NeoX&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox&quot;},{&quot;title&quot;:&quot;GPT NeoX Japanese&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox_japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese&quot;},{&quot;title&quot;:&quot;GPT-J&quot;,&quot;id&quot;:&quot;model_doc/gptj&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptj&quot;},{&quot;title&quot;:&quot;GPT2&quot;,&quot;id&quot;:&quot;model_doc/gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt2&quot;},{&quot;title&quot;:&quot;GPTBigCode&quot;,&quot;id&quot;:&quot;model_doc/gpt_bigcode&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode&quot;},{&quot;title&quot;:&quot;GPTSAN 
Japanese&quot;,&quot;id&quot;:&quot;model_doc/gptsan-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese&quot;},{&quot;title&quot;:&quot;GPTSw3&quot;,&quot;id&quot;:&quot;model_doc/gpt-sw3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt-sw3&quot;},{&quot;title&quot;:&quot;HerBERT&quot;,&quot;id&quot;:&quot;model_doc/herbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/herbert&quot;},{&quot;title&quot;:&quot;I-BERT&quot;,&quot;id&quot;:&quot;model_doc/ibert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ibert&quot;},{&quot;title&quot;:&quot;Jukebox&quot;,&quot;id&quot;:&quot;model_doc/jukebox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/jukebox&quot;},{&quot;title&quot;:&quot;LED&quot;,&quot;id&quot;:&quot;model_doc/led&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/led&quot;},{&quot;title&quot;:&quot;LLaMA&quot;,&quot;id&quot;:&quot;model_doc/llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama&quot;},{&quot;title&quot;:&quot;Llama2&quot;,&quot;id&quot;:&quot;model_doc/llama2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama2&quot;},{&quot;title&quot;:&quot;Longformer&quot;,&quot;id&quot;:&quot;model_doc/longformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longformer&quot;},{&quot;title&quot;:&quot;LongT5&quot;,&quot;id&quot;:&quot;model_doc/longt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longt5&quot;},{&quot;title&quot;:&quot;LUKE&quot;,&quot;id&quot;:&quot;model_doc/luke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/luke&quot;},{&quot;title&quot;:&quot;M2M100&quot;,&quot;id&quot;:&quot;model_doc/m2m_100&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/m2m_100&quot;},{&quot;title&quot;:&quot;MarianMT&quot;,&quot;id&quot;:&quot;model_doc/marian&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/marian&quot;},{&quot;title&quot;:&quot;MarkupLM&quot;,&quot;id&quot;:&quot;model_doc/markuplm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/markuplm&quot;},{&quot;title&quot;:&quot;MBart and 
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot;PLBart&quot;,&quot;id&quot;
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/
model_doc/wavlm&quot;},{&quot;title&quot;:&quot;Whisper&quot;,&quot;id&quot;:&quot;model_doc/whisper&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/whisper&quot;},{&quot;title&quot;:&quot;XLS-R&quot;,&quot;id&quot;:&quot;model_doc/xls_r&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xls_r&quot;},{&quot;title&quot;:&quot;XLSR-Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/xlsr_wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2&quot;}]},{&quot;title&quot;:&quot;Multimodal models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALIGN&quot;,&quot;id&quot;:&quot;model_doc/align&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/align&quot;},{&quot;title&quot;:&quot;AltCLIP&quot;,&quot;id&quot;:&quot;model_doc/altclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/altclip&quot;},{&quot;title&quot;:&quot;BLIP&quot;,&quot;id&quot;:&quot;model_doc/blip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip&quot;},{&quot;title&quot;:&quot;BLIP-2&quot;,&quot;id&quot;:&quot;model_doc/blip-2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip-2&quot;},{&quot;title&quot;:&quot;BridgeTower&quot;,&quot;id&quot;:&quot;model_doc/bridgetower&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bridgetower&quot;},{&quot;title&quot;:&quot;BROS&quot;,&quot;id&quot;:&quot;model_doc/bros&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bros&quot;},{&quot;title&quot;:&quot;Chinese-CLIP&quot;,&quot;id&quot;:&quot;model_doc/chinese_clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/chinese_clip&quot;},{&quot;title&quot;:&quot;CLIP&quot;,&quot;id&quot;:&quot;model_doc/clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clip&quot;},{&quot;title&quot;:&quot;CLIPSeg&quot;,&quot;id&quot;:&quot;model_doc/clipseg&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clipseg&quot;},{&quot;title&quot;:&quot;Data2Vec&quot;,&quot;id&quot;:&quot;model_doc/data2vec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/data2vec&quot;},{&quot;title&quot;:&quot;DePlot&quot;,&quot;id&quot;:&quot;model_doc/deplot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deplot&quot;},{&quot;title&quot;:&quot;Donut&quot;,&quot;id&quot;:&quot;model_doc/donut&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/donut&quot;},{&quot;title&quot;:&quot;FLAVA&quot;,&quot;id&quot;:&quot;model_doc/flava&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flava&quot;},{&quot;title&quot;:&quot;GIT&quot;,&quot;id&quot;:&quot;model_doc/git&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/git&quot;},{&quot;title&quot;:&quot;GroupViT&quot;,&quot;id&quot;:&quot;model_doc/groupvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/groupvit&quot;},{&quot;title&quot;:&quot;IDEFICS&quot;,&quot;id&quot;:&quot;model_doc/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/idefics&quot;},{&quot;title&quot;:&quot;InstructBLIP&quot;,&quot;id&quot;:&quot;model_doc/instructblip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/instructblip&quot;},{&quot;title&quot;:&quot;LayoutLM&quot;,&quot;id&quot;:&quot;model_doc/layoutlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlm&quot;},{&quot;title&quot;:&quot;LayoutLMV2&quot;,&quot;id
&quot;:&quot;model_doc/layoutlmv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv2&quot;},{&quot;title&quot;:&quot;LayoutLMV3&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv3&quot;},{&quot;title&quot;:&quot;LayoutXLM&quot;,&quot;id&quot;:&quot;model_doc/layoutxlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutxlm&quot;},{&quot;title&quot;:&quot;LiLT&quot;,&quot;id&quot;:&quot;model_doc/lilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lilt&quot;},{&quot;title&quot;:&quot;LXMERT&quot;,&quot;id&quot;:&quot;model_doc/lxmert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lxmert&quot;},{&quot;title&quot;:&quot;MatCha&quot;,&quot;id&quot;:&quot;model_doc/matcha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/matcha&quot;},{&quot;title&quot;:&quot;MGP-STR&quot;,&quot;id&quot;:&quot;model_doc/mgp-str&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mgp-str&quot;},{&quot;title&quot;:&quot;Nougat&quot;,&quot;id&quot;:&quot;model_doc/nougat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nougat&quot;},{&quot;title&quot;:&quot;OneFormer&quot;,&quot;id&quot;:&quot;model_doc/oneformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/oneformer&quot;},{&quot;title&quot;:&quot;OWL-ViT&quot;,&quot;id&quot;:&quot;model_doc/owlvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/owlvit&quot;},{&quot;title&quot;:&quot;Perceiver&quot;,&quot;id&quot;:&quot;model_doc/perceiver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/perceiver&quot;},{&quot;title&quot;:&quot;Pix2Struct&quot;,&quot;id&quot;:&quot;model_doc/pix2struct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pix2struct&quot;},{&quot;title&quot;:&quot;Segment Anything&quot;,&quot;id&quot;:&quot;model_doc/sam&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sam&quot;},{&quot;title&quot;:&quot;Speech Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/speech-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder&quot;},{&quot;title&quot;:&quot;TAPAS&quot;,&quot;id&quot;:&quot;model_doc/tapas&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapas&quot;},{&quot;title&quot;:&quot;TrOCR&quot;,&quot;id&quot;:&quot;model_doc/trocr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trocr&quot;},{&quot;title&quot;:&quot;TVLT&quot;,&quot;id&quot;:&quot;model_doc/tvlt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tvlt&quot;},{&quot;title&quot;:&quot;ViLT&quot;,&quot;id&quot;:&quot;model_doc/vilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vilt&quot;},{&quot;title&quot;:&quot;Vision Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/vision-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder&quot;},{&quot;title&quot;:&quot;Vision Text Dual 
Encoder&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/vision-text-dual-encoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder&quot;},{&quot;title&quot;:&quot;VisualBERT&quot;,&quot;id&quot;:&quot;model_doc/visual_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/visual_bert&quot;},{&quot;title&quot;:&quot;X-CLIP&quot;,&quot;id&quot;:&quot;model_doc/xclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xclip&quot;}]},{&quot;title&quot;:&quot;Reinforcement learning models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Decision Transformer&quot;,&quot;id&quot;:&quot;model_doc/decision_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/decision_transformer&quot;},{&quot;title&quot;:&quot;Trajectory Transformer&quot;,&quot;id&quot;:&quot;model_doc/trajectory_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer&quot;}]},{&quot;title&quot;:&quot;Time series models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Autoformer&quot;,&quot;id&quot;:&quot;model_doc/autoformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/autoformer&quot;},{&quot;title&quot;:&quot;Informer&quot;,&quot;id&quot;:&quot;model_doc/informer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/informer&quot;},{&quot;title&quot;:&quot;Time Series Transformer&quot;,&quot;id&quot;:&quot;model_doc/time_series_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/time_series_transformer&quot;}]},{&quot;title&quot;:&quot;Graph models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Graphormer&quot;,&quot;id&quot;:&quot;model_doc/graphormer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/graphormer&quot;}]}]},{&quot;title&quot;:&quot;Internal Helpers&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Custom Layers and Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/modeling_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/modeling_utils&quot;},{&quot;title&quot;:&quot;Utilities for pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/pipelines_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/pipelines_utils&quot;},{&quot;title&quot;:&quot;Utilities for Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/tokenization_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/tokenization_utils&quot;},{&quot;title&quot;:&quot;Utilities for Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/trainer_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/trainer_utils&quot;},{&quot;title&quot;:&quot;Utilities for Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/generation_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/generation_utils&quot;},{&quot;title&quot;:&quot;Utilities for Image Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/image_processing_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/image_processing_utils&quot;},{&quot;title&quot;:&quot;Utilities for Audio 
processing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/audio_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/audio_utils&quot;},{&quot;title&quot;:&quot;General Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/file_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/file_utils&quot;},{&quot;title&quot;:&quot;Utilities for Time Series&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/time_series_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/time_series_utils&quot;}]}]}],&quot;chapterId&quot;:&quot;model_doc/vision-text-dual-encoder&quot;,&quot;docType&quot;:&quot;docs&quot;,&quot;isLoggedIn&quot;:false,&quot;lang&quot;:&quot;en&quot;,&quot;langs&quot;:[&quot;de&quot;,&quot;en&quot;,&quot;es&quot;,&quot;fr&quot;,&quot;it&quot;,&quot;ko&quot;,&quot;pt&quot;,&quot;zh&quot;],&quot;library&quot;:&quot;transformers&quot;,&quot;theme&quot;:&quot;light&quot;,&quot;version&quot;:&quot;v4.34.0&quot;,&quot;versions&quot;:[{&quot;version&quot;:&quot;main&quot;},{&quot;version&quot;:&quot;v4.34.0&quot;},{&quot;version&quot;:&quot;v4.33.3&quot;},{&quot;version&quot;:&quot;v4.33.2&quot;},{&quot;version&quot;:&quot;v4.33.0&quot;},{&quot;version&quot;:&quot;v4.32.1&quot;},{&quot;version&quot;:&quot;v4.32.0&quot;},{&quot;version&quot;:&quot;v4.31.0&quot;},{&quot;version&quot;:&quot;v4.30.0&quot;},{&quot;version&quot;:&quot;v4.29.1&quot;},{&quot;version&quot;:&quot;v4.29.0&quot;},{&quot;version&quot;:&quot;v4.28.1&quot;},{&quot;version&quot;:&quot;v4.28.0&quot;},{&quot;version&quot;:&quot;v4.27.2&quot;},{&quot;version&quot;:&quot;v4.27.1&quot;},{&quot;version&quot;:&quot;v4.27.0&quot;},{&quot;version&quot;:&quot;v4.26.1&quot;},{&quot;version&quot;:&quot;v4.26.0&quot;},{&quot;version&quot;:&quot;v4.25.1&quot;},{&quot;version&quot;:&quot;v4.24.0&quot;},{&quot;version&quot;:&quot;v4.23.1&quot;},{&quot;version&quot;:&quot;v4.23.0&quot;},{&quot;version&quot;:&quot;v4.22.2&quot;},{&quot;version&quot;:&quot;v4.22.1&quot;},{&quot;version&quot;:&quot;v4.22.0&quot;},{&quot;version&quot;:&quot;v4.21.3&quot;},{&quot;version&quot;:&quot;v4.21.2&quot;},{&quot;version&quot;:&quot;v4.21.1&quot;},{&quot;version&quot;:&quot;v4.21.0&quot;},{&quot;version&quot;:&quot;v4.20.1&quot;},{&quot;version&quot;:&quot;v4.20.0&quot;},{&quot;version&quot;:&quot;v4.19.4&quot;},{&quot;version&quot;:&quot;v4.19.3&quot;},{&quot;version&quot;:&quot;v4.19.2&quot;},{&quot;version&quot;:&quot;v4.19.0&quot;},{&quot;version&quot;:&quot;v4.18.0&quot;},{&quot;version&quot;:&quot;v4.17.0&quot;},{&quot;version&quot;:&quot;v4.16.2&quot;},{&quot;version&quot;:&quot;v4.16.1&quot;},{&quot;version&quot;:&quot;v4.16.0&quot;},{&quot;version&quot;:&quot;v4.15.0&quot;},{&quot;version&quot;:&quot;v4.14.1&quot;},{&quot;version&quot;:&quot;v4.13.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.5&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.4&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.10.1&quot;},{&quot;sphinx&quot;:true,&quot;versi
on&quot;:&quot;v4.10.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.10.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.0.0&quot;},{&quot;versio
xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-z5ztu8">VisionTextDualEncoder</span></h1> <h2 class="relative group"><a id="overview" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#overview"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1jsw1pg">Overview</span></h2> <p data-svelte-h="svelte-11uh536">The <a href="/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderModel">VisionTextDualEncoderModel</a> can be used to initialize a vision-text dual encoder model with any pretrained vision autoencoding model as the vision encoder (<em>e.g.</em> <a href="vit">ViT</a>, <a href="beit">BEiT</a>, <a href="deit">DeiT</a>) and any pretrained text autoencoding model as the text encoder (<em>e.g.</em> <a href="roberta">RoBERTa</a>, <a href="bert">BERT</a>). Two projection layers are added on top of both the vision and text encoder to project the output embeddings to a shared latent space. The projection layers are randomly initialized so the model should be fine-tuned on a downstream task. 
This model can be used to align the vision-text embeddings using CLIP-like contrastive image-text training and can then be used for zero-shot vision tasks such as image classification or retrieval.

In [LiT: Zero-Shot Transfer with Locked-image Text Tuning](https://arxiv.org/abs/2111.07991) it is shown how leveraging pre-trained (locked/frozen) image and text models for contrastive learning yields significant improvement on new zero-shot vision tasks such as image classification or retrieval.
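The typical workflow pairs a pretrained vision encoder with a pretrained text encoder and fine-tunes the freshly initialized projections contrastively. The snippet below is a minimal sketch of that setup, not taken from the original documentation; the checkpoint names (`google/vit-base-patch16-224-in21k`, `bert-base-uncased`) are illustrative assumptions, and the similarity scores are only meaningful after contrastive fine-tuning.

```python
>>> from PIL import Image
>>> from transformers import (
...     BertTokenizer,
...     ViTImageProcessor,
...     VisionTextDualEncoderModel,
...     VisionTextDualEncoderProcessor,
... )

>>> # Pair a pretrained vision encoder with a pretrained text encoder
>>> # (checkpoint names are illustrative)
>>> model = VisionTextDualEncoderModel.from_vision_text_pretrained(
...     "google/vit-base-patch16-224-in21k", "bert-base-uncased"
... )
>>> processor = VisionTextDualEncoderProcessor(
...     image_processor=ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k"),
...     tokenizer=BertTokenizer.from_pretrained("bert-base-uncased"),
... )

>>> # Placeholder image; in practice use a real photo
>>> image = Image.new("RGB", (224, 224))
>>> inputs = processor(
...     text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True
... )

>>> outputs = model(**inputs)
>>> # Image-text similarity scores; only meaningful after contrastive fine-tuning,
>>> # since the projection layers start out randomly initialized
>>> logits_per_image = outputs.logits_per_image
>>> probs = logits_per_image.softmax(dim=1)
```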
## VisionTextDualEncoderConfig

### class transformers.VisionTextDualEncoderConfig

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vision_text_dual_encoder/configuration_vision_text_dual_encoder.py#L27)

`( projection_dim = 512, logit_scale_init_value = 2.6592, **kwargs )`

Parameters:

- **text_config** (`dict`) — Dictionary of configuration options that defines the text model config.
- **vision_config** (`dict`) — Dictionary of configuration options that defines the vision model config.
- **projection_dim** (`int`, *optional*, defaults to 512) — Dimensionality of the text and vision projection layers.
- **logit_scale_init_value** (`float`, *optional*, defaults to 2.6592) — The initial value of the *logit_scale* parameter. The default is used as per the original CLIP implementation.
- **kwargs** (*optional*) — Dictionary of keyword arguments.

[VisionTextDualEncoderConfig](/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderConfig) is the configuration class to store the configuration of a [VisionTextDualEncoderModel](/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderModel). It is used to instantiate a [VisionTextDualEncoderModel](/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderModel) model according to the specified arguments, defining the text model and vision model configs.

Configuration objects inherit from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) and can be used to control the model outputs.
Read the documentation from [PretrainedConfig](/docs/transformers/v4.34.0/en/main_classes/configuration#transformers.PretrainedConfig) for more information.

Examples:

```python
>>> from transformers import ViTConfig, BertConfig, VisionTextDualEncoderConfig, VisionTextDualEncoderModel

>>> # Initializing a BERT and ViT configuration
>>> config_vision = ViTConfig()
>>> config_text = BertConfig()

>>> config = VisionTextDualEncoderConfig.from_vision_text_configs(config_vision, config_text, projection_dim=512)

>>> # Initializing a BERT and ViT model (with random weights)
>>> model = VisionTextDualEncoderModel(config=config)

>>> # Accessing the model configuration
>>> config_vision = model.config.vision_config
>>> config_text = model.config.text_config

>>> # Saving the model, including its configuration
>>> model.save_pretrained("vit-bert")

>>> # loading model and config from pretrained folder
>>> vision_text_config = VisionTextDualEncoderConfig.from_pretrained("vit-bert")
>>> model = VisionTextDualEncoderModel.from_pretrained("vit-bert", config=vision_text_config)
```
href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vision_text_dual_encoder/configuration_vision_text_dual_encoder.py#L104" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">vision_config<span class="opacity-60">: PretrainedConfig</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">text_config<span class="opacity-60">: PretrainedConfig</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><a href="/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderConfig">VisionTextDualEncoderConfig</a></span></span></p> <div class="!mb-10 relative docstring-details "> <div id="transformers.VisionTextDualEncoderConfig.from_vision_text_configs.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><a href="/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderConfig">VisionTextDualEncoderConfig</a></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>An instance of a configuration object</p> </p> </div></div> <p data-svelte-h="svelte-o5h5nm">Instantiate a <a href="/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderConfig">VisionTextDualEncoderConfig</a> (or a derived class) from text model configuration and vision model configuration.</p></div></div> <h2 class="relative group"><a id="transformers.VisionTextDualEncoderProcessor" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VisionTextDualEncoderProcessor"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-62ojb2">VisionTextDualEncoderProcessor</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 
mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.VisionTextDualEncoderProcessor"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">VisionTextDualEncoderProcessor</span></span></h3> <a id="transformers.VisionTextDualEncoderProcessor" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.VisionTextDualEncoderProcessor"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vision_text_dual_encoder/processing_vision_text_dual_encoder.py#L25" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">image_processor<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">tokenizer<span class="opacity-60"> = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center 
Parameters:

- **image_processor** ([AutoImageProcessor](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoImageProcessor)) — The image processor is a required input.
- **tokenizer** ([PreTrainedTokenizer](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer)) — The tokenizer is a required input.

Constructs a VisionTextDualEncoder processor which wraps an image processor and a tokenizer into a single processor.

[VisionTextDualEncoderProcessor](/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderProcessor) offers all the functionalities of [AutoImageProcessor](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoImageProcessor) and [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See the `__call__()` and [decode()](/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderProcessor.decode) for more information.
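A minimal construction sketch, assuming a ViT image processor and a BERT tokenizer (the checkpoint names are illustrative, not prescribed by the documentation):

```python
>>> from transformers import BertTokenizer, ViTImageProcessor, VisionTextDualEncoderProcessor

>>> # Wrap an image processor and a tokenizer into a single processor
>>> processor = VisionTextDualEncoderProcessor(
...     image_processor=ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k"),
...     tokenizer=BertTokenizer.from_pretrained("bert-base-uncased"),
... )

>>> # Text-only preparation and round-tripping through the wrapped tokenizer
>>> text_inputs = processor(text=["a photo of a cat"], padding=True, return_tensors="pt")
>>> processor.batch_decode(text_inputs.input_ids, skip_special_tokens=True)
['a photo of a cat']
```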
href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer">AutoTokenizer</a>. See the <code>__call__()</code> and <a href="/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderProcessor.decode">decode()</a> for more information.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.VisionTextDualEncoderProcessor.batch_decode"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>batch_decode</span></h4> <a id="transformers.VisionTextDualEncoderProcessor.batch_decode" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.VisionTextDualEncoderProcessor.batch_decode"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vision_text_dual_encoder/processing_vision_text_dual_encoder.py#L116" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white 
dark:hover:bg-white dark:hover:text-black">*args<span class="opacity-60"></span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> </div></div> <p data-svelte-h="svelte-1gouhdd">This method forwards all its arguments to VisionTextDualEncoderTokenizer’s <a href="/docs/transformers/v4.34.0/en/model_doc/speecht5#transformers.SpeechT5Tokenizer.batch_decode">batch_decode()</a>. Please refer to the docstring of this method for more information.</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.VisionTextDualEncoderProcessor.decode"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>decode</span></h4> <a id="transformers.VisionTextDualEncoderProcessor.decode" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.VisionTextDualEncoderProcessor.decode"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vision_text_dual_encoder/processing_vision_text_dual_encoder.py#L123" target="_blank"><span 
data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">*args<span class="opacity-60"></span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> </div></div> <p data-svelte-h="svelte-1pzx093">This method forwards all its arguments to VisionTextDualEncoderTokenizer’s <a href="/docs/transformers/v4.34.0/en/model_doc/speecht5#transformers.SpeechT5Tokenizer.decode">decode()</a>. Please refer to the docstring of this method for more information.</p></div></div> <h2 class="relative group"><a id="transformers.VisionTextDualEncoderModel" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VisionTextDualEncoderModel"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-f0ye25">VisionTextDualEncoderModel</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.VisionTextDualEncoderModel"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span 
class="font-medium">transformers.</span><span class="font-semibold">VisionTextDualEncoderModel</span></span></h3> <a id="transformers.VisionTextDualEncoderModel" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.VisionTextDualEncoderModel"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vision_text_dual_encoder/modeling_vision_text_dual_encoder.py#L162" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60">: typing.Optional[transformers.models.vision_text_dual_encoder.configuration_vision_text_dual_encoder.VisionTextDualEncoderConfig] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">vision_model<span class="opacity-60">: typing.Optional[transformers.modeling_utils.PreTrainedModel] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">text_model<span class="opacity-60">: typing.Optional[transformers.modeling_utils.PreTrainedModel] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VisionTextDualEncoderModel.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VisionTextDualEncoderModel.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 
Parameters:

- **config** ([VisionTextDualEncoderConfig](/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

This class can be used to initialize a vision-text dual encoder model with any pretrained vision autoencoding model as the vision encoder and any pretrained text model as the text encoder. The vision and text encoders are loaded via the [from_pretrained()](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained) method. The projection layers are automatically added to the model and should be fine-tuned on a downstream task, like contrastive image-text modeling.

In [LiT: Zero-Shot Transfer with Locked-image Text Tuning](https://arxiv.org/abs/2111.07991) it is shown how leveraging pre-trained (locked/frozen) image and text models for contrastive learning yields significant improvement on new zero-shot vision tasks such as image classification or retrieval.

After such a Vision-Text-Dual-Encoder model has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples for more information).

This model inherits from [PreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.).

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
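A short sketch of the save/reload round trip described above; the checkpoint names and the save directory are illustrative assumptions, not values from the original documentation.

```python
>>> from transformers import VisionTextDualEncoderModel

>>> # Build (or fine-tune) a dual encoder, then save it like any other model
>>> model = VisionTextDualEncoderModel.from_vision_text_pretrained(
...     "google/vit-base-patch16-224-in21k", "bert-base-uncased"
... )
>>> model.save_pretrained("vit-bert-dual-encoder")

>>> # Reload it later with the standard from_pretrained API
>>> model = VisionTextDualEncoderModel.from_pretrained("vit-bert-dual-encoder")
```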
#### forward

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vision_text_dual_encoder/modeling_vision_text_dual_encoder.py#L293)
`( input_ids: Optional[torch.LongTensor] = None, pixel_values: Optional[torch.FloatTensor] = None, attention_mask: Optional[torch.Tensor] = None, position_ids: Optional[torch.LongTensor] = None, return_loss: Optional[bool] = None, token_type_ids: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None )` → `transformers.models.clip.modeling_clip.CLIPOutput` or `tuple(torch.FloatTensor)`

Parameters:
- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See [`PreTrainedTokenizer.encode()`](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.encode) and [`PreTrainedTokenizer.__call__()`](/docs/transformers/v4.34.0/en/model_doc/vits#transformers.VitsTokenizer.__call__) for details. [What are input IDs?](../glossary#input-ids)
Mask values selected in <code>[0, 1]</code>:<p></p> <ul> <li>1 for tokens that are <strong>not masked</strong>,</li> <li>0 for tokens that are <strong>masked</strong>.</li> </ul> <p><a href="../glossary#attention-mask">What are attention masks?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VisionTextDualEncoderModel.forward.position_ids" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VisionTextDualEncoderModel.forward.position_ids"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>position_ids</strong> (<code>torch.LongTensor</code> of shape <code>(batch_size, sequence_length)</code>, <em>optional</em>) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range <code>[0, config.max_position_embeddings - 1]</code>.<p></p> <p><a href="../glossary#position-ids">What are position IDs?</a></p></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VisionTextDualEncoderModel.forward.pixel_values" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VisionTextDualEncoderModel.forward.pixel_values"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>pixel_values</strong> (<code>torch.FloatTensor</code> of shape <code>(batch_size, num_channels, height, width)</code>) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using an image processor (e.g. if you use ViT as the encoder, you should use <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoImageProcessor">AutoImageProcessor</a>). 
See <a href="/docs/transformers/v4.34.0/en/model_doc/deit#transformers.DeiTFeatureExtractor.__call__">ViTImageProcessor.<strong>call</strong>()</a> for details.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VisionTextDualEncoderModel.forward.return_loss" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VisionTextDualEncoderModel.forward.return_loss"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_loss</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the contrastive loss.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VisionTextDualEncoderModel.forward.output_attentions" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VisionTextDualEncoderModel.forward.output_attentions"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_attentions</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the attentions tensors of all attention layers. 
See <code>attentions</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VisionTextDualEncoderModel.forward.output_hidden_states" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VisionTextDualEncoderModel.forward.output_hidden_states"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>output_hidden_states</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return the hidden states of all layers. See <code>hidden_states</code> under returned tensors for more detail.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.VisionTextDualEncoderModel.forward.return_dict" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VisionTextDualEncoderModel.forward.return_dict"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>return_dict</strong> (<code>bool</code>, <em>optional</em>) — Whether or not to return a <a href="/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput">ModelOutput</a> instead of a plain tuple.</span></span> </li></ul> <div id="transformers.VisionTextDualEncoderModel.forward.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><code>transformers.models.clip.modeling_clip.CLIPOutput</code> or <code>tuple(torch.FloatTensor)</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A <code>transformers.models.clip.modeling_clip.CLIPOutput</code> or a tuple of <code>torch.FloatTensor</code> (if <code>return_dict=False</code> is passed or when 
<code>config.return_dict=False</code>) comprising various elements depending on the configuration (<a href="/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderConfig">VisionTextDualEncoderConfig</a>) and inputs.</p> <ul> <li><strong>loss</strong> (<code>torch.FloatTensor</code> of shape <code>(1,)</code>, <em>optional</em>, returned when <code>return_loss</code> is <code>True</code>) — Contrastive loss for image-text similarity.</li> <li><strong>logits_per_image:(<code>torch.FloatTensor</code></strong> of shape <code>(image_batch_size, text_batch_size)</code>) — The scaled dot product scores between <code>image_embeds</code> and <code>text_embeds</code>. This represents the image-text similarity scores.</li> <li><strong>logits_per_text:(<code>torch.FloatTensor</code></strong> of shape <code>(text_batch_size, image_batch_size)</code>) — The scaled dot product scores between <code>text_embeds</code> and <code>image_embeds</code>. This represents the text-image similarity scores.</li> <li><strong>text_embeds(<code>torch.FloatTensor</code></strong> of shape <code>(batch_size, output_dim</code>) — The text embeddings obtained by applying the projection layer to the pooled output of <a href="/docs/transformers/v4.34.0/en/model_doc/clip#transformers.CLIPTextModel">CLIPTextModel</a>.</li> <li><strong>image_embeds(<code>torch.FloatTensor</code></strong> of shape <code>(batch_size, output_dim</code>) — The image embeddings obtained by applying the projection layer to the pooled output of <a href="/docs/transformers/v4.34.0/en/model_doc/clip#transformers.CLIPVisionModel">CLIPVisionModel</a>.</li> <li><strong>text_model_output(<code>BaseModelOutputWithPooling</code>):</strong> The output of the <a href="/docs/transformers/v4.34.0/en/model_doc/clip#transformers.CLIPTextModel">CLIPTextModel</a>.</li> <li><strong>vision_model_output(<code>BaseModelOutputWithPooling</code>):</strong> The output of the <a href="/docs/transformers/v4.34.0/en/model_doc/clip#transformers.CLIPVisionModel">CLIPVisionModel</a>.</li> </ul> </p> </div></div> <p data-svelte-h="svelte-qtld5d">The <a href="/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderModel">VisionTextDualEncoderModel</a> forward method, overrides the <code>__call__</code> special method.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-fincs2">Although the recipe for forward pass needs to be defined within this function, one should call the <code>Module</code> instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.</p></div> <div class="relative group rounded-md"><a id="transformers.VisionTextDualEncoderModel.forward.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.VisionTextDualEncoderModel.forward.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 
1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-kvfsh7">Examples:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> PIL <span class="hljs-keyword">import</span> Image <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> requests <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> ( <span class="hljs-meta">... </span> VisionTextDualEncoderModel, <span class="hljs-meta">... </span> VisionTextDualEncoderProcessor, <span class="hljs-meta">... </span> AutoImageProcessor, <span class="hljs-meta">... </span> AutoTokenizer, <span class="hljs-meta">... </span>) <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"bert-base-uncased"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>image_processor = AutoImageProcessor.from_pretrained(<span class="hljs-string">"google/vit-base-patch16-224"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>processor = VisionTextDualEncoderProcessor(image_processor, tokenizer) <span class="hljs-meta">&gt;&gt;&gt; </span>model = VisionTextDualEncoderModel.from_vision_text_pretrained( <span class="hljs-meta">... </span> <span class="hljs-string">"google/vit-base-patch16-224"</span>, <span class="hljs-string">"bert-base-uncased"</span> <span class="hljs-meta">... </span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># contrastive training</span> <span class="hljs-meta">&gt;&gt;&gt; </span>urls = [ <span class="hljs-meta">... </span> <span class="hljs-string">"http://images.cocodataset.org/val2017/000000039769.jpg"</span>, <span class="hljs-meta">... </span> <span class="hljs-string">"https://farm3.staticflickr.com/2674/5850229113_4fe05d5265_z.jpg"</span>, <span class="hljs-meta">... 
</span>] <span class="hljs-meta">&gt;&gt;&gt; </span>images = [Image.<span class="hljs-built_in">open</span>(requests.get(url, stream=<span class="hljs-literal">True</span>).raw) <span class="hljs-keyword">for</span> url <span class="hljs-keyword">in</span> urls] <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = processor( <span class="hljs-meta">... </span> text=[<span class="hljs-string">"a photo of a cat"</span>, <span class="hljs-string">"a photo of a dog"</span>], images=images, return_tensors=<span class="hljs-string">"pt"</span>, padding=<span class="hljs-literal">True</span> <span class="hljs-meta">... </span>) <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model( <span class="hljs-meta">... </span> input_ids=inputs.input_ids, <span class="hljs-meta">... </span> attention_mask=inputs.attention_mask, <span class="hljs-meta">... </span> pixel_values=inputs.pixel_values, <span class="hljs-meta">... </span> return_loss=<span class="hljs-literal">True</span>, <span class="hljs-meta">... </span>) <span class="hljs-meta">&gt;&gt;&gt; </span>loss, logits_per_image = outputs.loss, outputs.logits_per_image <span class="hljs-comment"># this is the image-text similarity score</span> <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># save and load from pretrained</span> <span class="hljs-meta">&gt;&gt;&gt; </span>model.save_pretrained(<span class="hljs-string">"vit-bert"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = VisionTextDualEncoderModel.from_pretrained(<span class="hljs-string">"vit-bert"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment"># inference</span> <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model(**inputs) <span class="hljs-meta">&gt;&gt;&gt; </span>logits_per_image = outputs.logits_per_image <span class="hljs-comment"># this is the image-text similarity score</span> <span class="hljs-meta">&gt;&gt;&gt; </span>probs = logits_per_image.softmax(dim=<span class="hljs-number">1</span>) <span class="hljs-comment"># we can take the softmax to get the label probabilities</span></pre></div></div></div></div> <h2 class="relative group"><a id="transformers.FlaxVisionTextDualEncoderModel" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxVisionTextDualEncoderModel"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-136kz3k">FlaxVisionTextDualEncoderModel</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" 
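The `logits_per_image` and `logits_per_text` returned above are, in effect, a temperature-scaled dot product between the (already L2-normalised) image and text embeddings. The following minimal sketch, which assumes the model exposes a CLIP-style `logit_scale` parameter and re-uses `model` and `outputs` from the example above, shows how the similarity scores relate to the returned embeddings:

```python
>>> import torch

>>> # The returned embeddings are the projected, L2-normalised pooled outputs, so the
>>> # similarity logits can be recovered as a scaled matrix product.
>>> scale = model.logit_scale.exp()  # assumes a CLIP-style learnable temperature
>>> manual_logits_per_image = outputs.image_embeds @ outputs.text_embeds.t() * scale
>>> # `manual_logits_per_image` matches `outputs.logits_per_image` up to floating-point error
```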
id="transformers.FlaxVisionTextDualEncoderModel"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">FlaxVisionTextDualEncoderModel</span></span></h3> <a id="transformers.FlaxVisionTextDualEncoderModel" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.FlaxVisionTextDualEncoderModel"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vision_text_dual_encoder/modeling_flax_vision_text_dual_encoder.py#L219" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">config<span class="opacity-60">: VisionTextDualEncoderConfig</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">input_shape<span class="opacity-60">: typing.Optional[typing.Tuple] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">seed<span class="opacity-60">: int = 0</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">dtype<span class="opacity-60">: dtype = &lt;class 
'jax.numpy.float32'&gt;</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">_do_init<span class="opacity-60">: bool = True</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxVisionTextDualEncoderModel.config" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxVisionTextDualEncoderModel.config"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>config</strong> (<a href="/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderConfig">VisionTextDualEncoderConfig</a>) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. 
Check out the <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.from_pretrained">from_pretrained()</a> method to load the model weights.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.FlaxVisionTextDualEncoderModel.dtype" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.FlaxVisionTextDualEncoderModel.dtype"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>dtype</strong> (<code>jax.numpy.dtype</code>, <em>optional</em>, defaults to <code>jax.numpy.float32</code>) — The data type of the computation. Can be one of <code>jax.numpy.float32</code>, <code>jax.numpy.float16</code> (on GPUs) and <code>jax.numpy.bfloat16</code> (on TPUs).<p></p> <p>This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given <code>dtype</code>.</p> <p><strong>Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.</strong></p> <p>If you wish to change the dtype of the model parameters, see <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_fp16">to_fp16()</a> and <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.FlaxPreTrainedModel.to_bf16">to_bf16()</a>.</p></span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1anl5k7">This class can be used to initialize a vision-text dual encoder model with any pretrained vision autoencoding model as the vision encoder and any pretrained text model as the text encoder. The vision and text encoders are loaded via the <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.FlaxAutoModelForVision2Seq.from_pretrained">from_pretrained()</a> method. 
This class can be used to initialize a vision-text dual encoder model with any pretrained vision autoencoding model as the vision encoder and any pretrained text model as the text encoder. The vision and text encoders are loaded via the `from_pretrained()` method. The projection layers are automatically added to the model and should be fine-tuned on a downstream task, like contrastive image-text modeling.

In [LiT: Zero-Shot Transfer with Locked-image Text Tuning](https://arxiv.org/abs/2111.07991) it is shown how leveraging pre-trained (locked/frozen) image and text models for contrastive learning yields significant improvement on new zero-shot vision tasks such as image classification or retrieval.

After such a Vision-Text-Dual-Encoder model has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples for more information).

This model inherits from [PreTrainedModel](https://huggingface.co/docs/transformers/v4.34.0/en/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)

This model is also a Flax Linen [flax.linen.Module](https://flax.readthedocs.io/en/latest/flax.linen.html#module) subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matters related to general usage and behavior.

Finally, this model supports inherent JAX features such as:

- [Just-In-Time (JIT) compilation](https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit)
- [Automatic Differentiation](https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation)
- [Vectorization](https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap)
- [Parallelization](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap)
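Tying the points above together, here is a minimal, illustrative sketch (the checkpoint names are only examples) of building the dual encoder from two pretrained backbones and casting its parameters to `bfloat16`; the `dtype` argument of the constructor only controls the computation dtype, while the parameter dtype is changed with `to_bf16()` or `to_fp16()`:

```python
>>> from transformers import FlaxVisionTextDualEncoderModel

>>> # Build the dual encoder from a pretrained vision and a pretrained text backbone
>>> # (checkpoint names are illustrative).
>>> model = FlaxVisionTextDualEncoderModel.from_vision_text_pretrained(
...     "google/vit-base-patch16-224", "bert-base-uncased"
... )

>>> # Cast the parameters to bfloat16 for half-precision inference on TPU; the
>>> # constructor's `dtype` only affects the computation, not the stored parameters.
>>> model.params = model.to_bf16(model.params)
```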
#### __call__

( input_ids, pixel_values, attention_mask = None, position_ids = None, token_type_ids = None, params: dict = None, dropout_rng: PRNGKey = None, train: bool = False, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → `transformers.models.clip.modeling_flax_clip.FlaxCLIPOutput` or `tuple(torch.FloatTensor)`

Parameters

- **input_ids** (`numpy.ndarray` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using `AutoTokenizer`. See `PreTrainedTokenizer.encode()` and `PreTrainedTokenizer.__call__()` for details. [What are input IDs?](https://huggingface.co/docs/transformers/v4.34.0/en/glossary#input-ids)
- **attention_mask** (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](https://huggingface.co/docs/transformers/v4.34.0/en/glossary#attention-mask)
- **position_ids** (`numpy.ndarray` of shape `(batch_size, sequence_length)`, *optional*) — Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](https://huggingface.co/docs/transformers/v4.34.0/en/glossary#position-ids)
- **pixel_values** (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using an image processor (e.g. if you use ViT as the encoder, you should use `AutoImageProcessor`). See `ViTImageProcessor.__call__()` for details.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a [ModelOutput](https://huggingface.co/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.

Returns: `transformers.models.clip.modeling_flax_clip.FlaxCLIPOutput` or `tuple(torch.FloatTensor)`

A `transformers.models.clip.modeling_flax_clip.FlaxCLIPOutput` or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration ([VisionTextDualEncoderConfig](https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderConfig)) and inputs.

- **logits_per_image** (`jnp.ndarray` of shape `(image_batch_size, text_batch_size)`) — The scaled dot product scores between `image_embeds` and `text_embeds`. This represents the image-text similarity scores.
- **logits_per_text** (`jnp.ndarray` of shape `(text_batch_size, image_batch_size)`) — The scaled dot product scores between `text_embeds` and `image_embeds`. This represents the text-image similarity scores.
- **text_embeds** (`jnp.ndarray` of shape `(batch_size, output_dim)`) — The text embeddings obtained by applying the projection layer to the pooled output of `FlaxCLIPTextModel`.
- **image_embeds** (`jnp.ndarray` of shape `(batch_size, output_dim)`) — The image embeddings obtained by applying the projection layer to the pooled output of `FlaxCLIPVisionModel`.
- **text_model_output** (`FlaxBaseModelOutputWithPooling`) — The output of the `FlaxCLIPTextModel`.
- **vision_model_output** (`FlaxBaseModelOutputWithPooling`) — The output of the `FlaxCLIPVisionModel`.

The [FlaxVisionTextDualEncoderModel](https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder#transformers.FlaxVisionTextDualEncoderModel) forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Examples:

```python
>>> from PIL import Image
>>> import requests
>>> import jax
>>> from transformers import (
...     FlaxVisionTextDualEncoderModel,
...     VisionTextDualEncoderProcessor,
...     AutoImageProcessor,
...     AutoTokenizer,
... )

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
>>> processor = VisionTextDualEncoderProcessor(image_processor, tokenizer)
>>> model = FlaxVisionTextDualEncoderModel.from_vision_text_pretrained(
...     "google/vit-base-patch16-224", "bert-base-uncased"
... )

>>> # contrastive training
>>> urls = [
...     "http://images.cocodataset.org/val2017/000000039769.jpg",
...     "https://farm3.staticflickr.com/2674/5850229113_4fe05d5265_z.jpg",
... ]
>>> images = [Image.open(requests.get(url, stream=True).raw) for url in urls]
>>> inputs = processor(
...     text=["a photo of a cat", "a photo of a dog"], images=images, return_tensors="np", padding=True
... )
>>> outputs = model(
...     input_ids=inputs.input_ids,
...     attention_mask=inputs.attention_mask,
...     pixel_values=inputs.pixel_values,
... )
>>> logits_per_image = outputs.logits_per_image  # this is the image-text similarity score

>>> # save and load from pretrained
>>> model.save_pretrained("vit-bert")
>>> model = FlaxVisionTextDualEncoderModel.from_pretrained("vit-bert")

>>> # inference
>>> outputs = model(**inputs)
>>> logits_per_image = outputs.logits_per_image  # this is the image-text similarity score
>>> probs = jax.nn.softmax(logits_per_image, axis=1)  # we can take the softmax to get the label probabilities
```
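Because the model is a Flax module, the scoring step can also be JIT-compiled, as mentioned in the list of JAX features above. The sketch below re-uses `model` and `inputs` from the example and passes the parameters explicitly; the function name and structure are only illustrative:

```python
>>> import jax

>>> # JIT-compile a small scoring function; passing `params` explicitly keeps the
>>> # compiled function purely functional (a sketch, not the only possible pattern).
>>> @jax.jit
... def score(params, input_ids, attention_mask, pixel_values):
...     outputs = model(
...         input_ids=input_ids,
...         attention_mask=attention_mask,
...         pixel_values=pixel_values,
...         params=params,
...     )
...     return outputs.logits_per_image

>>> logits_per_image = score(
...     model.params, inputs.input_ids, inputs.attention_mask, inputs.pixel_values
... )
```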
### class transformers.TFVisionTextDualEncoderModel

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vision_text_dual_encoder/modeling_tf_vision_text_dual_encoder.py#L176)

( \*args, \*\*kwargs )

Parameters

- **config** ([VisionTextDualEncoderConfig](/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderConfig)) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel.from_pretrained) method to load the model weights.

This class can be used to initialize a vision-text dual encoder model with any pretrained vision autoencoding model as the vision encoder and any pretrained text model as the text encoder. The vision and text encoders are loaded via the `from_pretrained()` method. The projection layers are automatically added to the model and should be fine-tuned on a downstream task, like contrastive image-text modeling.

In [LiT: Zero-Shot Transfer with Locked-image Text Tuning](https://arxiv.org/abs/2111.07991) it is shown how leveraging pre-trained (locked/frozen) image and text models for contrastive learning yields a significant improvement on new zero-shot vision tasks such as image classification or retrieval.

After such a Vision-Text-Dual-Encoder model has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples for more information).

This model inherits from [TFPreTrainedModel](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel). Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)

This model is also a Keras [Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it as a regular Keras Model and refer to the TF documentation for all matters related to general usage and behavior.
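The LiT recipe mentioned above comes down to locking (freezing) the image tower so that only the text encoder and the projection layers are updated during contrastive fine-tuning. Below is a minimal sketch of one way to do this in Keras; the `vision_model` attribute name and the weight-counting lines are illustrative assumptions rather than a prescribed API, so verify them against your installed version.

```
>>> import tensorflow as tf
>>> from transformers import TFVisionTextDualEncoderModel

>>> model = TFVisionTextDualEncoderModel.from_vision_text_pretrained(
...     "google/vit-base-patch16-224", "bert-base-uncased"
... )

>>> # lock the image tower: only the text encoder and projection layers stay trainable
>>> # (assumption: the vision encoder is exposed as `model.vision_model`)
>>> model.vision_model.trainable = False

>>> # sanity check: the frozen weights no longer appear among the trainable weights
>>> num_trainable = sum(int(tf.size(w)) for w in model.trainable_weights)
>>> num_frozen = sum(int(tf.size(w)) for w in model.non_trainable_weights)
```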
#### call

[< source >](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/vision_text_dual_encoder/modeling_tf_vision_text_dual_encoder.py#L341)

( input_ids: tf.Tensor | None = None, pixel_values: tf.Tensor | None = None, attention_mask: tf.Tensor | None = None, position_ids: tf.Tensor | None = None, return_loss: Optional[bool] = None, token_type_ids: tf.Tensor | None = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, training: bool = False ) → `transformers.models.clip.modeling_tf_clip.TFCLIPOutput` or `tuple(tf.Tensor)`

Parameters

- **input_ids** (`tf.Tensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using [AutoTokenizer](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoTokenizer). See `PreTrainedTokenizer.encode()` and `PreTrainedTokenizer.__call__()` for details. [What are input IDs?](../glossary#input-ids)
- **attention_mask** (`tf.Tensor` of shape `(batch_size, sequence_length)`, _optional_) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask)
- **position_ids** (`tf.Tensor` of shape `(batch_size, sequence_length)`, _optional_) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. [What are position IDs?](../glossary#position-ids)
- **pixel_values** (`tf.Tensor` of shape `(batch_size, num_channels, height, width)`) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using an image processor (e.g. if you use ViT as the encoder, you should use [AutoImageProcessor](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoImageProcessor)). See `ViTImageProcessor.__call__()` for details.
- **return_loss** (`bool`, _optional_) — Whether or not to return the contrastive loss.
- **output_attentions** (`bool`, _optional_) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, _optional_) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, _optional_) — Whether or not to return a [ModelOutput](/docs/transformers/v4.34.0/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.

A `transformers.models.clip.modeling_tf_clip.TFCLIPOutput` or a tuple of `tf.Tensor` (if `return_dict=False` is passed or when `config.return_dict=False`) is returned, comprising various elements depending on the configuration ([VisionTextDualEncoderConfig](/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder#transformers.VisionTextDualEncoderConfig)) and inputs:

- **loss** (`tf.Tensor` of shape `(1,)`, _optional_, returned when `return_loss` is `True`) — Contrastive loss for image-text similarity.
- **logits_per_image** (`tf.Tensor` of shape `(image_batch_size, text_batch_size)`) — The scaled dot product scores between `image_embeds` and `text_embeds`. This represents the image-text similarity scores.
- **logits_per_text** (`tf.Tensor` of shape `(text_batch_size, image_batch_size)`) — The scaled dot product scores between `text_embeds` and `image_embeds`. This represents the text-image similarity scores.
- **text_embeds** (`tf.Tensor` of shape `(batch_size, output_dim)`) — The text embeddings obtained by applying the projection layer to the pooled output of [TFCLIPTextModel](/docs/transformers/v4.34.0/en/model_doc/clip#transformers.TFCLIPTextModel).
- **image_embeds** (`tf.Tensor` of shape `(batch_size, output_dim)`) — The image embeddings obtained by applying the projection layer to the pooled output of [TFCLIPVisionModel](/docs/transformers/v4.34.0/en/model_doc/clip#transformers.TFCLIPVisionModel).
- **text_model_output** (`~modeling_tf_utils.TFBaseModelOutputWithPooling`) — The output of the [TFCLIPTextModel](/docs/transformers/v4.34.0/en/model_doc/clip#transformers.TFCLIPTextModel).
- **vision_model_output** (`~modeling_tf_utils.TFBaseModelOutputWithPooling`) — The output of the [TFCLIPVisionModel](/docs/transformers/v4.34.0/en/model_doc/clip#transformers.TFCLIPVisionModel).

The [TFVisionTextDualEncoderModel](/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder#transformers.TFVisionTextDualEncoderModel) forward method, overrides the `__call__` special method.

Although the recipe for forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
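To make the two logits fields above concrete: `logits_per_image` has one row per image and one column per text, and `logits_per_text` is its transpose, so the axis you softmax over decides whether you rank texts for each image or images for each text. A small sketch, assuming `outputs` was produced by a call like the one in the example below:

```
>>> import tensorflow as tf

>>> # probabilities over the candidate texts, one row per image
>>> text_probs = tf.nn.softmax(outputs.logits_per_image, axis=1)

>>> # probabilities over the candidate images, one row per text
>>> image_probs = tf.nn.softmax(outputs.logits_per_text, axis=1)

>>> # the two score matrices are transposes of one another
>>> tf.debugging.assert_near(outputs.logits_per_text, tf.transpose(outputs.logits_per_image))
```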
Examples:

```
>>> import tensorflow as tf
>>> from PIL import Image
>>> import requests
>>> from transformers import (
...     TFVisionTextDualEncoderModel,
...     VisionTextDualEncoderProcessor,
...     AutoImageProcessor,
...     AutoTokenizer,
... )

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
>>> processor = VisionTextDualEncoderProcessor(image_processor, tokenizer)
>>> model = TFVisionTextDualEncoderModel.from_vision_text_pretrained(
...     "google/vit-base-patch16-224", "bert-base-uncased"
... )

>>> # contrastive training
>>> urls = [
...     "http://images.cocodataset.org/val2017/000000039769.jpg",
...     "https://farm3.staticflickr.com/2674/5850229113_4fe05d5265_z.jpg",
... ]
>>> images = [Image.open(requests.get(url, stream=True).raw) for url in urls]
>>> inputs = processor(
...     text=["a photo of a cat", "a photo of a dog"], images=images, return_tensors="np", padding=True
... )
>>> outputs = model(
...     input_ids=inputs.input_ids,
...     attention_mask=inputs.attention_mask,
...     pixel_values=inputs.pixel_values,
...     return_loss=True,
... )
>>> loss, logits_per_image = outputs.loss, outputs.logits_per_image  # this is the image-text similarity score

>>> # save and load from pretrained
>>> model.save_pretrained("vit-bert")
>>> model = TFVisionTextDualEncoderModel.from_pretrained("vit-bert")

>>> # inference
>>> outputs = model(**inputs)
>>> logits_per_image = outputs.logits_per_image  # this is the image-text similarity score
>>> probs = tf.nn.softmax(logits_per_image, axis=1)  # we can take the softmax to get the label probabilities
```
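One follow-up note on the save/load step in the example above: saving the processor next to the model keeps the tokenizer and image-processor configuration with the checkpoint, so the whole pipeline can later be restored from a single directory. A minimal sketch, reusing the `processor` object and the local `vit-bert` directory from the example:

```
>>> from transformers import TFVisionTextDualEncoderModel, VisionTextDualEncoderProcessor

>>> # persist the processor alongside the model weights saved above
>>> processor.save_pretrained("vit-bert")

>>> # later: reload both pieces from the same directory
>>> model = TFVisionTextDualEncoderModel.from_pretrained("vit-bert")
>>> processor = VisionTextDualEncoderProcessor.from_pretrained("vit-bert")
```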
data-props="{&quot;chapter&quot;:{&quot;title&quot;:&quot;VisionTextDualEncoder&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;visiontextdualencoder&quot;,&quot;url&quot;:&quot;#visiontextdualencoder&quot;,&quot;sections&quot;:[{&quot;title&quot;:&quot;Overview&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;overview&quot;,&quot;url&quot;:&quot;#overview&quot;},{&quot;title&quot;:&quot;VisionTextDualEncoderConfig&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.VisionTextDualEncoderConfig&quot;,&quot;url&quot;:&quot;#transformers.VisionTextDualEncoderConfig&quot;},{&quot;title&quot;:&quot;VisionTextDualEncoderProcessor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.VisionTextDualEncoderProcessor&quot;,&quot;url&quot;:&quot;#transformers.VisionTextDualEncoderProcessor&quot;},{&quot;title&quot;:&quot;VisionTextDualEncoderModel&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.VisionTextDualEncoderModel&quot;,&quot;url&quot;:&quot;#transformers.VisionTextDualEncoderModel&quot;},{&quot;title&quot;:&quot;FlaxVisionTextDualEncoderModel&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.FlaxVisionTextDualEncoderModel&quot;,&quot;url&quot;:&quot;#transformers.FlaxVisionTextDualEncoderModel&quot;},{&quot;title&quot;:&quot;TFVisionTextDualEncoderModel&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.TFVisionTextDualEncoderModel&quot;,&quot;url&quot;:&quot;#transformers.TFVisionTextDualEncoderModel&quot;}]}}" data-target="SubSideMenu"><nav class="hidden h-screen w-[270px] flex-none flex-col space-y-3 overflow-y-auto break-words border-l pt-24 pl-6 pr-10 pb-16 text-sm lg:flex 2xl:w-[305px]"><a href="#visiontextdualencoder" class=" text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-visiontextdualencoder"><wbr>Vision<wbr>Text<wbr>Dual<wbr>Encoder</a> <a href="#overview" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-overview"><wbr>Overview</a> <a href="#transformers.VisionTextDualEncoderConfig" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.VisionTextDualEncoderConfig"><wbr>Vision<wbr>Text<wbr>Dual<wbr>Encoder<wbr>Config</a> <a href="#transformers.VisionTextDualEncoderProcessor" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.VisionTextDualEncoderProcessor"><wbr>Vision<wbr>Text<wbr>Dual<wbr>Encoder<wbr>Processor</a> <a href="#transformers.VisionTextDualEncoderModel" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.VisionTextDualEncoderModel"><wbr>Vision<wbr>Text<wbr>Dual<wbr>Encoder<wbr>Model</a> <a href="#transformers.FlaxVisionTextDualEncoderModel" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.FlaxVisionTextDualEncoderModel"><wbr>Flax<wbr>Vision<wbr>Text<wbr>Dual<wbr>Encoder<wbr>Model</a> <a href="#transformers.TFVisionTextDualEncoderModel" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.TFVisionTextDualEncoderModel">TF<wbr>Vision<wbr>Text<wbr>Dual<wbr>Encoder<wbr>Model</a> </nav></div></div></div> <div id="doc-footer"></div></main> </div> <script> import("/front/build/kube-5e23f38/index.js"); window.moonSha = 
"kube-5e23f38/"; window.hubConfig = JSON.parse(`{"features":{"signupDisabled":false},"sshGitUrl":"git@hf.co","moonHttpUrl":"https://huggingface.co","captchaApiKey":"bd5f2066-93dc-4bdd-a64b-a24646ca3859","stripePublicKey":"pk_live_x2tdjFXBCvXo2FFmMybezpeM00J6gPCAAc","environment":"production","userAgent":"HuggingFace (production)"}`); </script> <!-- Stripe --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://js.stripe.com/v3/"; script.async = true; document.head.appendChild(script); } </script> <!-- Google analytics v4 --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL"; script.async = true; document.head.appendChild(script); window.dataLayer = window.dataLayer || []; function gtag() { if (window.dataLayer !== undefined) { window.dataLayer.push(arguments); } } gtag("js", new Date()); gtag("config", "G-8Q63TH4CSL", { page_path: "/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder" }); /// ^ See https://developers.google.com/analytics/devguides/collection/gtagjs/pages gtag("consent", "default", { ad_storage: "denied", analytics_storage: "denied" }); /// ^ See https://developers.google.com/tag-platform/gtagjs/reference#consent /// TODO: ask the user for their consent and update this with gtag('consent', 'update') } </script> <!-- Google Analytics v3 (deprecated) --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { (function (i, s, o, g, r, a, m) { i["GoogleAnalyticsObject"] = r; (i[r] = i[r] || function () { (i[r].q = i[r].q || []).push(arguments); }), (i[r].l = 1 * new Date()); (a = s.createElement(o)), (m = s.getElementsByTagName(o)[0]); a.async = 1; a.src = g; m.parentNode.insertBefore(a, m); })(window, document, "script", "https://www.google-analytics.com/analytics.js", "ganalytics"); ganalytics("create", "UA-83738774-2", "auto"); ganalytics("send", "pageview", "/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder"); } </script> <iframe name="__privateStripeMetricsController6800" frameborder="0" allowtransparency="true" scrolling="no" role="presentation" allow="payment *" src="https://js.stripe.com/v3/m-outer-27c67c0d52761104439bb051c7856ab1.html#url=https%3A%2F%2Fhuggingface.co%2Fdocs%2Ftransformers%2Fv4.34.0%2Fen%2Fmodel_doc%2Fvision-text-dual-encoder%23transformers.VisionTextDualEncoderConfig&amp;title=VisionTextDualEncoder&amp;referrer=&amp;muid=38397bf3-d1df-433f-a1ab-3a999964eeba83e258&amp;sid=7a2cecc6-6b9a-4e4a-88b4-4bd8a189a43fe6315f&amp;version=6&amp;preview=false" aria-hidden="true" tabindex="-1" style="border: none !important; margin: 0px !important; padding: 0px !important; width: 1px !important; min-width: 100% !important; overflow: hidden !important; display: block !important; visibility: hidden !important; position: fixed !important; height: 1px !important; pointer-events: none !important; user-select: none !important;"></iframe></body></html>
2023-10-05T13:33:51.278Z
BertJapanese
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/bert-japanese#transformers.BertJapaneseTokenizer
# BertJapanese

## Overview

The BERT models trained on Japanese text.

There are models with two different tokenization methods:

- Tokenize with MeCab and WordPiece. This requires some extra dependencies, [fugashi](https://github.com/polm/fugashi) which is a wrapper around [MeCab](https://taku910.github.io/mecab/).
- Tokenize into characters.

To use _MecabTokenizer_, you should `pip install transformers["ja"]` (or `pip install -e .["ja"]` if you install from source) to install dependencies. See [details on cl-tohoku repository](https://github.com/cl-tohoku/bert-japanese).

Example of using a model with MeCab and WordPiece tokenization:

```
>>> import torch
>>> from transformers import AutoModel, AutoTokenizer

>>> bertjapanese = AutoModel.from_pretrained("cl-tohoku/bert-base-japanese")
>>> tokenizer = AutoTokenizer.from_pretrained("cl-tohoku/bert-base-japanese")

>>> line = "吾輩は猫である。"
>>> inputs = tokenizer(line, return_tensors="pt")

>>> print(tokenizer.decode(inputs["input_ids"][0]))
[CLS] 吾輩 は 猫 で ある 。 [SEP]

>>> outputs = bertjapanese(**inputs)
```

Example of using a model with Character tokenization:

```
>>> bertjapanese = AutoModel.from_pretrained("cl-tohoku/bert-base-japanese-char")
>>> tokenizer = AutoTokenizer.from_pretrained("cl-tohoku/bert-base-japanese-char")

>>> line = "吾輩は猫である。"
>>> inputs = tokenizer(line, return_tensors="pt")

>>> print(tokenizer.decode(inputs["input_ids"][0]))
[CLS] 吾 輩 は 猫 で あ る 。 [SEP]

>>> outputs = bertjapanese(**inputs)
```

Tips:

- This implementation is the same as BERT, except for the tokenization method. Refer to the [documentation of BERT](bert) for more usage examples.

This model was contributed by [cl-tohoku](https://huggingface.co/cl-tohoku).

## BertJapaneseTokenizer

### class transformers.BertJapaneseTokenizer

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/bert_japanese/tokenization_bert_japanese.py#L107)

( vocab\_file, spm\_file = None, do\_lower\_case = False, do\_word\_tokenize = True, do\_subword\_tokenize = True, word\_tokenizer\_type = 'basic', subword\_tokenizer\_type = 'wordpiece', never\_split = None, unk\_token = '\[UNK\]', sep\_token = '\[SEP\]', pad\_token = '\[PAD\]', cls\_token = '\[CLS\]', mask\_token = '\[MASK\]', mecab\_kwargs = None, sudachi\_kwargs = None, jumanpp\_kwargs = None, \*\*kwargs )

Construct a BERT tokenizer for Japanese text.

This tokenizer inherits from [PreTrainedTokenizer](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer) which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.

#### build\_inputs\_with\_special\_tokens

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/bert_japanese/tokenization_bert_japanese.py#L307)

( token\_ids\_0: typing.List\[int\], token\_ids\_1: typing.Optional\[typing.List\[int\]\] = None ) → `List[int]`

Parameters

- **token\_ids\_0** (`List[int]`) — List of IDs to which the special tokens will be added.
- **token\_ids\_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs.

Returns a list of [input IDs](../glossary#input-ids) with the appropriate special tokens.

Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. A BERT sequence has the following format:

- single sequence: `[CLS] X [SEP]`
- pair of sequences: `[CLS] A [SEP] B [SEP]`

Converts a sequence of tokens (string) into a single string.
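As a concrete illustration of the pair format described above, the short sketch below builds `[CLS] A [SEP] B [SEP]` inputs by hand; the second sentence is an arbitrary example, and the exact token IDs depend on the checkpoint's vocabulary:

```
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("cl-tohoku/bert-base-japanese")

>>> ids_a = tokenizer.encode("吾輩は猫である。", add_special_tokens=False)
>>> ids_b = tokenizer.encode("名前はまだ無い。", add_special_tokens=False)

>>> # single sequence: [CLS] A [SEP]
>>> single = tokenizer.build_inputs_with_special_tokens(ids_a)

>>> # pair of sequences: [CLS] A [SEP] B [SEP]
>>> pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)
>>> len(pair) == len(ids_a) + len(ids_b) + 3  # one [CLS] plus two [SEP]
True
```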
#### create\_token\_type\_ids\_from\_sequences

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/bert_japanese/tokenization_bert_japanese.py#L362)

( token\_ids\_0: typing.List\[int\], token\_ids\_1: typing.Optional\[typing.List\[int\]\] = None ) → `List[int]`

Parameters

- **token\_ids\_0** (`List[int]`) — List of IDs.
- **token\_ids\_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs.

Returns a list of [token type IDs](../glossary#token-type-ids) according to the given sequence(s).

Create a mask from the two sequences passed to be used in a sequence-pair classification task. A BERT sequence pair mask has the following format:

```
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence    | second sequence |
```

If `token_ids_1` is `None`, this method only returns the first portion of the mask (0s).

#### get\_special\_tokens\_mask

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/bert_japanese/tokenization_bert_japanese.py#L333)

( token\_ids\_0: typing.List\[int\], token\_ids\_1: typing.Optional\[typing.List\[int\]\] = None, already\_has\_special\_tokens: bool = False ) → `List[int]`

Parameters

- **token\_ids\_0** (`List[int]`) — List of IDs.
- **token\_ids\_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs.
- **already\_has\_special\_tokens** (`bool`, _optional_, defaults to `False`) — Whether or not the token list is already formatted with special tokens for the model.

Returns a list of integers in the range \[0, 1\]: 1 for a special token, 0 for a sequence token.

Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer `prepare_for_model` method.
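To make the two helper methods above concrete, here is a small sketch; the sentences are arbitrary illustrations and only the structure of the returned masks matters:

```
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("cl-tohoku/bert-base-japanese")

>>> ids_a = tokenizer.encode("吾輩は猫である。", add_special_tokens=False)
>>> ids_b = tokenizer.encode("名前はまだ無い。", add_special_tokens=False)

>>> # token type IDs: 0 for "[CLS] A [SEP]", 1 for "B [SEP]"
>>> token_type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)
>>> token_type_ids.count(0) == len(ids_a) + 2 and token_type_ids.count(1) == len(ids_b) + 1
True

>>> # special tokens mask: 1 marks [CLS]/[SEP], 0 marks ordinary sequence tokens
>>> special_mask = tokenizer.get_special_tokens_mask(ids_a, ids_b)
>>> special_mask.count(1)
3
```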
classification&quot;,&quot;id&quot;:&quot;tasks/audio_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/audio_classification&quot;},{&quot;title&quot;:&quot;Automatic speech recognition&quot;,&quot;id&quot;:&quot;tasks/asr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/asr&quot;}]},{&quot;title&quot;:&quot;Computer Vision&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Image classification&quot;,&quot;id&quot;:&quot;tasks/image_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/image_classification&quot;},{&quot;title&quot;:&quot;Semantic segmentation&quot;,&quot;id&quot;:&quot;tasks/semantic_segmentation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/semantic_segmentation&quot;},{&quot;title&quot;:&quot;Video classification&quot;,&quot;id&quot;:&quot;tasks/video_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/video_classification&quot;},{&quot;title&quot;:&quot;Object detection&quot;,&quot;id&quot;:&quot;tasks/object_detection&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/object_detection&quot;},{&quot;title&quot;:&quot;Zero-shot object detection&quot;,&quot;id&quot;:&quot;tasks/zero_shot_object_detection&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/zero_shot_object_detection&quot;},{&quot;title&quot;:&quot;Zero-shot image classification&quot;,&quot;id&quot;:&quot;tasks/zero_shot_image_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/zero_shot_image_classification&quot;},{&quot;title&quot;:&quot;Depth estimation&quot;,&quot;id&quot;:&quot;tasks/monocular_depth_estimation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/monocular_depth_estimation&quot;}]},{&quot;title&quot;:&quot;Multimodal&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Image captioning&quot;,&quot;id&quot;:&quot;tasks/image_captioning&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/image_captioning&quot;},{&quot;title&quot;:&quot;Document Question Answering&quot;,&quot;id&quot;:&quot;tasks/document_question_answering&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/document_question_answering&quot;},{&quot;title&quot;:&quot;Visual Question Answering&quot;,&quot;id&quot;:&quot;tasks/visual_question_answering&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/visual_question_answering&quot;},{&quot;title&quot;:&quot;Text to speech&quot;,&quot;id&quot;:&quot;tasks/text-to-speech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/text-to-speech&quot;}]},{&quot;title&quot;:&quot;Generation&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Customize the generation strategy&quot;,&quot;id&quot;:&quot;generation_strategies&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/generation_strategies&quot;}]},{&quot;title&quot;:&quot;Prompting&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Image tasks with IDEFICS&quot;,&quot;id&quot;:&quot;tasks/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/idefics&quot;}]}]},{&quot;title&quot;:&quot;Developer guides&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Use fast tokenizers from 🤗 
Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;fast_tokenizers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/fast_tokenizers&quot;},{&quot;title&quot;:&quot;Run inference with multilingual models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;multilingual&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/multilingual&quot;},{&quot;title&quot;:&quot;Use model-specific APIs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;create_a_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/create_a_model&quot;},{&quot;title&quot;:&quot;Share a custom model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;custom_models&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/custom_models&quot;},{&quot;title&quot;:&quot;Templates for chat models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;chat_templating&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/chat_templating&quot;},{&quot;title&quot;:&quot;Run training on Amazon SageMaker&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;sagemaker&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/sagemaker&quot;},{&quot;title&quot;:&quot;Export to ONNX&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;serialization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/serialization&quot;},{&quot;title&quot;:&quot;Export to TFLite&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tflite&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tflite&quot;},{&quot;title&quot;:&quot;Export to TorchScript&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;torchscript&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/torchscript&quot;},{&quot;title&quot;:&quot;Benchmarks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;benchmarks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/benchmarks&quot;},{&quot;title&quot;:&quot;Notebooks with examples&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;notebooks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/notebooks&quot;},{&quot;title&quot;:&quot;Community resources&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;community&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/community&quot;},{&quot;title&quot;:&quot;Custom Tools and Prompts&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;custom_tools&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/custom_tools&quot;},{&quot;title&quot;:&quot;Troubleshoot&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;troubleshooting&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/troubleshooting&quot;}]},{&quot;title&quot;:&quot;Performance and scalability&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Overview&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;performance&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/performance&quot;},{&quot;title&quot;:&quot;Efficient training techniques&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Methods and tools for efficient training on a single GPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_gpu_one&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_gpu_one&quot;},{&quot;title&quot;:&quot;Multiple GPUs and parallelism&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_gpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_gpu_many&quot;},{&quot;title&quot;:&quot;Efficient 
training on CPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_cpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_cpu&quot;},{&quot;title&quot;:&quot;Distributed CPU training&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_cpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_cpu_many&quot;},{&quot;title&quot;:&quot;Training on TPUs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_tpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_tpu&quot;},{&quot;title&quot;:&quot;Training on TPU with TensorFlow&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_tpu_tf&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_tpu_tf&quot;},{&quot;title&quot;:&quot;Training on Specialized Hardware&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_special&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_special&quot;},{&quot;title&quot;:&quot;Custom hardware for training&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_hardware&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_hardware&quot;},{&quot;title&quot;:&quot;Hyperparameter Search using Trainer API&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;hpo_train&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/hpo_train&quot;}]},{&quot;title&quot;:&quot;Optimizing inference&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Inference on CPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_cpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_cpu&quot;},{&quot;title&quot;:&quot;Inference on one GPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_gpu_one&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_gpu_one&quot;},{&quot;title&quot;:&quot;Inference on many GPUs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_gpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_gpu_many&quot;},{&quot;title&quot;:&quot;Inference on Specialized Hardware&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_special&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_special&quot;}]},{&quot;title&quot;:&quot;Instantiating a big model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;big_models&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/big_models&quot;},{&quot;title&quot;:&quot;Troubleshooting&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;debugging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/debugging&quot;},{&quot;title&quot;:&quot;XLA Integration for TensorFlow Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tf_xla&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tf_xla&quot;},{&quot;title&quot;:&quot;Optimize inference using `torch.compile()`&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_torch_compile&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_torch_compile&quot;}]},{&quot;title&quot;:&quot;Contribute&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;How to contribute to transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;contributing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/contributing&quot;},{&quot;title&quot;:&quot;How to add a model to 🤗 
Transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_new_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_new_model&quot;},{&quot;title&quot;:&quot;How to convert a 🤗 Transformers model to TensorFlow?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_tensorflow_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_tensorflow_model&quot;},{&quot;title&quot;:&quot;How to add a pipeline to 🤗 Transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_new_pipeline&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_new_pipeline&quot;},{&quot;title&quot;:&quot;Testing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;testing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/testing&quot;},{&quot;title&quot;:&quot;Checks on a Pull Request&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pr_checks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pr_checks&quot;}]},{&quot;title&quot;:&quot;Conceptual guides&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Philosophy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;philosophy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/philosophy&quot;},{&quot;title&quot;:&quot;Glossary&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;glossary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/glossary&quot;},{&quot;title&quot;:&quot;What 🤗 Transformers can do&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;task_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/task_summary&quot;},{&quot;title&quot;:&quot;How 🤗 Transformers solve tasks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tasks_explained&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks_explained&quot;},{&quot;title&quot;:&quot;The Transformer model family&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_summary&quot;},{&quot;title&quot;:&quot;Summary of the tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tokenizer_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tokenizer_summary&quot;},{&quot;title&quot;:&quot;Attention mechanisms&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;attention&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/attention&quot;},{&quot;title&quot;:&quot;Padding and truncation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pad_truncation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pad_truncation&quot;},{&quot;title&quot;:&quot;BERTology&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;bertology&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/bertology&quot;},{&quot;title&quot;:&quot;Perplexity of fixed-length models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perplexity&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perplexity&quot;},{&quot;title&quot;:&quot;Pipelines for webserver inference&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pipeline_webserver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pipeline_webserver&quot;},{&quot;title&quot;:&quot;Model training anatomy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_memory_anatomy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_memory_anatomy&quot;}]},{&quot;title&quot;:&quot;API&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Main 
Classes&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Agents and Tools&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/agent&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/agent&quot;},{&quot;title&quot;:&quot;Auto Classes&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/auto&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/auto&quot;},{&quot;title&quot;:&quot;Callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/callback&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/callback&quot;},{&quot;title&quot;:&quot;Configuration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/configuration&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/configuration&quot;},{&quot;title&quot;:&quot;Data Collator&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/data_collator&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/data_collator&quot;},{&quot;title&quot;:&quot;Keras callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/keras_callbacks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/keras_callbacks&quot;},{&quot;title&quot;:&quot;Logging&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/logging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/logging&quot;},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/model&quot;},{&quot;title&quot;:&quot;Text Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/text_generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/text_generation&quot;},{&quot;title&quot;:&quot;ONNX&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/onnx&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/onnx&quot;},{&quot;title&quot;:&quot;Optimization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/optimizer_schedules&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/optimizer_schedules&quot;},{&quot;title&quot;:&quot;Model outputs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/output&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/output&quot;},{&quot;title&quot;:&quot;Pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/pipelines&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/pipelines&quot;},{&quot;title&quot;:&quot;Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/processors&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/processors&quot;},{&quot;title&quot;:&quot;Quantization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/quantization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/quantization&quot;},{&quot;title&quot;:&quot;Tokenizer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/tokenizer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/tokenizer&quot;},{&quot;title&quot;:&quot;Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/trainer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/trainer&quot;},{&quot;title&quot;:&quot;DeepSpeed 
Integration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/deepspeed&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/deepspeed&quot;},{&quot;title&quot;:&quot;Feature Extractor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/feature_extractor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/feature_extractor&quot;},{&quot;title&quot;:&quot;Image Processor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/image_processor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/image_processor&quot;}]},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Text models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALBERT&quot;,&quot;id&quot;:&quot;model_doc/albert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/albert&quot;},{&quot;title&quot;:&quot;BART&quot;,&quot;id&quot;:&quot;model_doc/bart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bart&quot;},{&quot;title&quot;:&quot;BARThez&quot;,&quot;id&quot;:&quot;model_doc/barthez&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/barthez&quot;},{&quot;title&quot;:&quot;BARTpho&quot;,&quot;id&quot;:&quot;model_doc/bartpho&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bartpho&quot;},{&quot;title&quot;:&quot;BERT&quot;,&quot;id&quot;:&quot;model_doc/bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert&quot;},{&quot;title&quot;:&quot;BertGeneration&quot;,&quot;id&quot;:&quot;model_doc/bert-generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-generation&quot;},{&quot;title&quot;:&quot;BertJapanese&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/bert-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-japanese&quot;},{&quot;title&quot;:&quot;Bertweet&quot;,&quot;id&quot;:&quot;model_doc/bertweet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bertweet&quot;},{&quot;title&quot;:&quot;BigBird&quot;,&quot;id&quot;:&quot;model_doc/big_bird&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/big_bird&quot;},{&quot;title&quot;:&quot;BigBirdPegasus&quot;,&quot;id&quot;:&quot;model_doc/bigbird_pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bigbird_pegasus&quot;},{&quot;title&quot;:&quot;BioGpt&quot;,&quot;id&quot;:&quot;model_doc/biogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/biogpt&quot;},{&quot;title&quot;:&quot;Blenderbot&quot;,&quot;id&quot;:&quot;model_doc/blenderbot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot&quot;},{&quot;title&quot;:&quot;Blenderbot 
Small&quot;,&quot;id&quot;:&quot;model_doc/blenderbot-small&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot-small&quot;},{&quot;title&quot;:&quot;BLOOM&quot;,&quot;id&quot;:&quot;model_doc/bloom&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bloom&quot;},{&quot;title&quot;:&quot;BORT&quot;,&quot;id&quot;:&quot;model_doc/bort&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bort&quot;},{&quot;title&quot;:&quot;ByT5&quot;,&quot;id&quot;:&quot;model_doc/byt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/byt5&quot;},{&quot;title&quot;:&quot;CamemBERT&quot;,&quot;id&quot;:&quot;model_doc/camembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/camembert&quot;},{&quot;title&quot;:&quot;CANINE&quot;,&quot;id&quot;:&quot;model_doc/canine&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/canine&quot;},{&quot;title&quot;:&quot;CodeGen&quot;,&quot;id&quot;:&quot;model_doc/codegen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/codegen&quot;},{&quot;title&quot;:&quot;CodeLlama&quot;,&quot;id&quot;:&quot;model_doc/code_llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/code_llama&quot;},{&quot;title&quot;:&quot;ConvBERT&quot;,&quot;id&quot;:&quot;model_doc/convbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convbert&quot;},{&quot;title&quot;:&quot;CPM&quot;,&quot;id&quot;:&quot;model_doc/cpm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpm&quot;},{&quot;title&quot;:&quot;CPMANT&quot;,&quot;id&quot;:&quot;model_doc/cpmant&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpmant&quot;},{&quot;title&quot;:&quot;CTRL&quot;,&quot;id&quot;:&quot;model_doc/ctrl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ctrl&quot;},{&quot;title&quot;:&quot;DeBERTa&quot;,&quot;id&quot;:&quot;model_doc/deberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta&quot;},{&quot;title&quot;:&quot;DeBERTa-v2&quot;,&quot;id&quot;:&quot;model_doc/deberta-v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta-v2&quot;},{&quot;title&quot;:&quot;DialoGPT&quot;,&quot;id&quot;:&quot;model_doc/dialogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dialogpt&quot;},{&quot;title&quot;:&quot;DistilBERT&quot;,&quot;id&quot;:&quot;model_doc/distilbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/distilbert&quot;},{&quot;title&quot;:&quot;DPR&quot;,&quot;id&quot;:&quot;model_doc/dpr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpr&quot;},{&quot;title&quot;:&quot;ELECTRA&quot;,&quot;id&quot;:&quot;model_doc/electra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/electra&quot;},{&quot;title&quot;:&quot;Encoder Decoder 
Models&quot;,&quot;id&quot;:&quot;model_doc/encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encoder-decoder&quot;},{&quot;title&quot;:&quot;ERNIE&quot;,&quot;id&quot;:&quot;model_doc/ernie&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie&quot;},{&quot;title&quot;:&quot;ErnieM&quot;,&quot;id&quot;:&quot;model_doc/ernie_m&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie_m&quot;},{&quot;title&quot;:&quot;ESM&quot;,&quot;id&quot;:&quot;model_doc/esm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/esm&quot;},{&quot;title&quot;:&quot;Falcon&quot;,&quot;id&quot;:&quot;model_doc/falcon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/falcon&quot;},{&quot;title&quot;:&quot;FLAN-T5&quot;,&quot;id&quot;:&quot;model_doc/flan-t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-t5&quot;},{&quot;title&quot;:&quot;FLAN-UL2&quot;,&quot;id&quot;:&quot;model_doc/flan-ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-ul2&quot;},{&quot;title&quot;:&quot;FlauBERT&quot;,&quot;id&quot;:&quot;model_doc/flaubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flaubert&quot;},{&quot;title&quot;:&quot;FNet&quot;,&quot;id&quot;:&quot;model_doc/fnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fnet&quot;},{&quot;title&quot;:&quot;FSMT&quot;,&quot;id&quot;:&quot;model_doc/fsmt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fsmt&quot;},{&quot;title&quot;:&quot;Funnel Transformer&quot;,&quot;id&quot;:&quot;model_doc/funnel&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/funnel&quot;},{&quot;title&quot;:&quot;GPT&quot;,&quot;id&quot;:&quot;model_doc/openai-gpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/openai-gpt&quot;},{&quot;title&quot;:&quot;GPT Neo&quot;,&quot;id&quot;:&quot;model_doc/gpt_neo&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neo&quot;},{&quot;title&quot;:&quot;GPT NeoX&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox&quot;},{&quot;title&quot;:&quot;GPT NeoX Japanese&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox_japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese&quot;},{&quot;title&quot;:&quot;GPT-J&quot;,&quot;id&quot;:&quot;model_doc/gptj&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptj&quot;},{&quot;title&quot;:&quot;GPT2&quot;,&quot;id&quot;:&quot;model_doc/gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt2&quot;},{&quot;title&quot;:&quot;GPTBigCode&quot;,&quot;id&quot;:&quot;model_doc/gpt_bigcode&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode&quot;},{&quot;title&quot;:&quot;GPTSAN 
Japanese&quot;,&quot;id&quot;:&quot;model_doc/gptsan-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese&quot;},{&quot;title&quot;:&quot;GPTSw3&quot;,&quot;id&quot;:&quot;model_doc/gpt-sw3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt-sw3&quot;},{&quot;title&quot;:&quot;HerBERT&quot;,&quot;id&quot;:&quot;model_doc/herbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/herbert&quot;},{&quot;title&quot;:&quot;I-BERT&quot;,&quot;id&quot;:&quot;model_doc/ibert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ibert&quot;},{&quot;title&quot;:&quot;Jukebox&quot;,&quot;id&quot;:&quot;model_doc/jukebox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/jukebox&quot;},{&quot;title&quot;:&quot;LED&quot;,&quot;id&quot;:&quot;model_doc/led&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/led&quot;},{&quot;title&quot;:&quot;LLaMA&quot;,&quot;id&quot;:&quot;model_doc/llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama&quot;},{&quot;title&quot;:&quot;Llama2&quot;,&quot;id&quot;:&quot;model_doc/llama2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama2&quot;},{&quot;title&quot;:&quot;Longformer&quot;,&quot;id&quot;:&quot;model_doc/longformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longformer&quot;},{&quot;title&quot;:&quot;LongT5&quot;,&quot;id&quot;:&quot;model_doc/longt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longt5&quot;},{&quot;title&quot;:&quot;LUKE&quot;,&quot;id&quot;:&quot;model_doc/luke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/luke&quot;},{&quot;title&quot;:&quot;M2M100&quot;,&quot;id&quot;:&quot;model_doc/m2m_100&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/m2m_100&quot;},{&quot;title&quot;:&quot;MarianMT&quot;,&quot;id&quot;:&quot;model_doc/marian&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/marian&quot;},{&quot;title&quot;:&quot;MarkupLM&quot;,&quot;id&quot;:&quot;model_doc/markuplm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/markuplm&quot;},{&quot;title&quot;:&quot;MBart and 
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot;PLBart&quot;,&quot;id&quot;
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/
hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/funnel"><!-- HTML_TAG_START -->Funnel Transformer<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/openai-gpt"><!-- HTML_TAG_START -->GPT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gpt_neo"><!-- HTML_TAG_START -->GPT Neo<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gpt_neox"><!-- HTML_TAG_START -->GPT NeoX<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese"><!-- HTML_TAG_START -->GPT NeoX Japanese<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gptj"><!-- HTML_TAG_START -->GPT-J<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gpt2"><!-- HTML_TAG_START -->GPT2<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode"><!-- HTML_TAG_START -->GPTBigCode<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese"><!-- HTML_TAG_START -->GPTSAN Japanese<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gpt-sw3"><!-- HTML_TAG_START -->GPTSw3<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/herbert"><!-- HTML_TAG_START -->HerBERT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/ibert"><!-- HTML_TAG_START -->I-BERT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/jukebox"><!-- HTML_TAG_START -->Jukebox<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" 
class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/led"><!-- HTML_TAG_START -->LED<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/llama"><!-- HTML_TAG_START -->LLaMA<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/llama2"><!-- HTML_TAG_START -->Llama2<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/longformer"><!-- HTML_TAG_START -->Longformer<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/longt5"><!-- HTML_TAG_START -->LongT5<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/luke"><!-- HTML_TAG_START -->LUKE<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/m2m_100"><!-- HTML_TAG_START -->M2M100<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/marian"><!-- HTML_TAG_START -->MarianMT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/markuplm"><!-- HTML_TAG_START -->MarkupLM<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mbart"><!-- HTML_TAG_START -->MBart and MBart-50<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mega"><!-- HTML_TAG_START -->MEGA<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/megatron-bert"><!-- HTML_TAG_START -->MegatronBERT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2"><!-- HTML_TAG_START -->MegatronGPT2<!-- HTML_TAG_END --> 
</a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mistral"><!-- HTML_TAG_START -->Mistral<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mluke"><!-- HTML_TAG_START -->mLUKE<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mobilebert"><!-- HTML_TAG_START -->MobileBERT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mpnet"><!-- HTML_TAG_START -->MPNet<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mpt"><!-- HTML_TAG_START -->MPT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mra"><!-- HTML_TAG_START -->MRA<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mt5"><!-- HTML_TAG_START -->MT5<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mvp"><!-- HTML_TAG_START -->MVP<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/nezha"><!-- HTML_TAG_START -->NEZHA<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/nllb"><!-- HTML_TAG_START -->NLLB<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/nllb-moe"><!-- HTML_TAG_START -->NLLB-MoE<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/nystromformer"><!-- HTML_TAG_START -->Nyströmformer<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/open-llama"><!-- HTML_TAG_START -->Open-Llama<!-- HTML_TAG_END --> 
</a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/opt"><!-- HTML_TAG_START -->OPT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/pegasus"><!-- HTML_TAG_START -->Pegasus<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/pegasus_x"><!-- HTML_TAG_START -->PEGASUS-X<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/persimmon"><!-- HTML_TAG_START -->Persimmon<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/phobert"><!-- HTML_TAG_START -->PhoBERT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/plbart"><!-- HTML_TAG_START -->PLBart<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/prophetnet"><!-- HTML_TAG_START -->ProphetNet<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/qdqbert"><!-- HTML_TAG_START -->QDQBert<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/rag"><!-- HTML_TAG_START -->RAG<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/realm"><!-- HTML_TAG_START -->REALM<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/reformer"><!-- HTML_TAG_START -->Reformer<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/rembert"><!-- HTML_TAG_START -->RemBERT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/retribert"><!-- HTML_TAG_START 
-->RetriBERT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/roberta"><!-- HTML_TAG_START -->RoBERTa<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm"><!-- HTML_TAG_START -->RoBERTa-PreLayerNorm<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/roc_bert"><!-- HTML_TAG_START -->RoCBert<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/roformer"><!-- HTML_TAG_START -->RoFormer<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/rwkv"><!-- HTML_TAG_START -->RWKV<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/splinter"><!-- HTML_TAG_START -->Splinter<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/squeezebert"><!-- HTML_TAG_START -->SqueezeBERT<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/switch_transformers"><!-- HTML_TAG_START -->SwitchTransformers<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/t5"><!-- HTML_TAG_START -->T5<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/t5v1.1"><!-- HTML_TAG_START -->T5v1.1<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/tapex"><!-- HTML_TAG_START -->TAPEX<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/transfo-xl"><!-- HTML_TAG_START -->Transformer XL<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" 
href="/docs/transformers/v4.34.0/en/model_doc/ul2"><!-- HTML_TAG_START -->UL2<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/umt5"><!-- HTML_TAG_START -->UMT5<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xmod"><!-- HTML_TAG_START -->X-MOD<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xglm"><!-- HTML_TAG_START -->XGLM<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm"><!-- HTML_TAG_START -->XLM<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet"><!-- HTML_TAG_START -->XLM-ProphetNet<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta"><!-- HTML_TAG_START -->XLM-RoBERTa<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl"><!-- HTML_TAG_START -->XLM-RoBERTa-XL<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm-v"><!-- HTML_TAG_START -->XLM-V<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlnet"><!-- HTML_TAG_START -->XLNet<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/yoso"><!-- HTML_TAG_START -->YOSO<!-- HTML_TAG_END --> </a> </div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Vision models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span 
class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Audio models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Multimodal models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Reinforcement learning models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Time series models<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Graph models<!-- HTML_TAG_END --></span> </span></span> </div></div> </div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span><!-- HTML_TAG_START -->Internal Helpers<!-- HTML_TAG_END --></span> </span></span> </div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/modeling_utils"><!-- HTML_TAG_START -->Custom Layers and Utilities<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/pipelines_utils"><!-- HTML_TAG_START -->Utilities for pipelines<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/tokenization_utils"><!-- HTML_TAG_START -->Utilities for Tokenizers<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/trainer_utils"><!-- HTML_TAG_START -->Utilities for Trainer<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 
first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/generation_utils"><!-- HTML_TAG_START -->Utilities for Generation<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/image_processing_utils"><!-- HTML_TAG_START -->Utilities for Image Processors<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/audio_utils"><!-- HTML_TAG_START -->Utilities for Audio processing<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/file_utils"><!-- HTML_TAG_START -->General Utilities<!-- HTML_TAG_END --> </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/time_series_utils"><!-- HTML_TAG_START -->Utilities for Time Series<!-- HTML_TAG_END --> </a> </div> </div></nav></div></div></div> <div class="z-1 min-w-0 flex-1"> <div class="px-6 pt-6 md:px-12 md:pt-16 md:pb-16"><div class="max-w-4xl mx-auto mb-10"><div class="relative overflow-hidden rounded-xl bg-gradient-to-br from-orange-300/10 py-5 px-4 ring-1 ring-orange-100/70 md:px-6 md:py-8"><img alt="Hugging Face's logo" class="absolute -right-6 -bottom-6 w-28 -rotate-45 md:hidden" src="/front/assets/huggingface_logo-noborder.svg"> <div class="mb-2 text-2xl font-bold dark:text-gray-200 md:mb-0">Join the Hugging Face community</div> <p class="mb-4 text-lg text-gray-400 dark:text-gray-300 md:mb-8">and get access to the augmented documentation experience </p> <div class="mb-8 hidden space-y-4 md:block xl:flex xl:space-y-0 xl:space-x-6"><div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-indigo-100 to-indigo-100/20 dark:to-indigo-100"><svg class="text-indigo-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Collaborate on models, datasets and Spaces </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-orange-100 to-orange-100/20 dark:to-orange-50"><svg xmlns="http://www.w3.org/2000/svg" 
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" class="text-xl text-yellow-400" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M11 15H6l7-14v8h5l-7 14v-8z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Faster examples with accelerated inference </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-gray-500/10 to-gray-500/5"><svg class="text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M14.9804 3C14.9217 3.0002 14.8631 3.00555 14.8054 3.016C11.622 3.58252 8.76073 5.30669 6.77248 7.85653C4.78422 10.4064 3.80955 13.6016 4.03612 16.8271C4.26268 20.0525 5.67447 23.0801 7.99967 25.327C10.3249 27.5738 13.3991 28.8811 16.6304 28.997C16.7944 29.003 16.9584 28.997 17.1204 28.997C19.2193 28.9984 21.2877 28.4943 23.1507 27.5274C25.0137 26.5605 26.6164 25.1592 27.8234 23.442C27.9212 23.294 27.9783 23.1229 27.9889 22.9458C27.9995 22.7687 27.9633 22.592 27.884 22.4333C27.8046 22.2747 27.6848 22.1397 27.5367 22.0421C27.3887 21.9444 27.2175 21.8875 27.0404 21.877C25.0426 21.7017 23.112 21.0693 21.3976 20.0288C19.6832 18.9884 18.231 17.5676 17.1533 15.8764C16.0756 14.1852 15.4011 12.2688 15.1822 10.2754C14.9632 8.28193 15.2055 6.26484 15.8904 4.38C15.9486 4.22913 15.97 4.06652 15.9527 3.90572C15.9354 3.74492 15.8799 3.59059 15.7909 3.45557C15.7019 3.32055 15.5819 3.20877 15.4409 3.12952C15.2999 3.05028 15.142 3.00587 14.9804 3Z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Switch between documentation themes </div></div></div> <div class="flex items-center space-x-2.5"><a href="/join"><button class="rounded-lg bg-white bg-gradient-to-br from-gray-100/20 to-gray-200/60 py-1.5 px-5 font-semibold text-gray-700 shadow-sm ring-1 ring-gray-300/60 hover:to-gray-100/70 hover:ring-gray-300/30 active:shadow-inner">Sign Up</button></a> <p class="text-gray-500 dark:text-gray-300">to get started</p></div></div></div> <div class="prose-doc prose relative mx-auto max-w-4xl break-words"> <p></p> <h1 class="relative group"><a id="bertjapanese" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#bertjapanese"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-swoogh">BertJapanese</span></h1> <h2 class="relative group"><a 
id="overview" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#overview"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1jsw1pg">Overview</span></h2> <p data-svelte-h="svelte-q6gk8k">The BERT models trained on Japanese text.</p> <p data-svelte-h="svelte-130rfh4">There are models with two different tokenization methods:</p> <ul data-svelte-h="svelte-1nk5iha"><li>Tokenize with MeCab and WordPiece. This requires some extra dependencies, <a href="https://github.com/polm/fugashi" rel="nofollow">fugashi</a> which is a wrapper around <a href="https://taku910.github.io/mecab/" rel="nofollow">MeCab</a>.</li> <li>Tokenize into characters.</li></ul> <p data-svelte-h="svelte-1dsac8d">To use <em>MecabTokenizer</em>, you should <code>pip install transformers["ja"]</code> (or <code>pip install -e .["ja"]</code> if you install from source) to install dependencies.</p> <p data-svelte-h="svelte-u7nt0b">See <a href="https://github.com/cl-tohoku/bert-japanese" rel="nofollow">details on cl-tohoku repository</a>.</p> <p data-svelte-h="svelte-vqnu8f">Example of using a model with MeCab and WordPiece tokenization:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoModel, AutoTokenizer <span class="hljs-meta">&gt;&gt;&gt; </span>bertjapanese = AutoModel.from_pretrained(<span 
class="hljs-string">"cl-tohoku/bert-base-japanese"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"cl-tohoku/bert-base-japanese"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment">## Input Japanese Text</span> <span class="hljs-meta">&gt;&gt;&gt; </span>line = <span class="hljs-string">"吾輩は猫である。"</span> <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(line, return_tensors=<span class="hljs-string">"pt"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-built_in">print</span>(tokenizer.decode(inputs[<span class="hljs-string">"input_ids"</span>][<span class="hljs-number">0</span>])) [CLS] 吾輩 は 猫 で ある 。 [SEP] <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = bertjapanese(**inputs)</pre></div> <p data-svelte-h="svelte-m6bjin">Example of using a model with Character tokenization:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>bertjapanese = AutoModel.from_pretrained(<span class="hljs-string">"cl-tohoku/bert-base-japanese-char"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"cl-tohoku/bert-base-japanese-char"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-comment">## Input Japanese Text</span> <span class="hljs-meta">&gt;&gt;&gt; </span>line = <span class="hljs-string">"吾輩は猫である。"</span> <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(line, return_tensors=<span class="hljs-string">"pt"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-built_in">print</span>(tokenizer.decode(inputs[<span class="hljs-string">"input_ids"</span>][<span class="hljs-number">0</span>])) [CLS] 吾 輩 は 猫 で あ る 。 [SEP] <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = bertjapanese(**inputs)</pre></div> <p data-svelte-h="svelte-axv494">Tips:</p> <ul data-svelte-h="svelte-14vlmtl"><li>This implementation is the same as BERT, except for tokenization method. 
Refer to the <a href="bert">documentation of BERT</a> for more usage examples.</li></ul> <p data-svelte-h="svelte-1ykd1l9">This model was contributed by <a href="https://huggingface.co/cl-tohoku" rel="nofollow">cl-tohoku</a>.</p> <h2 class="relative group"><a id="transformers.BertJapaneseTokenizer" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.BertJapaneseTokenizer"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-18b117u">BertJapaneseTokenizer</span></h2> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.BertJapaneseTokenizer"><h3 class="!m-0"><span class="flex-1 break-all md:text-lg bg-gradient-to-r px-2.5 py-1.5 rounded-xl from-indigo-50/70 to-white dark:from-gray-900 dark:to-gray-950 dark:text-indigo-300 text-indigo-700"><svg class="mr-1.5 text-indigo-500 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width=".8em" height=".8em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg><span class="font-light">class</span> <span class="font-medium">transformers.</span><span class="font-semibold">BertJapaneseTokenizer</span></span></h3> <a id="transformers.BertJapaneseTokenizer" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.BertJapaneseTokenizer"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 
56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/bert_japanese/tokenization_bert_japanese.py#L107" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">vocab_file<span class="opacity-60"></span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">spm_file<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">do_lower_case<span class="opacity-60"> = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">do_word_tokenize<span class="opacity-60"> = True</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">do_subword_tokenize<span class="opacity-60"> = True</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">word_tokenizer_type<span class="opacity-60"> = 'basic'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">subword_tokenizer_type<span class="opacity-60"> = 'wordpiece'</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">never_split<span class="opacity-60"> = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">unk_token<span class="opacity-60"> = '[UNK]'</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">sep_token<span class="opacity-60"> = '[SEP]'</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">pad_token<span class="opacity-60"> = '[PAD]'</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">cls_token<span class="opacity-60"> = '[CLS]'</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">mask_token<span class="opacity-60"> = '[MASK]'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">mecab_kwargs<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white 
dark:hover:text-black">sudachi_kwargs<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">jumanpp_kwargs<span class="opacity-60"> = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 10 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.BertJapaneseTokenizer.vocab_file" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.BertJapaneseTokenizer.vocab_file"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>vocab_file</strong> (<code>str</code>) — Path to a one-wordpiece-per-line vocabulary file.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.BertJapaneseTokenizer.spm_file" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.BertJapaneseTokenizer.spm_file"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" 
fill="currentColor"></path></svg></span></a> <span><strong>spm_file</strong> (<code>str</code>, <em>optional</em>) — Path to <a href="https://github.com/google/sentencepiece" rel="nofollow">SentencePiece</a> file (generally has a .spm or .model extension) that contains the vocabulary.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.BertJapaneseTokenizer.do_lower_case" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.BertJapaneseTokenizer.do_lower_case"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>do_lower_case</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether to lower case the input. Only has an effect when do_basic_tokenize=True.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.BertJapaneseTokenizer.do_word_tokenize" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.BertJapaneseTokenizer.do_word_tokenize"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>do_word_tokenize</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether to do word tokenization.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.BertJapaneseTokenizer.do_subword_tokenize" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.BertJapaneseTokenizer.do_subword_tokenize"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" 
preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>do_subword_tokenize</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>True</code>) — Whether to do subword tokenization.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.BertJapaneseTokenizer.word_tokenizer_type" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.BertJapaneseTokenizer.word_tokenizer_type"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>word_tokenizer_type</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"basic"</code>) — Type of word tokenizer. Choose from [“basic”, “mecab”, “sudachi”, “jumanpp”].</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.BertJapaneseTokenizer.subword_tokenizer_type" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.BertJapaneseTokenizer.subword_tokenizer_type"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>subword_tokenizer_type</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"wordpiece"</code>) — Type of subword tokenizer. 
Choose from [“wordpiece”, “character”, “sentencepiece”,].</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.BertJapaneseTokenizer.mecab_kwargs" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.BertJapaneseTokenizer.mecab_kwargs"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>mecab_kwargs</strong> (<code>dict</code>, <em>optional</em>) — Dictionary passed to the <code>MecabTokenizer</code> constructor.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.BertJapaneseTokenizer.sudachi_kwargs" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.BertJapaneseTokenizer.sudachi_kwargs"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>sudachi_kwargs</strong> (<code>dict</code>, <em>optional</em>) — Dictionary passed to the <code>SudachiTokenizer</code> constructor.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.BertJapaneseTokenizer.jumanpp_kwargs" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.BertJapaneseTokenizer.jumanpp_kwargs"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 
56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>jumanpp_kwargs</strong> (<code>dict</code>, <em>optional</em>) — Dictionary passed to the <code>JumanppTokenizer</code> constructor.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1tm7ou1">Construct a BERT tokenizer for Japanese text.</p> <p data-svelte-h="svelte-1vzsz7y">This tokenizer inherits from <a href="/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer">PreTrainedTokenizer</a> which contains most of the main methods. Users should refer to: this superclass for more information regarding those methods.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.BertJapaneseTokenizer.build_inputs_with_special_tokens"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>build_inputs_with_special_tokens</span></h4> <a id="transformers.BertJapaneseTokenizer.build_inputs_with_special_tokens" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.BertJapaneseTokenizer.build_inputs_with_special_tokens"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a 
class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/bert_japanese/tokenization_bert_japanese.py#L307" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_0<span class="opacity-60">: typing.List[int]</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_1<span class="opacity-60">: typing.Optional[typing.List[int]] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><code>List[int]</code></span></span></p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.BertJapaneseTokenizer.build_inputs_with_special_tokens.token_ids_0" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.BertJapaneseTokenizer.build_inputs_with_special_tokens.token_ids_0"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_ids_0</strong> (<code>List[int]</code>) — List of IDs to which the special tokens will be added.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.BertJapaneseTokenizer.build_inputs_with_special_tokens.token_ids_1" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.BertJapaneseTokenizer.build_inputs_with_special_tokens.token_ids_1"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 
1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_ids_1</strong> (<code>List[int]</code>, <em>optional</em>) — Optional second list of IDs for sequence pairs.</span></span> </li></ul> <div id="transformers.BertJapaneseTokenizer.build_inputs_with_special_tokens.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><code>List[int]</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>List of <a href="../glossary#input-ids">input IDs</a> with the appropriate special tokens.</p> </p> </div></div> <p data-svelte-h="svelte-t7qurq">Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. A BERT sequence has the following format:</p> <ul data-svelte-h="svelte-xi6653"><li>single sequence: <code>[CLS] X [SEP]</code></li> <li>pair of sequences: <code>[CLS] A [SEP] B [SEP]</code></li></ul></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.BertJapaneseTokenizer.convert_tokens_to_string"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>convert_tokens_to_string</span></h4> <a id="transformers.BertJapaneseTokenizer.convert_tokens_to_string" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.BertJapaneseTokenizer.convert_tokens_to_string"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path 
d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/bert_japanese/tokenization_bert_japanese.py#L299" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">tokens<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> </div></div> <p data-svelte-h="svelte-b3k2yi">Converts a sequence of tokens (string) in a single string.</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.BertJapaneseTokenizer.create_token_type_ids_from_sequences"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>create_token_type_ids_from_sequences</span></h4> <a id="transformers.BertJapaneseTokenizer.create_token_type_ids_from_sequences" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.BertJapaneseTokenizer.create_token_type_ids_from_sequences"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" 
preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/bert_japanese/tokenization_bert_japanese.py#L362" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_0<span class="opacity-60">: typing.List[int]</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_1<span class="opacity-60">: typing.Optional[typing.List[int]] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><code>List[int]</code></span></span></p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.BertJapaneseTokenizer.create_token_type_ids_from_sequences.token_ids_0" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.BertJapaneseTokenizer.create_token_type_ids_from_sequences.token_ids_0"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_ids_0</strong> (<code>List[int]</code>) — List of IDs.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a 
id="transformers.BertJapaneseTokenizer.create_token_type_ids_from_sequences.token_ids_1" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.BertJapaneseTokenizer.create_token_type_ids_from_sequences.token_ids_1"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_ids_1</strong> (<code>List[int]</code>, <em>optional</em>) — Optional second list of IDs for sequence pairs.</span></span> </li></ul> <div id="transformers.BertJapaneseTokenizer.create_token_type_ids_from_sequences.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><code>List[int]</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>List of <a href="../glossary#token-type-ids">token type IDs</a> according to the given sequence(s).</p> </p> </div></div> <p data-svelte-h="svelte-gn6wi7">Create a mask from the two sequences passed to be used in a sequence-pair classification task. 
A BERT sequence</p> <div class="relative group rounded-md"><a id="transformers.BertJapaneseTokenizer.create_token_type_ids_from_sequences.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.BertJapaneseTokenizer.create_token_type_ids_from_sequences.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-qjgeij">pair mask has the following format:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class="">0<span class="hljs-number"> 0 </span>0<span class="hljs-number"> 0 </span>0<span class="hljs-number"> 0 </span>0<span class="hljs-number"> 0 </span>0<span class="hljs-number"> 0 </span>0<span class="hljs-number"> 1 </span>1<span class="hljs-number"> 1 </span>1<span class="hljs-number"> 1 </span>1<span class="hljs-number"> 1 </span>1 1 | first sequence | second sequence |</pre></div></div> <p data-svelte-h="svelte-owoxgn">If <code>token_ids_1</code> is <code>None</code>, this method only returns the first portion of the mask (0s).</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.BertJapaneseTokenizer.get_special_tokens_mask"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" 
xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>get_special_tokens_mask</span></h4> <a id="transformers.BertJapaneseTokenizer.get_special_tokens_mask" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.BertJapaneseTokenizer.get_special_tokens_mask"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/bert_japanese/tokenization_bert_japanese.py#L333" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_0<span class="opacity-60">: typing.List[int]</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_1<span class="opacity-60">: typing.Optional[typing.List[int]] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">already_has_special_tokens<span class="opacity-60">: bool = False</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><code>List[int]</code></span></span></p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 
text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.BertJapaneseTokenizer.get_special_tokens_mask.token_ids_0" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.BertJapaneseTokenizer.get_special_tokens_mask.token_ids_0"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_ids_0</strong> (<code>List[int]</code>) — List of IDs.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.BertJapaneseTokenizer.get_special_tokens_mask.token_ids_1" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.BertJapaneseTokenizer.get_special_tokens_mask.token_ids_1"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_ids_1</strong> (<code>List[int]</code>, <em>optional</em>) — Optional second list of IDs for sequence pairs.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.BertJapaneseTokenizer.get_special_tokens_mask.already_has_special_tokens" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.BertJapaneseTokenizer.get_special_tokens_mask.already_has_special_tokens"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 
1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>already_has_special_tokens</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether or not the token list is already formatted with special tokens for the model.</span></span> </li></ul> <div id="transformers.BertJapaneseTokenizer.get_special_tokens_mask.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><code>List[int]</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.</p> </p> </div></div> <p data-svelte-h="svelte-1f4f5kp">Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer <code>prepare_for_model</code> method.</p></div></div> <p></p> <div id="svelte-announcer" aria-live="assertive" aria-atomic="true" style="position: absolute; left: 0px; top: 0px; clip: rect(0px, 0px, 0px, 0px); clip-path: inset(50%); overflow: hidden; white-space: nowrap; width: 1px; height: 1px;"></div></div> <div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/bert-generation" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>BertGeneration</a> <a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/bertweet" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">Bertweet<span class="ml-2 translate-y-px">→</span></a></div></div></div> <div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" data-props="{&quot;chapter&quot;:{&quot;title&quot;:&quot;BertJapanese&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;bertjapanese&quot;,&quot;url&quot;:&quot;#bertjapanese&quot;,&quot;sections&quot;:[{&quot;title&quot;:&quot;Overview&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;overview&quot;,&quot;url&quot;:&quot;#overview&quot;},{&quot;title&quot;:&quot;BertJapaneseTokenizer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.BertJapaneseTokenizer&quot;,&quot;url&quot;:&quot;#transformers.BertJapaneseTokenizer&quot;}]}}" data-target="SubSideMenu"> <nav class="hidden h-screen w-[270px] flex-none flex-col space-y-3 overflow-y-auto break-words border-l pt-24 pl-6 pr-10 pb-16 text-sm lg:flex 2xl:w-[305px]"><a href="#bertjapanese" class=" text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-bertjapanese"><!-- HTML_TAG_START --><wbr>Bert<wbr>Japanese<!-- HTML_TAG_END --></a> <a href="#overview" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-overview"><!-- 
HTML_TAG_START --><wbr>Overview<!-- HTML_TAG_END --></a> <a href="#transformers.BertJapaneseTokenizer" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.BertJapaneseTokenizer"><!-- HTML_TAG_START --><wbr>Bert<wbr>Japanese<wbr>Tokenizer<!-- HTML_TAG_END --></a> </nav></div></div></div> <div id="doc-footer"></div></main> </div> <script> import("/front/build/kube-5e23f38/index.js"); window.moonSha = "kube-5e23f38/"; window.hubConfig = JSON.parse(`{"features":{"signupDisabled":false},"sshGitUrl":"git@hf.co","moonHttpUrl":"https://huggingface.co","captchaApiKey":"bd5f2066-93dc-4bdd-a64b-a24646ca3859","stripePublicKey":"pk_live_x2tdjFXBCvXo2FFmMybezpeM00J6gPCAAc","environment":"production","userAgent":"HuggingFace (production)"}`); </script> <!-- Stripe --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://js.stripe.com/v3/"; script.async = true; document.head.appendChild(script); } </script> <!-- Google analytics v4 --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL"; script.async = true; document.head.appendChild(script); window.dataLayer = window.dataLayer || []; function gtag() { if (window.dataLayer !== undefined) { window.dataLayer.push(arguments); } } gtag("js", new Date()); gtag("config", "G-8Q63TH4CSL", { page_path: "/docs/transformers/v4.34.0/en/model_doc/bert-japanese" }); /// ^ See https://developers.google.com/analytics/devguides/collection/gtagjs/pages gtag("consent", "default", { ad_storage: "denied", analytics_storage: "denied" }); /// ^ See https://developers.google.com/tag-platform/gtagjs/reference#consent /// TODO: ask the user for their consent and update this with gtag('consent', 'update') } </script> <!-- Google Analytics v3 (deprecated) --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { (function (i, s, o, g, r, a, m) { i["GoogleAnalyticsObject"] = r; (i[r] = i[r] || function () { (i[r].q = i[r].q || []).push(arguments); }), (i[r].l = 1 * new Date()); (a = s.createElement(o)), (m = s.getElementsByTagName(o)[0]); a.async = 1; a.src = g; m.parentNode.insertBefore(a, m); })(window, document, "script", "https://www.google-analytics.com/analytics.js", "ganalytics"); ganalytics("create", "UA-83738774-2", "auto"); ganalytics("send", "pageview", "/docs/transformers/v4.34.0/en/model_doc/bert-japanese"); } </script> <iframe name="__privateStripeMetricsController3050" frameborder="0" allowtransparency="true" scrolling="no" role="presentation" allow="payment *" src="https://js.stripe.com/v3/m-outer-27c67c0d52761104439bb051c7856ab1.html#url=https%3A%2F%2Fhuggingface.co%2Fdocs%2Ftransformers%2Fv4.34.0%2Fen%2Fmodel_doc%2Fbert-japanese%23transformers.BertJapaneseTokenizer&amp;title=BertJapanese&amp;referrer=&amp;muid=38397bf3-d1df-433f-a1ab-3a999964eeba83e258&amp;sid=7a2cecc6-6b9a-4e4a-88b4-4bd8a189a43fe6315f&amp;version=6&amp;preview=false" aria-hidden="true" tabindex="-1" style="border: none !important; margin: 0px !important; padding: 0px !important; width: 1px !important; min-width: 100% !important; overflow: hidden !important; display: block !important; visibility: hidden !important; position: fixed !important; height: 1px !important; pointer-events: none !important; user-select: none !important;"></iframe></body></html>
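As a quick illustration of the special-token helpers above, here is a minimal sketch (it assumes the MeCab dependencies such as `fugashi` and `unidic-lite` are installed for the default word tokenizer; the checkpoint name is only an example):

```
>>> from transformers import BertJapaneseTokenizer

>>> tokenizer = BertJapaneseTokenizer.from_pretrained("cl-tohoku/bert-base-japanese")
>>> ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("こんにちは"))
>>> ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("元気ですか"))
>>> # [CLS] A [SEP] B [SEP]
>>> input_ids = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)
>>> # 0s cover [CLS] A [SEP], 1s cover B [SEP]
>>> token_type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)
```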
2023-10-05T13:33:51.519Z
CodeLlama
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/code_llama#transformers.CodeLlamaTokenizer
# CodeLlama ## Overview The Code Llama model was proposed in [Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) by Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve. The abstract from the paper is the following: _We release Code Llama, a family of large language models for code based on Llama 2 providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks. We provide multiple flavors to cover a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B and 34B parameters each. All models are trained on sequences of 16k tokens and show improvements on inputs with up to 100k tokens. 7B and 13B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content. Code Llama reaches state-of-the-art performance among open models on several code benchmarks, with scores of up to 53% and 55% on HumanEval and MBPP, respectively. Notably, Code Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our models outperform every other publicly available model on MultiPL-E. We release Code Llama under a permissive license that allows for both research and commercial use._ Check out all Code Llama models [here](https://huggingface.co/models?search=code_llama) and the officially released ones in the [codellama org](https://huggingface.co/codellama). The `Llama2` family models, on which Code Llama is based, were trained using `bfloat16`, but the original inference uses `float16`. Let’s look at the different precisions: - `float32`: PyTorch convention on model initialization is to load models in `float32`, no matter which `dtype` the model weights were stored in. `transformers` also follows this convention for consistency with PyTorch. This is the default. If you want the `AutoModel` API to load the checkpoints with the storage weights type, you must specify `torch_dtype="auto"`, e.g. `model = AutoModelForCausalLM.from_pretrained("path", torch_dtype="auto")`. - `bfloat16`: Code Llama was trained with this precision, so we recommend using it for further training or fine-tuning. - `float16`: We recommend running inference using this precision, as it’s usually faster than `bfloat16`, and evaluation metrics show no discernible degradation with respect to `bfloat16`. You can also run inference using `bfloat16`, and we recommend you check inference results with both `float16` and `bfloat16` after fine-tuning. As mentioned above, the `dtype` of the storage weights is mostly irrelevant unless you are using `torch_dtype="auto"` when initializing a model. The reason is that the model will first be downloaded (using the `dtype` of the checkpoints online) and then cast to the default `dtype` of `torch` (`torch.float32`). If there is a specified `torch_dtype`, it will be used instead.
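The snippet below is a minimal sketch of the three options above (the 7B checkpoint is used purely as an example; any Code Llama checkpoint works the same way):

```
>>> import torch
>>> from transformers import AutoModelForCausalLM

>>> # default: weights are loaded and upcast to torch.float32, whatever the storage dtype
>>> model = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf")

>>> # keep the dtype the weights were stored in (bfloat16 for Code Llama)
>>> model = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf", torch_dtype="auto")

>>> # explicitly request float16, the precision recommended for inference
>>> model = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf", torch_dtype=torch.float16)
```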
Tips: - These models have the same architecture as the `Llama2` models - The infilling task is supported out of the box. You should be using the `tokenizer.fill_token` where you want your input to be filled. - The model conversion script is the same as for the `Llama2` family: Here is a sample usage ``` python src/transformers/models/llama/convert_llama_weights_to_hf.py \ --input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path ``` Note that executing the script requires enough CPU RAM to host the whole model in float16 precision (even if the biggest versions come in several checkpoints they each contain a part of each weight of the model, so we need to load them all in RAM). - After conversion, the model and tokenizer can be loaded via: ``` >>> from transformers import LlamaForCausalLM, CodeLlamaTokenizer >>> tokenizer = CodeLlamaTokenizer.from_pretrained("codellama/CodeLlama-7b-hf") >>> model = LlamaForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf") >>> PROMPT = '''def remove_non_ascii(s: str) -> str: """ <FILL_ME> return result ''' >>> input_ids = tokenizer(PROMPT, return_tensors="pt")["input_ids"] >>> generated_ids = model.generate(input_ids, max_new_tokens=128) >>> filling = tokenizer.batch_decode(generated_ids[:, input_ids.shape[1]:], skip_special_tokens = True)[0] >>> print(PROMPT.replace("<FILL_ME>", filling)) def remove_non_ascii(s: str) -> str: """ Remove non-ASCII characters from a string. Args: s: The string to remove non-ASCII characters from. Returns: The string with non-ASCII characters removed. """ result = "" for c in s: if ord(c) < 128: result += c return result ``` If you only want the infilled part: ``` >>> from transformers import pipeline >>> import torch >>> generator = pipeline("text-generation",model="codellama/CodeLlama-7b-hf",torch_dtype=torch.float16, device_map="auto") >>> generator('def remove_non_ascii(s: str) -> str:\n """ <FILL_ME>\n return result', max_new_tokens = 128, return_type = 1) ``` Under the hood, the tokenizer [automatically splits by `<FILL_ME>`](https://huggingface.co/docs/transformers/main/model_doc/code_llama#transformers.CodeLlamaTokenizer.fill_token) to create a formatted input string that follows [the original training pattern](https://github.com/facebookresearch/codellama/blob/cb51c14ec761370ba2e2bc351374a79265d0465e/llama/generation.py#L402). This is more robust than preparing the pattern yourself: it avoids pitfalls, such as token glueing, that are very hard to debug. To see how much CPU and GPU memory you need for this model or others, try [this calculator](https://huggingface.co/spaces/hf-accelerate/model-memory-usage) which can help determine that value. - The LLaMA tokenizer is a BPE model based on [sentencepiece](https://github.com/google/sentencepiece). One quirk of sentencepiece is that when decoding a sequence, if the first token is the start of the word (e.g. “Banana”), the tokenizer does not prepend the prefix space to the string. This model was contributed by [ArthurZucker](https://huggingface.co/ArthurZ). The original code of the authors can be found [here](https://github.com/facebookresearch/llama). 
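As a minimal, hedged sketch of that splitting behaviour, you can inspect the tokens the slow tokenizer produces when `fill_token` appears in the prompt (the prompt string is only an illustration):

```
>>> from transformers import CodeLlamaTokenizer

>>> tokenizer = CodeLlamaTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")
>>> prompt = 'def remove_non_ascii(s: str) -> str:\n    """ <FILL_ME>\n    return result'
>>> tokens = tokenizer.tokenize(prompt)
>>> # with the default suffix_first=False, the prompt is rearranged as: <PRE> prefix <SUF> suffix <MID>
>>> [t for t in tokens if t in (tokenizer.prefix_token, tokenizer.suffix_token, tokenizer.middle_token)]
```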
## CodeLlamaTokenizer ### class transformers.CodeLlamaTokenizer [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/code_llama/tokenization_code_llama.py#L59) ( vocab\_fileunk\_token = '<unk>'bos\_token = '<s>'eos\_token = '</s>'prefix\_token = '▁<PRE>'middle\_token = '▁<MID>'suffix\_token = '▁<SUF>'eot\_token = '▁<EOT>'fill\_token = '<FILL\_ME>'suffix\_first = Falsesp\_model\_kwargs: typing.Union\[typing.Dict\[str, typing.Any\], NoneType\] = Noneadd\_bos\_token = Trueadd\_eos\_token = Falseclean\_up\_tokenization\_spaces = Falseadditional\_special\_tokens = Noneuse\_default\_system\_prompt = False\*\*kwargs ) Construct a CodeLlama tokenizer. Based on byte-level Byte-Pair-Encoding. The default padding token is unset as there is no padding token in the original model. The default configuration matches that of [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf/blob/main/tokenizer_config.json) which supports prompt infilling. #### build\_inputs\_with\_special\_tokens [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/code_llama/tokenization_code_llama.py#L361) ( token\_ids\_0token\_ids\_1 = None ) #### get\_special\_tokens\_mask [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/code_llama/tokenization_code_llama.py#L373) ( token\_ids\_0: typing.List\[int\]token\_ids\_1: typing.Optional\[typing.List\[int\]\] = Nonealready\_has\_special\_tokens: bool = False ) → `List[int]` Parameters - **token\_ids\_0** (`List[int]`) — List of IDs. - **token\_ids\_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs. - **already\_has\_special\_tokens** (`bool`, _optional_, defaults to `False`) — Whether or not the token list is already formatted with special tokens for the model. A list of integers in the range \[0, 1\]: 1 for a special token, 0 for a sequence token. Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer `prepare_for_model` method. #### create\_token\_type\_ids\_from\_sequences [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/code_llama/tokenization_code_llama.py#L411) ( token\_ids\_0: typing.List\[int\]token\_ids\_1: typing.Optional\[typing.List\[int\]\] = None ) → `List[int]` Parameters - **token\_ids\_0** (`List[int]`) — List of IDs. - **token\_ids\_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs. List of [token type IDs](../glossary#token-type-ids) according to the given sequence(s). Creates a mask from the two sequences passed to be used in a sequence-pair classification task. A sequence pair mask has the following format: ``` 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 | first sequence | second sequence | ``` If `token_ids_1` is `None`, only returns the first portion of the mask (0s). #### save\_vocabulary [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/code_llama/tokenization_code_llama.py#L333) ( save\_directoryfilename\_prefix: typing.Optional\[str\] = None ) → `Tuple(str)` Parameters - **save\_directory** (`str`) — The directory in which to save the vocabulary. Paths to the files saved. Save the vocabulary and special tokens file to a directory.
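For example, `get_special_tokens_mask()` documented above can be used to locate the special tokens in an already-encoded sequence; a minimal sketch (the checkpoint name is only an example):

```
>>> from transformers import CodeLlamaTokenizer

>>> tokenizer = CodeLlamaTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")
>>> ids = tokenizer("def foo():")["input_ids"]
>>> # 1 marks special tokens (here the leading <s> added by add_bos_token=True), 0 marks regular tokens
>>> tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True)
```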
## CodeLlamaTokenizerFast ### class transformers.CodeLlamaTokenizerFast [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/code_llama/tokenization_code_llama_fast.py#L52) ( vocab\_file = Nonetokenizer\_file = Noneclean\_up\_tokenization\_spaces = Falseunk\_token = '<unk>'bos\_token = '<s>'eos\_token = '</s>'prefix\_token = '▁<PRE>'middle\_token = '▁<MID>'suffix\_token = '▁<SUF>'eot\_token = '▁<EOT>'fill\_token = '<FILL\_ME>'additional\_special\_tokens = Noneadd\_bos\_token = Trueadd\_eos\_token = Falseuse\_default\_system\_prompt = False\*\*kwargs ) Construct a Llama tokenizer. Based on byte-level Byte-Pair-Encoding. This uses notably ByteFallback and no normalization. ``` >>> from transformers import CodeLlamaTokenizerFast >>> tokenizer = CodeLlamaTokenizerFast.from_pretrained("hf-internal-testing/llama-tokenizer") >>> tokenizer.encode("Hello this is a test") [1, 15043, 445, 338, 263, 1243] ``` If you want to change the `bos_token` or the `eos_token`, make sure to specify them when initializing the model, or call `tokenizer.update_post_processor()` to make sure that the post-processing is correctly done (otherwise the values of the first token and final token of an encoded sequence will not be correct). For more details, checkout \[post-processors\] ([https://huggingface.co/docs/tokenizers/api/post-processors](https://huggingface.co/docs/tokenizers/api/post-processors)) documentation. This tokenizer inherits from [PreTrainedTokenizerFast](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast) which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. The default configuration match that of [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf/blob/main/tokenizer_config.json) which supports prompt infilling. #### build\_inputs\_with\_special\_tokens [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/code_llama/tokenization_code_llama_fast.py#L396) ( token\_ids\_0: typing.List\[int\]token\_ids\_1: typing.Optional\[typing.List\[int\]\] = None ) → `List[int]` Parameters - **token\_ids\_0** (`List[int]`) — List of IDs to which the special tokens will be added. - **token\_ids\_1** (`List[int]`, _optional_) — Optional second list of IDs for sequence pairs. list of [input IDs](../glossary#input-ids) with the appropriate special tokens. Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. The special tokens depend on calling set\_lang. An NLLB sequence has the following format, where `X` represents the sequence: - `input_ids` (for encoder) `X [eos, src_lang_code]` - `decoder_input_ids`: (for decoder) `X [eos, tgt_lang_code]` BOS is never used. Pairs of sequences are not the expected use case, but they will be handled without a separator. #### get\_special\_tokens\_mask [< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/tokenization_utils_base.py#L3770) ( token\_ids\_0: typing.List\[int\]token\_ids\_1: typing.Optional\[typing.List\[int\]\] = Nonealready\_has\_special\_tokens: bool = False ) → A list of integers in the range \[0, 1\] Parameters - **token\_ids\_0** (`List[int]`) — List of ids of the first sequence. - **token\_ids\_1** (`List[int]`, _optional_) — List of ids of the second sequence. 
- **already\_has\_special\_tokens** (`bool`, _optional_, defaults to `False`) — Whether or not the token list is already formatted with special tokens for the model.

Returns a list of integers in the range \[0, 1\]: 1 for a special token, 0 for a sequence token.

Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer `prepare_for_model` or `encode_plus` methods.

#### create\_token\_type\_ids\_from\_sequences

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/tokenization_utils_base.py#L3305)

( token\_ids\_0: typing.List\[int\], token\_ids\_1: typing.Optional\[typing.List\[int\]\] = None ) → `List[int]`

Parameters

- **token\_ids\_0** (`List[int]`) — The first tokenized sequence.
- **token\_ids\_1** (`List[int]`, _optional_) — The second tokenized sequence.

Returns the token type ids.

Create the token type IDs corresponding to the sequences passed. [What are token type IDs?](../glossary#token-type-ids) Should be overridden in a subclass if the model has a special way of building those.

#### update\_post\_processor

Updates the underlying post-processor with the current `bos_token` and `eos_token`.

#### save\_vocabulary

[< source \>](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/code_llama/tokenization_code_llama_fast.py#L325)

( save\_directory: str, filename\_prefix: typing.Optional\[str\] = None )
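As a short sketch of the post-processor note above (reusing the `hf-internal-testing/llama-tokenizer` checkpoint from the earlier example), the BOS/EOS handling of `encode()` is driven by the template post-processor, which `update_post_processor()` rebuilds from the current `bos_token`, `eos_token`, `add_bos_token` and `add_eos_token` values. This is an illustrative example under those assumptions, not the only supported workflow:

```
>>> from transformers import CodeLlamaTokenizerFast

>>> tokenizer = CodeLlamaTokenizerFast.from_pretrained(
...     "hf-internal-testing/llama-tokenizer", add_bos_token=True, add_eos_token=True
... )

>>> # after changing bos_token/eos_token (or the add_*_token flags) on an existing
>>> # tokenizer, refresh the post-processor so encoding picks up the new values
>>> tokenizer.update_post_processor()

>>> ids = tokenizer.encode("Hello this is a test")
>>> # with the settings above, the first id should be bos_token_id and the last eos_token_id
>>> print(ids[0] == tokenizer.bos_token_id, ids[-1] == tokenizer.eos_token_id)
```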
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/
model_doc/wavlm&quot;},{&quot;title&quot;:&quot;Whisper&quot;,&quot;id&quot;:&quot;model_doc/whisper&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/whisper&quot;},{&quot;title&quot;:&quot;XLS-R&quot;,&quot;id&quot;:&quot;model_doc/xls_r&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xls_r&quot;},{&quot;title&quot;:&quot;XLSR-Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/xlsr_wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2&quot;}]},{&quot;title&quot;:&quot;Multimodal models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALIGN&quot;,&quot;id&quot;:&quot;model_doc/align&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/align&quot;},{&quot;title&quot;:&quot;AltCLIP&quot;,&quot;id&quot;:&quot;model_doc/altclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/altclip&quot;},{&quot;title&quot;:&quot;BLIP&quot;,&quot;id&quot;:&quot;model_doc/blip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip&quot;},{&quot;title&quot;:&quot;BLIP-2&quot;,&quot;id&quot;:&quot;model_doc/blip-2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip-2&quot;},{&quot;title&quot;:&quot;BridgeTower&quot;,&quot;id&quot;:&quot;model_doc/bridgetower&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bridgetower&quot;},{&quot;title&quot;:&quot;BROS&quot;,&quot;id&quot;:&quot;model_doc/bros&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bros&quot;},{&quot;title&quot;:&quot;Chinese-CLIP&quot;,&quot;id&quot;:&quot;model_doc/chinese_clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/chinese_clip&quot;},{&quot;title&quot;:&quot;CLIP&quot;,&quot;id&quot;:&quot;model_doc/clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clip&quot;},{&quot;title&quot;:&quot;CLIPSeg&quot;,&quot;id&quot;:&quot;model_doc/clipseg&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clipseg&quot;},{&quot;title&quot;:&quot;Data2Vec&quot;,&quot;id&quot;:&quot;model_doc/data2vec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/data2vec&quot;},{&quot;title&quot;:&quot;DePlot&quot;,&quot;id&quot;:&quot;model_doc/deplot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deplot&quot;},{&quot;title&quot;:&quot;Donut&quot;,&quot;id&quot;:&quot;model_doc/donut&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/donut&quot;},{&quot;title&quot;:&quot;FLAVA&quot;,&quot;id&quot;:&quot;model_doc/flava&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flava&quot;},{&quot;title&quot;:&quot;GIT&quot;,&quot;id&quot;:&quot;model_doc/git&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/git&quot;},{&quot;title&quot;:&quot;GroupViT&quot;,&quot;id&quot;:&quot;model_doc/groupvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/groupvit&quot;},{&quot;title&quot;:&quot;IDEFICS&quot;,&quot;id&quot;:&quot;model_doc/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/idefics&quot;},{&quot;title&quot;:&quot;InstructBLIP&quot;,&quot;id&quot;:&quot;model_doc/instructblip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/instructblip&quot;},{&quot;title&quot;:&quot;LayoutLM&quot;,&quot;id&quot;:&quot;model_doc/layoutlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlm&quot;},{&quot;title&quot;:&quot;LayoutLMV2&quot;,&quot;i
d&quot;:&quot;model_doc/layoutlmv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv2&quot;},{&quot;title&quot;:&quot;LayoutLMV3&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv3&quot;},{&quot;title&quot;:&quot;LayoutXLM&quot;,&quot;id&quot;:&quot;model_doc/layoutxlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutxlm&quot;},{&quot;title&quot;:&quot;LiLT&quot;,&quot;id&quot;:&quot;model_doc/lilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lilt&quot;},{&quot;title&quot;:&quot;LXMERT&quot;,&quot;id&quot;:&quot;model_doc/lxmert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lxmert&quot;},{&quot;title&quot;:&quot;MatCha&quot;,&quot;id&quot;:&quot;model_doc/matcha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/matcha&quot;},{&quot;title&quot;:&quot;MGP-STR&quot;,&quot;id&quot;:&quot;model_doc/mgp-str&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mgp-str&quot;},{&quot;title&quot;:&quot;Nougat&quot;,&quot;id&quot;:&quot;model_doc/nougat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nougat&quot;},{&quot;title&quot;:&quot;OneFormer&quot;,&quot;id&quot;:&quot;model_doc/oneformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/oneformer&quot;},{&quot;title&quot;:&quot;OWL-ViT&quot;,&quot;id&quot;:&quot;model_doc/owlvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/owlvit&quot;},{&quot;title&quot;:&quot;Perceiver&quot;,&quot;id&quot;:&quot;model_doc/perceiver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/perceiver&quot;},{&quot;title&quot;:&quot;Pix2Struct&quot;,&quot;id&quot;:&quot;model_doc/pix2struct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pix2struct&quot;},{&quot;title&quot;:&quot;Segment Anything&quot;,&quot;id&quot;:&quot;model_doc/sam&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sam&quot;},{&quot;title&quot;:&quot;Speech Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/speech-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder&quot;},{&quot;title&quot;:&quot;TAPAS&quot;,&quot;id&quot;:&quot;model_doc/tapas&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapas&quot;},{&quot;title&quot;:&quot;TrOCR&quot;,&quot;id&quot;:&quot;model_doc/trocr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trocr&quot;},{&quot;title&quot;:&quot;TVLT&quot;,&quot;id&quot;:&quot;model_doc/tvlt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tvlt&quot;},{&quot;title&quot;:&quot;ViLT&quot;,&quot;id&quot;:&quot;model_doc/vilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vilt&quot;},{&quot;title&quot;:&quot;Vision Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/vision-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder&quot;},{&quot;title&quot;:&quot;Vision Text Dual 
Encoder&quot;,&quot;id&quot;:&quot;model_doc/vision-text-dual-encoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder&quot;},{&quot;title&quot;:&quot;VisualBERT&quot;,&quot;id&quot;:&quot;model_doc/visual_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/visual_bert&quot;},{&quot;title&quot;:&quot;X-CLIP&quot;,&quot;id&quot;:&quot;model_doc/xclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xclip&quot;}]},{&quot;title&quot;:&quot;Reinforcement learning models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Decision Transformer&quot;,&quot;id&quot;:&quot;model_doc/decision_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/decision_transformer&quot;},{&quot;title&quot;:&quot;Trajectory Transformer&quot;,&quot;id&quot;:&quot;model_doc/trajectory_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer&quot;}]},{&quot;title&quot;:&quot;Time series models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Autoformer&quot;,&quot;id&quot;:&quot;model_doc/autoformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/autoformer&quot;},{&quot;title&quot;:&quot;Informer&quot;,&quot;id&quot;:&quot;model_doc/informer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/informer&quot;},{&quot;title&quot;:&quot;Time Series Transformer&quot;,&quot;id&quot;:&quot;model_doc/time_series_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/time_series_transformer&quot;}]},{&quot;title&quot;:&quot;Graph models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Graphormer&quot;,&quot;id&quot;:&quot;model_doc/graphormer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/graphormer&quot;}]}]},{&quot;title&quot;:&quot;Internal Helpers&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Custom Layers and Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/modeling_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/modeling_utils&quot;},{&quot;title&quot;:&quot;Utilities for pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/pipelines_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/pipelines_utils&quot;},{&quot;title&quot;:&quot;Utilities for Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/tokenization_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/tokenization_utils&quot;},{&quot;title&quot;:&quot;Utilities for Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/trainer_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/trainer_utils&quot;},{&quot;title&quot;:&quot;Utilities for Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/generation_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/generation_utils&quot;},{&quot;title&quot;:&quot;Utilities for Image Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/image_processing_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/image_processing_utils&quot;},{&quot;title&quot;:&quot;Utilities for Audio 
processing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/audio_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/audio_utils&quot;},{&quot;title&quot;:&quot;General Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/file_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/file_utils&quot;},{&quot;title&quot;:&quot;Utilities for Time Series&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/time_series_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/time_series_utils&quot;}]}]}],&quot;chapterId&quot;:&quot;model_doc/code_llama&quot;,&quot;docType&quot;:&quot;docs&quot;,&quot;isLoggedIn&quot;:false,&quot;lang&quot;:&quot;en&quot;,&quot;langs&quot;:[&quot;de&quot;,&quot;en&quot;,&quot;es&quot;,&quot;fr&quot;,&quot;it&quot;,&quot;ko&quot;,&quot;pt&quot;,&quot;zh&quot;],&quot;library&quot;:&quot;transformers&quot;,&quot;theme&quot;:&quot;light&quot;,&quot;version&quot;:&quot;v4.34.0&quot;,&quot;versions&quot;:[{&quot;version&quot;:&quot;main&quot;},{&quot;version&quot;:&quot;v4.34.0&quot;},{&quot;version&quot;:&quot;v4.33.3&quot;},{&quot;version&quot;:&quot;v4.33.2&quot;},{&quot;version&quot;:&quot;v4.33.0&quot;},{&quot;version&quot;:&quot;v4.32.1&quot;},{&quot;version&quot;:&quot;v4.32.0&quot;},{&quot;version&quot;:&quot;v4.31.0&quot;},{&quot;version&quot;:&quot;v4.30.0&quot;},{&quot;version&quot;:&quot;v4.29.1&quot;},{&quot;version&quot;:&quot;v4.29.0&quot;},{&quot;version&quot;:&quot;v4.28.1&quot;},{&quot;version&quot;:&quot;v4.28.0&quot;},{&quot;version&quot;:&quot;v4.27.2&quot;},{&quot;version&quot;:&quot;v4.27.1&quot;},{&quot;version&quot;:&quot;v4.27.0&quot;},{&quot;version&quot;:&quot;v4.26.1&quot;},{&quot;version&quot;:&quot;v4.26.0&quot;},{&quot;version&quot;:&quot;v4.25.1&quot;},{&quot;version&quot;:&quot;v4.24.0&quot;},{&quot;version&quot;:&quot;v4.23.1&quot;},{&quot;version&quot;:&quot;v4.23.0&quot;},{&quot;version&quot;:&quot;v4.22.2&quot;},{&quot;version&quot;:&quot;v4.22.1&quot;},{&quot;version&quot;:&quot;v4.22.0&quot;},{&quot;version&quot;:&quot;v4.21.3&quot;},{&quot;version&quot;:&quot;v4.21.2&quot;},{&quot;version&quot;:&quot;v4.21.1&quot;},{&quot;version&quot;:&quot;v4.21.0&quot;},{&quot;version&quot;:&quot;v4.20.1&quot;},{&quot;version&quot;:&quot;v4.20.0&quot;},{&quot;version&quot;:&quot;v4.19.4&quot;},{&quot;version&quot;:&quot;v4.19.3&quot;},{&quot;version&quot;:&quot;v4.19.2&quot;},{&quot;version&quot;:&quot;v4.19.0&quot;},{&quot;version&quot;:&quot;v4.18.0&quot;},{&quot;version&quot;:&quot;v4.17.0&quot;},{&quot;version&quot;:&quot;v4.16.2&quot;},{&quot;version&quot;:&quot;v4.16.1&quot;},{&quot;version&quot;:&quot;v4.16.0&quot;},{&quot;version&quot;:&quot;v4.15.0&quot;},{&quot;version&quot;:&quot;v4.14.1&quot;},{&quot;version&quot;:&quot;v4.13.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.5&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.4&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.10.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot
;v4.10.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.10.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.0.0&quot;},{&quot;version&quot;:&quot;
doc-builder-html&quot;}],&quot;title&quot;:&quot;CodeLlama&quot;}" data-target="SideMenu"> <div class="z-2 w-full flex-none lg:block lg:h-screen lg:w-[270px] 2xl:w-[300px] false"><div class="shadow-alternate flex h-16 w-full items-center rounded-b-xl border-b bg-white text-lg leading-tight lg:hidden"><div class="flex flex-1 cursor-pointer flex-col justify-center self-stretch pl-6"><p class="text-sm text-gray-400 first-letter:capitalize">Transformers documentation</p> <div class="flex items-center"><p class="font-semibold">CodeLlama</p> <svg class="text-xl false" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></div></div> <button class="hover:shadow-alternate group ml-auto mr-6 inline-flex flex-none cursor-pointer rounded-xl border p-2"><svg class="text-gray-500 group-hover:text-gray-700" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg></button></div> <div class="hidden h-32 flex-col justify-between border-r border-b bg-white bg-gradient-to-r p-4 lg:flex from-orange-50 to-white dark:from-gray-900 dark:to-gray-950"><div class="relative "><button class=" " type="button"><h1 class="flex items-center text-lg font-bold leading-tight first-letter:capitalize"><div class="mr-1.5 h-1.5 w-1.5 rounded-full bg-orange-500 flex-none"></div> Transformers <span><svg class="opacity-70 " xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></span></h1> </button> </div> <button class="shadow-alternate flex w-full items-center rounded-full border bg-white px-2 py-1 text-left text-sm text-gray-400 ring-indigo-200 hover:bg-indigo-50 hover:ring-2 dark:border-gray-700 dark:ring-yellow-600 dark:hover:bg-gray-900 dark:hover:text-yellow-500"><svg class="flex-none mr-1.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg> <div>Search documentation</div> <span class="ml-auto rounded border border-gray-200 bg-gray-100 px-0.5 text-xs dark:border-gray-800 dark:bg-gray-800"><kbd class="font-sans">⌘K</kbd></span></button> <div class="flex items-center"><select class="form-input mr-1 !mt-0 !w-20 rounded !border border-gray-200 p-1 text-xs uppercase dark:!text-gray-400"><option value="0">main</option><option value="1">v4.34.0</option><option value="2">v4.33.3</option><option value="3">v4.32.1</option><option value="4">v4.31.0</option><option value="5">v4.30.0</option><option value="6">v4.29.1</option><option value="7">v4.28.1</option><option value="8">v4.27.2</option><option value="9">v4.26.1</option><option value="10">v4.25.1</option><option 
value="11">v4.24.0</option><option value="12">v4.23.1</option><option value="13">v4.22.2</option><option value="14">v4.21.3</option><option value="15">v4.20.1</option><option value="16">v4.19.4</option><option value="17">v4.18.0</option><option value="18">v4.17.0</option><option value="19">v4.16.2</option><option value="20">v4.15.0</option><option value="21">v4.14.1</option><option value="22">v4.13.0</option><option value="23">v4.12.5</option><option value="24">v4.11.3</option><option value="25">v4.10.1</option><option value="26">v4.9.2</option><option value="27">v4.8.2</option><option value="28">v4.7.0</option><option value="29">v4.6.0</option><option value="30">v4.5.1</option><option value="31">v4.4.2</option><option value="32">v4.3.3</option><option value="33">v4.2.2</option><option value="34">v4.1.1</option><option value="35">v4.0.1</option><option value="36">v3.5.1</option><option value="37">v3.4.0</option><option value="38">v3.3.1</option><option value="39">v3.2.0</option><option value="40">v3.1.0</option><option value="41">v3.0.2</option><option value="42">v2.11.0</option><option value="43">v2.10.0</option><option value="44">v2.9.1</option><option value="45">v2.8.0</option><option value="46">v2.7.0</option><option value="47">v2.6.0</option><option value="48">v2.5.1</option><option value="49">v2.4.1</option><option value="50">v2.3.0</option><option value="51">v2.2.2</option><option value="52">v2.1.1</option><option value="53">v2.0.0</option><option value="54">v1.2.0</option><option value="55">v1.1.0</option><option value="56">v1.0.0</option><option value="57">doc-builder-html</option></select> <select class="form-input mr-1 rounded border-gray-200 p-1 text-xs dark:!text-gray-400 !w-12 !mt-0 !border"><option value="de">DE</option><option value="en">EN</option><option value="es">ES</option><option value="fr">FR</option><option value="it">IT</option><option value="ko">KO</option><option value="pt">PT</option><option value="zh">ZH</option></select> <div class="relative inline-block"><button class="rounded-full border border-gray-100 py-1 pl-2 pr-0.5 flex items-center text-sm text-gray-500 bg-white hover:bg-yellow-50 hover:border-yellow-200 dark:hover:bg-gray-800 dark:hover:border-gray-950 " type="button"><svg class="mr-1.5 text-yellow-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24" fill="currentColor"><path d="M6.05 4.14l-.39-.39a.993.993 0 0 0-1.4 0l-.01.01a.984.984 0 0 0 0 1.4l.39.39c.39.39 1.01.39 1.4 0l.01-.01a.984.984 0 0 0 0-1.4zM3.01 10.5H1.99c-.55 0-.99.44-.99.99v.01c0 .55.44.99.99.99H3c.56.01 1-.43 1-.98v-.01c0-.56-.44-1-.99-1zm9-9.95H12c-.56 0-1 .44-1 .99v.96c0 .55.44.99.99.99H12c.56.01 1-.43 1-.98v-.97c0-.55-.44-.99-.99-.99zm7.74 3.21c-.39-.39-1.02-.39-1.41-.01l-.39.39a.984.984 0 0 0 0 1.4l.01.01c.39.39 1.02.39 1.4 0l.39-.39a.984.984 0 0 0 0-1.4zm-1.81 15.1l.39.39a.996.996 0 1 0 1.41-1.41l-.39-.39a.993.993 0 0 0-1.4 0c-.4.4-.4 1.02-.01 1.41zM20 11.49v.01c0 .55.44.99.99.99H22c.55 0 .99-.44.99-.99v-.01c0-.55-.44-.99-.99-.99h-1.01c-.55 0-.99.44-.99.99zM12 5.5c-3.31 0-6 2.69-6 6s2.69 6 6 6s6-2.69 6-6s-2.69-6-6-6zm-.01 16.95H12c.55 0 .99-.44.99-.99v-.96c0-.55-.44-.99-.99-.99h-.01c-.55 0-.99.44-.99.99v.96c0 .55.44.99.99.99zm-7.74-3.21c.39.39 1.02.39 1.41 0l.39-.39a.993.993 0 0 0 0-1.4l-.01-.01a.996.996 0 0 0-1.41 0l-.39.39c-.38.4-.38 1.02.01 1.41z"></path></svg> </button> </div> <a 
href="https://github.com/huggingface/transformers" class="group ml-auto text-xs text-gray-500 hover:text-gray-700 hover:underline dark:hover:text-gray-300"><svg class="inline-block text-gray-500 group-hover:text-gray-700 dark:group-hover:text-gray-300 mr-1.5 -mt-1 w-4 h-4" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1.03em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 250"><path d="M128.001 0C57.317 0 0 57.307 0 128.001c0 56.554 36.676 104.535 87.535 121.46c6.397 1.185 8.746-2.777 8.746-6.158c0-3.052-.12-13.135-.174-23.83c-35.61 7.742-43.124-15.103-43.124-15.103c-5.823-14.795-14.213-18.73-14.213-18.73c-11.613-7.944.876-7.78.876-7.78c12.853.902 19.621 13.19 19.621 13.19c11.417 19.568 29.945 13.911 37.249 10.64c1.149-8.272 4.466-13.92 8.127-17.116c-28.431-3.236-58.318-14.212-58.318-63.258c0-13.975 5-25.394 13.188-34.358c-1.329-3.224-5.71-16.242 1.24-33.874c0 0 10.749-3.44 35.21 13.121c10.21-2.836 21.16-4.258 32.038-4.307c10.878.049 21.837 1.47 32.066 4.307c24.431-16.56 35.165-13.12 35.165-13.12c6.967 17.63 2.584 30.65 1.255 33.873c8.207 8.964 13.173 20.383 13.173 34.358c0 49.163-29.944 59.988-58.447 63.157c4.591 3.972 8.682 11.762 8.682 23.704c0 17.126-.148 30.91-.148 35.126c0 3.407 2.304 7.398 8.792 6.14C219.37 232.5 256 184.537 256 128.002C256 57.307 198.691 0 128.001 0zm-80.06 182.34c-.282.636-1.283.827-2.194.39c-.929-.417-1.45-1.284-1.15-1.922c.276-.655 1.279-.838 2.205-.399c.93.418 1.46 1.293 1.139 1.931zm6.296 5.618c-.61.566-1.804.303-2.614-.591c-.837-.892-.994-2.086-.375-2.66c.63-.566 1.787-.301 2.626.591c.838.903 1 2.088.363 2.66zm4.32 7.188c-.785.545-2.067.034-2.86-1.104c-.784-1.138-.784-2.503.017-3.05c.795-.547 2.058-.055 2.861 1.075c.782 1.157.782 2.522-.019 3.08zm7.304 8.325c-.701.774-2.196.566-3.29-.49c-1.119-1.032-1.43-2.496-.726-3.27c.71-.776 2.213-.558 3.315.49c1.11 1.03 1.45 2.505.701 3.27zm9.442 2.81c-.31 1.003-1.75 1.459-3.199 1.033c-1.448-.439-2.395-1.613-2.103-2.626c.301-1.01 1.747-1.484 3.207-1.028c1.446.436 2.396 1.602 2.095 2.622zm10.744 1.193c.036 1.055-1.193 1.93-2.715 1.95c-1.53.034-2.769-.82-2.786-1.86c0-1.065 1.202-1.932 2.733-1.958c1.522-.03 2.768.818 2.768 1.868zm10.555-.405c.182 1.03-.875 2.088-2.387 2.37c-1.485.271-2.861-.365-3.05-1.386c-.184-1.056.893-2.114 2.376-2.387c1.514-.263 2.868.356 3.061 1.403z" fill="currentColor"></path></svg> 112,792</a></div></div> <nav class="top-32 hidden lg:flex absolute bottom-0 left-0 w-full flex-col overflow-y-auto border-r px-4 pt-3 pb-16 text-[0.95rem] lg:w-[270px] 2xl:w-[300px]"> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Get started</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/index">🤗 Transformers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/quicktour">Quick tour </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 
first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/installation">Installation </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Tutorials</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pipeline_tutorial">Run inference with pipelines </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/autoclass_tutorial">Write portable code with AutoClass </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/preprocessing">Preprocess data </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/training">Fine-tune a pretrained model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/run_scripts">Train with a script </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/accelerate">Set up distributed training with 🤗 Accelerate </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/peft">Load and train adapters with 🤗 PEFT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/model_sharing">Share your model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/transformers_agents">Agents </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/llm_tutorial">Generation with LLMs </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Task Guides</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 
hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Natural Language Processing</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Computer Vision</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Generation</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Prompting</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Developer guides</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/fast_tokenizers">Use fast tokenizers from 🤗 Tokenizers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/multilingual">Run inference with multilingual models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/create_a_model">Use model-specific APIs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/custom_models">Share a custom model </a><a data-sveltekit-reload="" 
class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/chat_templating">Templates for chat models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/sagemaker">Run training on Amazon SageMaker </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/serialization">Export to ONNX </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tflite">Export to TFLite </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/torchscript">Export to TorchScript </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/benchmarks">Benchmarks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/notebooks">Notebooks with examples </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/community">Community resources </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/custom_tools">Custom Tools and Prompts </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/troubleshooting">Troubleshoot </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Performance and scalability</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/performance">Overview </a><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Efficient training techniques</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 
hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_gpu_one">Methods and tools for efficient training on a single GPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_gpu_many">Multiple GPUs and parallelism </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_cpu">Efficient training on CPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_cpu_many">Distributed CPU training </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_tpu">Training on TPUs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_tpu_tf">Training on TPU with TensorFlow </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_special">Training on Specialized Hardware </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_hardware">Custom hardware for training </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/hpo_train">Hyperparameter Search using Trainer API </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Optimizing inference</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_cpu">Inference on CPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_gpu_one">Inference on one GPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_gpu_many">Inference on many GPUs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" 
href="/docs/transformers/v4.34.0/en/perf_infer_special">Inference on Specialized Hardware </a> </div><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/big_models">Instantiating a big model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/debugging">Troubleshooting </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tf_xla">XLA Integration for TensorFlow Models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/perf_torch_compile">Optimize inference using `torch.compile()` </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Contribute</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/contributing">How to contribute to transformers? </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/add_new_model">How to add a model to 🤗 Transformers? </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/add_tensorflow_model">How to convert a 🤗 Transformers model to TensorFlow? </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/add_new_pipeline">How to add a pipeline to 🤗 Transformers? 
</a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/testing">Testing </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pr_checks">Checks on a Pull Request </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Conceptual guides</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/philosophy">Philosophy </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/glossary">Glossary </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/task_summary">What 🤗 Transformers can do </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tasks_explained">How 🤗 Transformers solve tasks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/model_summary">The Transformer model family </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tokenizer_summary">Summary of the tokenizers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/attention">Attention mechanisms </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pad_truncation">Padding and truncation </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/bertology">BERTology </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/perplexity">Perplexity of fixed-length models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pipeline_webserver">Pipelines for webserver inference </a><a 
data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/model_memory_anatomy">Model training anatomy </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>API</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Main Classes</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/agent">Agents and Tools </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/model_doc/auto">Auto Classes </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/callback">Callbacks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/configuration">Configuration </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/data_collator">Data Collator </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/keras_callbacks">Keras callbacks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/logging">Logging </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/model">Models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/text_generation">Text Generation </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/onnx">ONNX </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 
first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/optimizer_schedules">Optimization </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/output">Model outputs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/pipelines">Pipelines </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/processors">Processors </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/quantization">Quantization </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/tokenizer">Tokenizer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/trainer">Trainer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/deepspeed">DeepSpeed Integration </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/feature_extractor">Feature Extractor </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/image_processor">Image Processor </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Models</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Text models</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/albert">ALBERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px 
hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bart">BART </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/barthez">BARThez </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bartpho">BARTpho </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bert">BERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bert-generation">BertGeneration </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bert-japanese">BertJapanese </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bertweet">Bertweet </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/big_bird">BigBird </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bigbird_pegasus">BigBirdPegasus </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/biogpt">BioGpt </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/blenderbot">Blenderbot </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/blenderbot-small">Blenderbot Small </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bloom">BLOOM </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/bort">BORT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/byt5">ByT5 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" 
href="/docs/transformers/v4.34.0/en/model_doc/camembert">CamemBERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/canine">CANINE </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/codegen">CodeGen </a><a data-sveltekit-reload="" class="rounded-xl bg-gradient-to-br from-black to-gray-900 py-1 pr-2 pl-2 text-white first:mt-1 last:mb-4 dark:from-gray-800 dark:to-gray-900 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/code_llama">CodeLlama </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/convbert">ConvBERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/cpm">CPM </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/cpmant">CPMANT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/ctrl">CTRL </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/deberta">DeBERTa </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/deberta-v2">DeBERTa-v2 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/dialogpt">DialoGPT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/distilbert">DistilBERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/dpr">DPR </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/electra">ELECTRA </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/encoder-decoder">Encoder Decoder Models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" 
href="/docs/transformers/v4.34.0/en/model_doc/ernie">ERNIE </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/ernie_m">ErnieM </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/esm">ESM </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/falcon">Falcon </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/flan-t5">FLAN-T5 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/flan-ul2">FLAN-UL2 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/flaubert">FlauBERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/fnet">FNet </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/fsmt">FSMT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/funnel">Funnel Transformer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/openai-gpt">GPT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gpt_neo">GPT Neo </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gpt_neox">GPT NeoX </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese">GPT NeoX Japanese </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gptj">GPT-J </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gpt2">GPT2 </a><a 
data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode">GPTBigCode </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese">GPTSAN Japanese </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/gpt-sw3">GPTSw3 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/herbert">HerBERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/ibert">I-BERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/jukebox">Jukebox </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/led">LED </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/llama">LLaMA </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/llama2">Llama2 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/longformer">Longformer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/longt5">LongT5 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/luke">LUKE </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/m2m_100">M2M100 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/marian">MarianMT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/markuplm">MarkupLM </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 
last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mbart">MBart and MBart-50 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mega">MEGA </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/megatron-bert">MegatronBERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2">MegatronGPT2 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mistral">Mistral </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mluke">mLUKE </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mobilebert">MobileBERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mpnet">MPNet </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mpt">MPT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mra">MRA </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mt5">MT5 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/mvp">MVP </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/nezha">NEZHA </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/nllb">NLLB </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/nllb-moe">NLLB-MoE </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" 
href="/docs/transformers/v4.34.0/en/model_doc/nystromformer">Nyströmformer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/open-llama">Open-Llama </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/opt">OPT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/pegasus">Pegasus </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/pegasus_x">PEGASUS-X </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/persimmon">Persimmon </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/phobert">PhoBERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/plbart">PLBart </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/prophetnet">ProphetNet </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/qdqbert">QDQBert </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/rag">RAG </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/realm">REALM </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/reformer">Reformer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/rembert">RemBERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/retribert">RetriBERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/roberta">RoBERTa </a><a 
data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm">RoBERTa-PreLayerNorm </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/roc_bert">RoCBert </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/roformer">RoFormer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/rwkv">RWKV </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/splinter">Splinter </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/squeezebert">SqueezeBERT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/switch_transformers">SwitchTransformers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/t5">T5 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/t5v1.1">T5v1.1 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/tapex">TAPEX </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/transfo-xl">Transformer XL </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/ul2">UL2 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/umt5">UMT5 </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xmod">X-MOD </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xglm">XGLM </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 
text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm">XLM </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet">XLM-ProphetNet </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta">XLM-RoBERTa </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl">XLM-RoBERTa-XL </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlm-v">XLM-V </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/xlnet">XLNet </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-6" href="/docs/transformers/v4.34.0/en/model_doc/yoso">YOSO </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Vision models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Reinforcement learning models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Time series models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 
hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Graph models</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Internal Helpers</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/modeling_utils">Custom Layers and Utilities </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/pipelines_utils">Utilities for pipelines </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/tokenization_utils">Utilities for Tokenizers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/trainer_utils">Utilities for Trainer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/generation_utils">Utilities for Generation </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/image_processing_utils">Utilities for Image Processors </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/audio_utils">Utilities for Audio processing </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/file_utils">General Utilities </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/time_series_utils">Utilities for Time Series </a> </div> </div></nav></div></div></div> <div class="z-1 min-w-0 flex-1"> <div class="px-6 pt-6 md:px-12 md:pt-16 md:pb-16"><div class="max-w-4xl mx-auto mb-10"><div class="relative overflow-hidden rounded-xl bg-gradient-to-br from-orange-300/10 py-5 px-4 ring-1 ring-orange-100/70 md:px-6 md:py-8"><img alt="Hugging Face's logo" class="absolute -right-6 -bottom-6 w-28 -rotate-45 md:hidden" src="/front/assets/huggingface_logo-noborder.svg"> <div class="mb-2 text-2xl font-bold dark:text-gray-200 md:mb-0">Join the Hugging Face community</div> <p class="mb-4 
text-lg text-gray-400 dark:text-gray-300 md:mb-8">and get access to the augmented documentation experience </p> <div class="mb-8 hidden space-y-4 md:block xl:flex xl:space-y-0 xl:space-x-6"><div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-indigo-100 to-indigo-100/20 dark:to-indigo-100"><svg class="text-indigo-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Collaborate on models, datasets and Spaces </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-orange-100 to-orange-100/20 dark:to-orange-50"><svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" class="text-xl text-yellow-400" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M11 15H6l7-14v8h5l-7 14v-8z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Faster examples with accelerated inference </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-gray-500/10 to-gray-500/5"><svg class="text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M14.9804 3C14.9217 3.0002 14.8631 3.00555 14.8054 3.016C11.622 3.58252 8.76073 5.30669 6.77248 7.85653C4.78422 10.4064 3.80955 13.6016 4.03612 16.8271C4.26268 20.0525 5.67447 23.0801 7.99967 25.327C10.3249 27.5738 13.3991 28.8811 16.6304 28.997C16.7944 29.003 16.9584 28.997 17.1204 28.997C19.2193 28.9984 21.2877 28.4943 23.1507 27.5274C25.0137 26.5605 26.6164 25.1592 27.8234 23.442C27.9212 23.294 27.9783 23.1229 27.9889 22.9458C27.9995 22.7687 27.9633 22.592 27.884 22.4333C27.8046 22.2747 27.6848 22.1397 27.5367 22.0421C27.3887 21.9444 27.2175 21.8875 27.0404 21.877C25.0426 21.7017 23.112 21.0693 21.3976 20.0288C19.6832 18.9884 18.231 17.5676 17.1533 15.8764C16.0756 14.1852 15.4011 12.2688 15.1822 10.2754C14.9632 8.28193 15.2055 6.26484 15.8904 4.38C15.9486 4.22913 15.97 4.06652 15.9527 3.90572C15.9354 3.74492 15.8799 3.59059 15.7909 3.45557C15.7019 3.32055 15.5819 3.20877 15.4409 3.12952C15.2999 3.05028 15.142 3.00587 14.9804 3Z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Switch between documentation themes </div></div></div> <div 
class="flex items-center space-x-2.5"><a href="/join"><button class="rounded-lg bg-white bg-gradient-to-br from-gray-100/20 to-gray-200/60 py-1.5 px-5 font-semibold text-gray-700 shadow-sm ring-1 ring-gray-300/60 hover:to-gray-100/70 hover:ring-gray-300/30 active:shadow-inner">Sign Up</button></a> <p class="text-gray-500 dark:text-gray-300">to get started</p></div></div></div> <div class="prose-doc prose relative mx-auto max-w-4xl break-words"> <p></p> <h1 class="relative group"><a id="codellama" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#codellama"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1wz1vb5">CodeLlama</span></h1> <h2 class="relative group"><a id="overview" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#overview"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1jsw1pg">Overview</span></h2> <p data-svelte-h="svelte-1s05gbs">The Code Llama model was proposed in <a href="https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/" rel="nofollow">Code Llama: Open Foundation Models for Code</a> by Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve.</p> <p data-svelte-h="svelte-vfdo9a">The abstract from the paper is the following:</p> <p data-svelte-h="svelte-mbr49z"><em>We release Code Llama, a family of large language models for code based on Llama 2 providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction 
following ability for programming tasks. We provide multiple flavors to cover a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B and 34B parameters each. All models are trained on sequences of 16k tokens and show improvements on inputs with up to 100k tokens. 7B and 13B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content. Code Llama reaches state-of-the-art performance among open models on several code benchmarks, with scores of up to 53% and 55% on HumanEval and MBPP, respectively. Notably, Code Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our models outperform every other publicly available model on MultiPL-E. We release Code Llama under a permissive license that allows for both research and commercial use._

Check out all Code Llama models [here](https://huggingface.co/models?search=code_llama) and the officially released ones in the [codellama org](https://huggingface.co/codellama).

The `Llama2` family models, on which Code Llama is based, were trained using `bfloat16`, but the original inference uses `float16`. Let's look at the different precisions:

- `float32`: The PyTorch convention on model initialization is to load models in `float32`, no matter which `dtype` the model weights were stored in. `transformers` also follows this convention for consistency with PyTorch, so `float32` is used by default. If instead you want the `AutoModel` API to load the checkpoint in its stored weight type, specify `torch_dtype="auto"`, e.g. `model = AutoModelForCausalLM.from_pretrained("path", torch_dtype="auto")`.
- `bfloat16`: Code Llama was trained with this precision, so we recommend using it for further training or fine-tuning.
- `float16`: We recommend running inference in this precision, as it is usually faster than `bfloat16` and evaluation metrics show no discernible degradation with respect to `bfloat16`. You can also run inference in `bfloat16`, and we recommend you check inference results with both `float16` and `bfloat16` after fine-tuning.

As mentioned above, the `dtype` of the stored weights is mostly irrelevant unless you use `torch_dtype="auto"` when initializing a model. The reason is that the model is first downloaded (using the `dtype` of the checkpoints online) and then cast to the default `dtype` of `torch` (`torch.float32`). If a `torch_dtype` is specified, it is used instead.
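For illustration, a minimal sketch of these precision options, using the 7B checkpoint as an example (the same applies to the other sizes):

```python
import torch
from transformers import AutoModelForCausalLM

checkpoint = "codellama/CodeLlama-7b-hf"

# Further training / fine-tuning: load in bfloat16, the precision the model was trained with.
model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype=torch.bfloat16)

# Inference: float16 is usually faster, with no discernible metric degradation.
model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype=torch.float16)

# Keep whatever dtype the checkpoint was stored in.
model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype="auto")
```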
Tips:

- These models have the same architecture as the `Llama2` models.
- The infilling task is supported out of the box: use `tokenizer.fill_token` where you want your input to be filled.
- The model conversion script is the same as for the `Llama2` family.

Here is a sample usage:

```bash
python src/transformers/models/llama/convert_llama_weights_to_hf.py \
    --input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path
```

Note that executing the script requires enough CPU RAM to host the whole model in float16 precision: even though the biggest versions come in several checkpoints, each checkpoint contains a part of every weight of the model, so they all need to be loaded in RAM at once.
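As a rough back-of-the-envelope check (an illustrative estimate, not an exact requirement), you can gauge the float16 footprint of the weights from the parameter count:

```python
# float16 stores 2 bytes per parameter, so the weights alone take roughly:
num_params = 7e9      # e.g. the 7B variant; use 13e9 or 34e9 for the larger ones
bytes_per_param = 2   # float16
print(f"~{num_params * bytes_per_param / 1e9:.0f} GB of CPU RAM")  # ~14 GB for the 7B variant
```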
class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> LlamaForCausalLM, CodeLlamaTokenizer <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = CodeLlamaTokenizer.from_pretrained(<span class="hljs-string">"codellama/CodeLlama-7b-hf"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>model = LlamaForCausalLM.from_pretrained(<span class="hljs-string">"codellama/CodeLlama-7b-hf"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>PROMPT = <span class="hljs-string">'''def remove_non_ascii(s: str) -&gt; str: """ &lt;FILL_ME&gt; return result '''</span> <span class="hljs-meta">&gt;&gt;&gt; </span>input_ids = tokenizer(PROMPT, return_tensors=<span class="hljs-string">"pt"</span>)[<span class="hljs-string">"input_ids"</span>] <span class="hljs-meta">&gt;&gt;&gt; </span>generated_ids = model.generate(input_ids, max_new_tokens=<span class="hljs-number">128</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>filling = tokenizer.batch_decode(generated_ids[:, input_ids.shape[<span class="hljs-number">1</span>]:], skip_special_tokens = <span class="hljs-literal">True</span>)[<span class="hljs-number">0</span>] <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-built_in">print</span>(PROMPT.replace(<span class="hljs-string">"&lt;FILL_ME&gt;"</span>, filling)) <span class="hljs-keyword">def</span> <span class="hljs-title function_">remove_non_ascii</span>(<span class="hljs-params">s: <span class="hljs-built_in">str</span></span>) -&gt; <span class="hljs-built_in">str</span>: <span class="hljs-string">""" Remove non-ASCII characters from a string. Args: s: The string to remove non-ASCII characters from. Returns: The string with non-ASCII characters removed. """</span> result = <span class="hljs-string">""</span> <span class="hljs-keyword">for</span> c <span class="hljs-keyword">in</span> s: <span class="hljs-keyword">if</span> <span class="hljs-built_in">ord</span>(c) &lt; <span class="hljs-number">128</span>: result += c <span class="hljs-keyword">return</span> result</pre></div> <p data-svelte-h="svelte-cm0lfz">If you only want the infilled part:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> pipeline <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span 
class="hljs-meta">&gt;&gt;&gt; </span>generator = pipeline(<span class="hljs-string">"text-generation"</span>,model=<span class="hljs-string">"codellama/CodeLlama-7b-hf"</span>,torch_dtype=torch.float16, device_map=<span class="hljs-string">"auto"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>generator(<span class="hljs-string">'def remove_non_ascii(s: str) -&gt; str:\n """ &lt;FILL_ME&gt;\n return result'</span>, max_new_tokens = <span class="hljs-number">128</span>, return_type = <span class="hljs-number">1</span>)</pre></div> <p data-svelte-h="svelte-xefpq6">Under the hood, the tokenizer <a href="https://huggingface.co/docs/transformers/main/model_doc/code_llama#transformers.CodeLlamaTokenizer.fill_token" rel="nofollow">automatically splits by <code>&lt;FILL_ME&gt;</code></a> to create a formatted input string that follows <a href="https://github.com/facebookresearch/codellama/blob/cb51c14ec761370ba2e2bc351374a79265d0465e/llama/generation.py#L402" rel="nofollow">the original training pattern</a>. This is more robust than preparing the pattern yourself: it avoids pitfalls, such as token glueing, that are very hard to debug. To see how much CPU and GPU memory you need for this model or others, try <a href="https://huggingface.co/spaces/hf-accelerate/model-memory-usage" rel="nofollow">this calculator</a> which can help determine that value.</p> <ul data-svelte-h="svelte-1xmggxn"><li>The LLaMA tokenizer is a BPE model based on <a href="https://github.com/google/sentencepiece" rel="nofollow">sentencepiece</a>. One quirk of sentencepiece is that when decoding a sequence, if the first token is the start of the word (e.g. “Banana”), the tokenizer does not prepend the prefix space to the string.</li></ul> <p data-svelte-h="svelte-z5ta87">This model was contributed by <a href="https://huggingface.co/ArthurZ" rel="nofollow">ArthurZucker</a>. 
This model was contributed by [ArthurZucker](https://huggingface.co/ArthurZ). The original code of the authors can be found [here](https://github.com/facebookresearch/llama).

## CodeLlamaTokenizer
fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/code_llama/tokenization_code_llama.py#L59" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">vocab_file<span class="opacity-60"></span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">unk_token<span class="opacity-60"> = '&lt;unk&gt;'</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">bos_token<span class="opacity-60"> = '&lt;s&gt;'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">eos_token<span class="opacity-60"> = '&lt;/s&gt;'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">prefix_token<span class="opacity-60"> = '▁&lt;PRE&gt;'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">middle_token<span class="opacity-60"> = '▁&lt;MID&gt;'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">suffix_token<span class="opacity-60"> = '▁&lt;SUF&gt;'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">eot_token<span class="opacity-60"> = '▁&lt;EOT&gt;'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">fill_token<span class="opacity-60"> = '&lt;FILL_ME&gt;'</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">suffix_first<span class="opacity-60"> = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">sp_model_kwargs<span class="opacity-60">: typing.Union[typing.Dict[str, typing.Any], NoneType] = None</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">add_bos_token<span class="opacity-60"> = True</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">add_eos_token<span class="opacity-60"> = False</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">clean_up_tokenization_spaces<span class="opacity-60"> = False</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white 
dark:hover:bg-white dark:hover:text-black">additional_special_tokens<span class="opacity-60"> = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">use_default_system_prompt<span class="opacity-60"> = False</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">**kwargs<span class="opacity-60"></span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details max-h-96 overflow-hidden"><div class="absolute inset-0 bg-gradient-to-t from-white to-white/0 dark:from-gray-950 dark:to-gray-950/0 z-10 flex justify-center"><button class="absolute leading-tight px-3 py-1.5 dark:bg-gray-900 bg-black text-gray-200 hover:text-white rounded-xl bottom-12 ring-offset-2 hover:ring-black hover:ring-2">Expand 12 parameters</button></div> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.CodeLlamaTokenizer.vocab_file" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.CodeLlamaTokenizer.vocab_file"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>vocab_file</strong> (<code>str</code>) — Path to the vocabulary file.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.CodeLlamaTokenizer.eos_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.CodeLlamaTokenizer.eos_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" 
fill="currentColor"></path></svg></span></a> <span><strong>eos_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;/s&gt;"</code>) — The end of sequence token.<p></p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"> <p>When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the <code>sep_token</code>.</p> </div></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.CodeLlamaTokenizer.unk_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.CodeLlamaTokenizer.unk_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>unk_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;unk&gt;"</code>) — The unknown token. 
A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.CodeLlamaTokenizer.prefix_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.CodeLlamaTokenizer.prefix_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>prefix_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"▁&lt;PRE&gt;"</code>) — Prefix token used for infilling.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.CodeLlamaTokenizer.suffix_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.CodeLlamaTokenizer.suffix_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>suffix_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"▁&lt;SUF&gt;"</code>) — Suffix token used for infilling.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.CodeLlamaTokenizer.middle_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.CodeLlamaTokenizer.middle_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 
0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>middle_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"▁&lt;MID&gt;"</code>) — Middle token used for infilling.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.CodeLlamaTokenizer.eot_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.CodeLlamaTokenizer.eot_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>eot_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"▁&lt;EOT&gt;"</code>) — End of text token used for infilling.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.CodeLlamaTokenizer.fill_token" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.CodeLlamaTokenizer.fill_token"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>fill_token</strong> (<code>str</code>, <em>optional</em>, defaults to <code>"&lt;FILL_ME&gt;"</code>) — The token used to split the input between the prefix and suffix.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.CodeLlamaTokenizer.suffix_first" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.CodeLlamaTokenizer.suffix_first"><span><svg 
class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>suffix_first</strong> (<code>bool</code>, <em>optional</em>, default to <code>False</code>) — Whether the input prompt and suffix should be formatted with the suffix first.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.CodeLlamaTokenizer.additional_special_tokens" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.CodeLlamaTokenizer.additional_special_tokens"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>additional_special_tokens</strong> (<code>List[str]</code>, <em>optional</em>) — Additional special tokens used by the tokenizer.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.CodeLlamaTokenizer.sp_model_kwargs" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.CodeLlamaTokenizer.sp_model_kwargs"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>sp_model_kwargs</strong> (<code>dict</code>, <em>optional</em>) — Will be passed to 
the <code>SentencePieceProcessor.__init__()</code> method. The <a href="https://github.com/google/sentencepiece/tree/master/python" rel="nofollow">Python wrapper for SentencePiece</a> can be used, among other things, to set:<p></p> <ul> <li> <p><code>enable_sampling</code>: Enable subword regularization.</p> </li> <li> <p><code>nbest_size</code>: Sampling parameters for unigram. Invalid for BPE-Dropout.</p> <ul> <li><code>nbest_size = {0,1}</code>: No sampling is performed.</li> <li><code>nbest_size &gt; 1</code>: samples from the nbest_size results.</li> <li><code>nbest_size &lt; 0</code>: assuming that nbest_size is infinite and samples from the all hypothesis (lattice) using forward-filtering-and-backward-sampling algorithm.</li> </ul> </li> <li> <p><code>alpha</code>: Smoothing parameter for unigram sampling, and dropout probability of merge operations for BPE-dropout.</p> </li> </ul></span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.CodeLlamaTokenizer.use_default_system_prompt" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.CodeLlamaTokenizer.use_default_system_prompt"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>use_default_system_prompt</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether or not the default system prompt for Llama should be used.</span></span> </li></ul> </div></div> <p data-svelte-h="svelte-1vmwvh5">Construct a CodeLlama tokenizer. Based on byte-level Byte-Pair-Encoding. 
The default padding token is unset as there is no padding token in the original model.</p> <p data-svelte-h="svelte-1thmlao">The default configuration match that of <a href="https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf/blob/main/tokenizer_config.json" rel="nofollow">codellama/CodeLlama-7b-Instruct-hf</a> which supports prompt infilling.</p> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.CodeLlamaTokenizer.build_inputs_with_special_tokens"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>build_inputs_with_special_tokens</span></h4> <a id="transformers.CodeLlamaTokenizer.build_inputs_with_special_tokens" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.CodeLlamaTokenizer.build_inputs_with_special_tokens"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/code_llama/tokenization_code_llama.py#L361" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma 
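Because no padding token is defined, batched encoding requires setting one yourself. A minimal sketch, assuming you are happy to reuse the EOS token for padding (a common convention, not something the checkpoint itself specifies):

```python
>>> from transformers import CodeLlamaTokenizer

>>> tokenizer = CodeLlamaTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")

>>> # The checkpoint ships without a padding token; reuse EOS so padding works.
>>> tokenizer.pad_token = tokenizer.eos_token
>>> batch = tokenizer(["def f():", "def much_longer_function(x, y):"], padding=True, return_tensors="pt")
```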
cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_0<span class="opacity-60"></span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_1<span class="opacity-60"> = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> </div></div></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.CodeLlamaTokenizer.get_special_tokens_mask"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>get_special_tokens_mask</span></h4> <a id="transformers.CodeLlamaTokenizer.get_special_tokens_mask" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.CodeLlamaTokenizer.get_special_tokens_mask"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/code_llama/tokenization_code_llama.py#L373" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm 
!leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_0<span class="opacity-60">: typing.List[int]</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_1<span class="opacity-60">: typing.Optional[typing.List[int]] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">already_has_special_tokens<span class="opacity-60">: bool = False</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><code>List[int]</code></span></span></p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.CodeLlamaTokenizer.get_special_tokens_mask.token_ids_0" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.CodeLlamaTokenizer.get_special_tokens_mask.token_ids_0"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_ids_0</strong> (<code>List[int]</code>) — List of IDs.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.CodeLlamaTokenizer.get_special_tokens_mask.token_ids_1" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.CodeLlamaTokenizer.get_special_tokens_mask.token_ids_1"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 
28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_ids_1</strong> (<code>List[int]</code>, <em>optional</em>) — Optional second list of IDs for sequence pairs.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.CodeLlamaTokenizer.get_special_tokens_mask.already_has_special_tokens" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.CodeLlamaTokenizer.get_special_tokens_mask.already_has_special_tokens"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>already_has_special_tokens</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether or not the token list is already formatted with special tokens for the model.</span></span> </li></ul> <div id="transformers.CodeLlamaTokenizer.get_special_tokens_mask.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><code>List[int]</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.</p> </p> </div></div> <p data-svelte-h="svelte-1f4f5kp">Retrieve sequence ids from a token list that has no special tokens added. 
This method is called when adding special tokens using the tokenizer <code>prepare_for_model</code> method.</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.CodeLlamaTokenizer.create_token_type_ids_from_sequences"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>create_token_type_ids_from_sequences</span></h4> <a id="transformers.CodeLlamaTokenizer.create_token_type_ids_from_sequences" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.CodeLlamaTokenizer.create_token_type_ids_from_sequences"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/code_llama/tokenization_code_llama.py#L411" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_0<span class="opacity-60">: typing.List[int]</span></span></span><span class="comma cursor-pointer"><span 
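A small usage sketch (reusing the `tokenizer` from the earlier snippets); with the default `add_bos_token=True`, the mask is expected to start with a 1 for the BOS position followed by 0s for the ordinary tokens:

```python
>>> ids = tokenizer.encode("def f(): pass", add_special_tokens=False)
>>> mask = tokenizer.get_special_tokens_mask(ids)

>>> # For ids that already contain special tokens, pass the flag instead:
>>> full_ids = tokenizer("def f(): pass")["input_ids"]
>>> full_mask = tokenizer.get_special_tokens_mask(full_ids, already_has_special_tokens=True)
```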
class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_1<span class="opacity-60">: typing.Optional[typing.List[int]] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><code>List[int]</code></span></span></p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.CodeLlamaTokenizer.create_token_type_ids_from_sequences.token_ids_0" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.CodeLlamaTokenizer.create_token_type_ids_from_sequences.token_ids_0"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_ids_0</strong> (<code>List[int]</code>) — List of ids.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.CodeLlamaTokenizer.create_token_type_ids_from_sequences.token_ids_1" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.CodeLlamaTokenizer.create_token_type_ids_from_sequences.token_ids_1"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_ids_1</strong> (<code>List[int]</code>, <em>optional</em>) — Optional second list of IDs for sequence pairs.</span></span> </li></ul> <div id="transformers.CodeLlamaTokenizer.create_token_type_ids_from_sequences.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p 
class="text-base">Returns</p> <p><code>List[int]</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>List of <a href="../glossary#token-type-ids">token type IDs</a> according to the given sequence(s).</p> </p> </div></div> <p data-svelte-h="svelte-13bfd60">Creates a mask from the two sequences passed to be used in a sequence-pair classification task. An ALBERT</p> <div class="relative group rounded-md"><a id="transformers.CodeLlamaTokenizer.create_token_type_ids_from_sequences.example" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.CodeLlamaTokenizer.create_token_type_ids_from_sequences.example"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <p data-svelte-h="svelte-16klr56">sequence pair mask has the following format:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class="">0<span class="hljs-number"> 0 </span>0<span class="hljs-number"> 0 </span>0<span class="hljs-number"> 0 </span>0<span class="hljs-number"> 0 </span>0<span class="hljs-number"> 0 </span>0<span class="hljs-number"> 1 </span>1<span class="hljs-number"> 1 </span>1<span class="hljs-number"> 1 </span>1<span class="hljs-number"> 1 </span>1 1 | first sequence | second sequence |</pre></div></div> <p data-svelte-h="svelte-wtrslu">if token_ids_1 is None, only returns the first portion of the mask (0s).</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r 
#### save_vocabulary

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/code_llama/tokenization_code_llama.py#L333)

`( save_directory, filename_prefix: typing.Optional[str] = None )` → `Tuple(str)`

Parameters:

- **save_directory** (`str`): The directory in which to save the vocabulary.

Returns `Tuple(str)`: paths to the files saved.

Save the vocabulary and special tokens file to a directory.
## CodeLlamaTokenizerFast

### class transformers.CodeLlamaTokenizerFast

[source](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/code_llama/tokenization_code_llama_fast.py#L52)

`( vocab_file = None, tokenizer_file = None, clean_up_tokenization_spaces = False, unk_token = '<unk>', bos_token = '<s>', eos_token = '</s>', prefix_token = '▁<PRE>', middle_token = '▁<MID>', suffix_token = '▁<SUF>', eot_token = '▁<EOT>', fill_token = '<FILL_ME>', additional_special_tokens = None, add_bos_token = True, add_eos_token = False, use_default_system_prompt = False, **kwargs )`

Parameters
- **vocab_file** (`str`) — [SentencePiece](https://github.com/google/sentencepiece) file (generally has a .model extension) that contains the vocabulary necessary to instantiate a tokenizer.
- **tokenizer_file** (`str`) — [tokenizers](https://github.com/huggingface/tokenizers) file (generally has a .json extension) that contains everything needed to load the tokenizer.
- **clean_up_tokenization_spaces** (`bool`, *optional*, defaults to `False`) — Whether to clean up spaces after decoding; cleanup consists of removing potential artifacts like extra spaces.
- **bos_token** (`str`, *optional*, defaults to `"<s>"`) — The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
- **eos_token** (`str`, *optional*, defaults to `"</s>"`) — The end of sequence token.
- **unk_token** (`str`, *optional*, defaults to `"<unk>"`) — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
- **prefix_token** (`str`, *optional*, defaults to `"▁<PRE>"`) — Prefix token used for infilling.
- **suffix_token** (`str`, *optional*, defaults to `"▁<SUF>"`) — Suffix token used for infilling.
- **middle_token** (`str`, *optional*, defaults to `"▁<MID>"`) — Middle token used for infilling.
- **eot_token** (`str`, *optional*, defaults to `"▁<EOT>"`) — End of text token used for infilling.
- **fill_token** (`str`, *optional*, defaults to `"<FILL_ME>"`) — The token used to split the input between the prefix and suffix.
- **suffix_first** (`bool`, *optional*, defaults to `False`) — Whether the input prompt and suffix should be formatted with the suffix first.
- **additional_special_tokens** (`List[str]`, *optional*) — Additional special tokens used by the tokenizer.
- **use_default_system_prompt** (`bool`, *optional*, defaults to `False`) — Whether or not the default system prompt for Llama should be used.

Construct a Llama tokenizer, based on byte-level Byte-Pair-Encoding. Notably, this uses ByteFallback and no normalization.

```python
>>> from transformers import CodeLlamaTokenizerFast

>>> tokenizer = CodeLlamaTokenizerFast.from_pretrained("hf-internal-testing/llama-tokenizer")
>>> tokenizer.encode("Hello this is a test")
[1, 15043, 445, 338, 263, 1243]
```

If you want to change the `bos_token` or the `eos_token`, make sure to specify them when initializing the model, or call `tokenizer.update_post_processor()` to make sure that the post-processing is correctly done (otherwise the values of the first token and final token of an encoded sequence will not be correct). For more details, check out the [post-processors documentation](https://huggingface.co/docs/tokenizers/api/post-processors).

This tokenizer inherits from [PreTrainedTokenizerFast](/docs/transformers/v4.34.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast), which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. The default configuration matches that of [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf/blob/main/tokenizer_config.json), which supports prompt infilling.
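Since the default configuration supports prompt infilling, the sketch below shows how the `fill_token` is typically used. The checkpoint name and the prompt are illustrative assumptions rather than part of the reference above:

```python
from transformers import CodeLlamaTokenizerFast

# Illustrative checkpoint with infilling support.
tokenizer = CodeLlamaTokenizerFast.from_pretrained("codellama/CodeLlama-7b-hf")

# The <FILL_ME> token splits the prompt into a prefix and a suffix; the tokenizer
# rewrites them into the ▁<PRE> / ▁<SUF> / ▁<MID> infilling format expected by the model.
prompt = 'def remove_non_ascii(s: str) -> str:\n    """ <FILL_ME>\n    return result'
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Decoding shows the special infilling tokens inserted around the prefix and suffix.
print(tokenizer.decode(input_ids[0]))
```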
256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/code_llama/tokenization_code_llama_fast.py#L396" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_0<span class="opacity-60">: typing.List[int]</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_1<span class="opacity-60">: typing.Optional[typing.List[int]] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><code>List[int]</code></span></span></p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.CodeLlamaTokenizerFast.build_inputs_with_special_tokens.token_ids_0" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.CodeLlamaTokenizerFast.build_inputs_with_special_tokens.token_ids_0"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_ids_0</strong> (<code>List[int]</code>) — List of IDs to which the special tokens will be added.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.CodeLlamaTokenizerFast.build_inputs_with_special_tokens.token_ids_1" 
class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.CodeLlamaTokenizerFast.build_inputs_with_special_tokens.token_ids_1"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_ids_1</strong> (<code>List[int]</code>, <em>optional</em>) — Optional second list of IDs for sequence pairs.</span></span> </li></ul> <div id="transformers.CodeLlamaTokenizerFast.build_inputs_with_special_tokens.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p><code>List[int]</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>list of <a href="../glossary#input-ids">input IDs</a> with the appropriate special tokens.</p> </p> </div></div> <p data-svelte-h="svelte-1vll0v2">Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. The special tokens depend on calling set_lang.</p> <p data-svelte-h="svelte-90np8u">An NLLB sequence has the following format, where <code>X</code> represents the sequence:</p> <ul data-svelte-h="svelte-mlrsks"><li><code>input_ids</code> (for encoder) <code>X [eos, src_lang_code]</code></li> <li><code>decoder_input_ids</code>: (for decoder) <code>X [eos, tgt_lang_code]</code></li></ul> <p data-svelte-h="svelte-46aam0">BOS is never used. 
Pairs of sequences are not the expected use case, but they will be handled without a separator.</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.CodeLlamaTokenizerFast.get_special_tokens_mask"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>get_special_tokens_mask</span></h4> <a id="transformers.CodeLlamaTokenizerFast.get_special_tokens_mask" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.CodeLlamaTokenizerFast.get_special_tokens_mask"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/tokenization_utils_base.py#L3770" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_0<span class="opacity-60">: typing.List[int]</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white 
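A short sketch of the default behaviour, reusing the sentence from the encoding example above; the exact ids depend on the checkpoint, so treat them as illustrative:

```python
from transformers import CodeLlamaTokenizerFast

tokenizer = CodeLlamaTokenizerFast.from_pretrained("hf-internal-testing/llama-tokenizer")

# Encode without special tokens, then add them explicitly.
ids = tokenizer.encode("Hello this is a test", add_special_tokens=False)
with_special = tokenizer.build_inputs_with_special_tokens(ids)

print(ids)           # e.g. [15043, 445, 338, 263, 1243]
print(with_special)  # e.g. [1, 15043, 445, 338, 263, 1243] — with add_bos_token=True the <s> id is prepended
```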
dark:hover:text-black">token_ids_1<span class="opacity-60">: typing.Optional[typing.List[int]] = None</span></span></span><span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">already_has_special_tokens<span class="opacity-60">: bool = False</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span>A list of integers in the range [0, 1]</span></span></p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.CodeLlamaTokenizerFast.get_special_tokens_mask.token_ids_0" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.CodeLlamaTokenizerFast.get_special_tokens_mask.token_ids_0"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_ids_0</strong> (<code>List[int]</code>) — List of ids of the first sequence.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.CodeLlamaTokenizerFast.get_special_tokens_mask.token_ids_1" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.CodeLlamaTokenizerFast.get_special_tokens_mask.token_ids_1"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_ids_1</strong> (<code>List[int]</code>, <em>optional</em>) — List of ids of the second sequence.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex 
space-x-1.5 items-start"><a id="transformers.CodeLlamaTokenizerFast.get_special_tokens_mask.already_has_special_tokens" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.CodeLlamaTokenizerFast.get_special_tokens_mask.already_has_special_tokens"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>already_has_special_tokens</strong> (<code>bool</code>, <em>optional</em>, defaults to <code>False</code>) — Whether or not the token list is already formatted with special tokens for the model.</span></span> </li></ul> <div id="transformers.CodeLlamaTokenizerFast.get_special_tokens_mask.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 text-gray-800 rounded "><p class="text-base">Returns</p> <p>A list of integers in the range [0, 1]</p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>1 for a special token, 0 for a sequence token.</p> </p> </div></div> <p data-svelte-h="svelte-1wmjg8a">Retrieves sequence ids from a token list that has no special tokens added. 
This method is called when adding special tokens using the tokenizer <code>prepare_for_model</code> or <code>encode_plus</code> methods.</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.CodeLlamaTokenizerFast.create_token_type_ids_from_sequences"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>create_token_type_ids_from_sequences</span></h4> <a id="transformers.CodeLlamaTokenizerFast.create_token_type_ids_from_sequences" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.CodeLlamaTokenizerFast.create_token_type_ids_from_sequences"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/tokenization_utils_base.py#L3305" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span class="comma cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_0<span class="opacity-60">: typing.List[int]</span></span></span><span class="comma 
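For fast tokenizers like this one, the mask is usually queried on a sequence that already contains the special tokens (or obtained with `return_special_tokens_mask=True` when encoding). A minimal sketch, assuming the same test checkpoint as above:

```python
from transformers import CodeLlamaTokenizerFast

tokenizer = CodeLlamaTokenizerFast.from_pretrained("hf-internal-testing/llama-tokenizer")

# Encode with special tokens, then ask which positions are special.
ids = tokenizer.encode("Hello this is a test")  # <s> is prepended by default
mask = tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True)

print(mask)  # e.g. [1, 0, 0, 0, 0, 0] — the leading 1 marks the <s> token
```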
cursor-pointer"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">token_ids_1<span class="opacity-60">: typing.Optional[typing.List[int]] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> <span class="font-bold" data-svelte-h="svelte-1j6k10o">→</span> <span class="rounded hover:bg-gray-400 cursor-pointer"><span><code>List[int]</code></span></span></p> <div class="!mb-10 relative docstring-details "> <p class="flex items-center font-semibold !mt-2 !mb-2 text-gray-800" data-svelte-h="svelte-lt6pb6">Parameters <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700 ml-3"></span></p> <ul class="px-2"><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.CodeLlamaTokenizerFast.create_token_type_ids_from_sequences.token_ids_0" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.CodeLlamaTokenizerFast.create_token_type_ids_from_sequences.token_ids_0"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_ids_0</strong> (<code>List[int]</code>) — The first tokenized sequence.</span></span> </li><li class="text-base !pl-4 my-3 rounded "><span class="group flex space-x-1.5 items-start"><a id="transformers.CodeLlamaTokenizerFast.create_token_type_ids_from_sequences.token_ids_1" class="header-link block pr-0.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers.CodeLlamaTokenizerFast.create_token_type_ids_from_sequences.token_ids_1"><span><svg class="text-smd" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span><strong>token_ids_1</strong> (<code>List[int]</code>, <em>optional</em>) — The second tokenized sequence.</span></span> </li></ul> <div id="transformers.CodeLlamaTokenizerFast.create_token_type_ids_from_sequences.returns" class="flex items-center font-semibold space-x-3 text-base !mt-0 !mb-0 
text-gray-800 rounded "><p class="text-base">Returns</p> <p><code>List[int]</code></p> <span class="flex-auto border-t-2 border-gray-100 dark:border-gray-700"></span></div> <p class="text-base"> <p>The token type ids.</p> </p> </div></div> <p data-svelte-h="svelte-zj1vf1">Create the token type IDs corresponding to the sequences passed. <a href="../glossary#token-type-ids">What are token type IDs?</a></p> <p data-svelte-h="svelte-9vptpw">Should be overridden in a subclass if the model has a special way of building those.</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.CodeLlamaTokenizerFast.update_post_processor"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>update_post_processor</span></h4> <a id="transformers.CodeLlamaTokenizerFast.update_post_processor" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.CodeLlamaTokenizerFast.update_post_processor"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/code_llama/tokenization_code_llama_fast.py#L175" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span 
data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> </div></div> <p data-svelte-h="svelte-nfci2w">Updates the underlying post processor with the current <code>bos_token</code> and <code>eos_token</code>.</p></div> <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"><div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="transformers.CodeLlamaTokenizerFast.save_vocabulary"><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"></path></svg>save_vocabulary</span></h4> <a id="transformers.CodeLlamaTokenizerFast.save_vocabulary" class="header-link invisible with-hover:group-hover:visible pr-2" href="#transformers.CodeLlamaTokenizerFast.save_vocabulary"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/code_llama/tokenization_code_llama_fast.py#L325" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span class="hidden md:block mx-0.5 hover:!underline" data-svelte-h="svelte-122apf4">source</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span></a></span> <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span data-svelte-h="svelte-8mvn6a">(</span> <span 
class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">save_directory<span class="opacity-60">: str</span></span></span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">filename_prefix<span class="opacity-60">: typing.Optional[str] = None</span></span></span> <span data-svelte-h="svelte-1jq0pl7">)</span> </p> <div class="!mb-10 relative docstring-details "> </div></div></div></div> <p></p> <div id="svelte-announcer" aria-live="assertive" aria-atomic="true" style="position: absolute; left: 0px; top: 0px; clip: rect(0px, 0px, 0px, 0px); clip-path: inset(50%); overflow: hidden; white-space: nowrap; width: 1px; height: 1px;"></div></div> <div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/codegen" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>CodeGen</a> <a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/model_doc/convbert" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">ConvBERT<span class="ml-2 translate-y-px">→</span></a></div></div></div> <div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" data-props="{&quot;chapter&quot;:{&quot;title&quot;:&quot;CodeLlama&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;codellama&quot;,&quot;url&quot;:&quot;#codellama&quot;,&quot;sections&quot;:[{&quot;title&quot;:&quot;Overview&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;overview&quot;,&quot;url&quot;:&quot;#overview&quot;},{&quot;title&quot;:&quot;CodeLlamaTokenizer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.CodeLlamaTokenizer&quot;,&quot;url&quot;:&quot;#transformers.CodeLlamaTokenizer&quot;},{&quot;title&quot;:&quot;CodeLlamaTokenizerFast&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers.CodeLlamaTokenizerFast&quot;,&quot;url&quot;:&quot;#transformers.CodeLlamaTokenizerFast&quot;}]}}" data-target="SubSideMenu"><nav class="hidden h-screen w-[270px] flex-none flex-col space-y-3 overflow-y-auto break-words border-l pt-24 pl-6 pr-10 pb-16 text-sm lg:flex 2xl:w-[305px]"><a href="#codellama" class=" text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-codellama"><wbr>Code<wbr>Llama</a> <a href="#overview" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-overview"><wbr>Overview</a> <a href="#transformers.CodeLlamaTokenizer" class="pl-4 text-gray-700 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.CodeLlamaTokenizer"><wbr>Code<wbr>Llama<wbr>Tokenizer</a> <a href="#transformers.CodeLlamaTokenizerFast" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-transformers.CodeLlamaTokenizerFast"><wbr>Code<wbr>Llama<wbr>Tokenizer<wbr>Fast</a> </nav></div></div></div> <div id="doc-footer"></div></main> </div> <script> import("/front/build/kube-5e23f38/index.js"); window.moonSha = "kube-5e23f38/"; window.hubConfig = 
JSON.parse(`{"features":{"signupDisabled":false},"sshGitUrl":"git@hf.co","moonHttpUrl":"https://huggingface.co","captchaApiKey":"bd5f2066-93dc-4bdd-a64b-a24646ca3859","stripePublicKey":"pk_live_x2tdjFXBCvXo2FFmMybezpeM00J6gPCAAc","environment":"production","userAgent":"HuggingFace (production)"}`); </script> <!-- Stripe --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://js.stripe.com/v3/"; script.async = true; document.head.appendChild(script); } </script> <!-- Google analytics v4 --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL"; script.async = true; document.head.appendChild(script); window.dataLayer = window.dataLayer || []; function gtag() { if (window.dataLayer !== undefined) { window.dataLayer.push(arguments); } } gtag("js", new Date()); gtag("config", "G-8Q63TH4CSL", { page_path: "/docs/transformers/v4.34.0/en/model_doc/code_llama" }); /// ^ See https://developers.google.com/analytics/devguides/collection/gtagjs/pages gtag("consent", "default", { ad_storage: "denied", analytics_storage: "denied" }); /// ^ See https://developers.google.com/tag-platform/gtagjs/reference#consent /// TODO: ask the user for their consent and update this with gtag('consent', 'update') } </script> <!-- Google Analytics v3 (deprecated) --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { (function (i, s, o, g, r, a, m) { i["GoogleAnalyticsObject"] = r; (i[r] = i[r] || function () { (i[r].q = i[r].q || []).push(arguments); }), (i[r].l = 1 * new Date()); (a = s.createElement(o)), (m = s.getElementsByTagName(o)[0]); a.async = 1; a.src = g; m.parentNode.insertBefore(a, m); })(window, document, "script", "https://www.google-analytics.com/analytics.js", "ganalytics"); ganalytics("create", "UA-83738774-2", "auto"); ganalytics("send", "pageview", "/docs/transformers/v4.34.0/en/model_doc/code_llama"); } </script> <iframe name="__privateStripeMetricsController5180" frameborder="0" allowtransparency="true" scrolling="no" role="presentation" allow="payment *" src="https://js.stripe.com/v3/m-outer-27c67c0d52761104439bb051c7856ab1.html#url=https%3A%2F%2Fhuggingface.co%2Fdocs%2Ftransformers%2Fv4.34.0%2Fen%2Fmodel_doc%2Fcode_llama%23transformers.CodeLlamaTokenizer&amp;title=CodeLlama&amp;referrer=&amp;muid=38397bf3-d1df-433f-a1ab-3a999964eeba83e258&amp;sid=7a2cecc6-6b9a-4e4a-88b4-4bd8a189a43fe6315f&amp;version=6&amp;preview=false" aria-hidden="true" tabindex="-1" style="border: none !important; margin: 0px !important; padding: 0px !important; width: 1px !important; min-width: 100% !important; overflow: hidden !important; display: block !important; visibility: hidden !important; position: fixed !important; height: 1px !important; pointer-events: none !important; user-select: none !important;"></iframe></body></html>
2023-10-05T13:33:52.046Z
https://huggingface.co/docs/transformers/v4.34.0/en/main_classes/Deepspeed
The documentation page MAIN\_CLASSES/DEEPSPEED doesn’t exist in v4.34.0, but exists on the main version. Click [here](/docs/transformers/main/en/main_classes/Deepspeed) to redirect to the main version of the documentation.
2023-10-05T13:33:52.286Z
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/model_doc/ctrl#transformers.CTRLModel.forward
The documentation page MODEL\_DOC/MODEL\_DOC/CTRL doesn’t exist in v4.34.0, but exists on the main version. Click [here](/docs/transformers/main/en/model_doc/model_doc/ctrl) to redirect to the main version of the documentation.
2023-10-05T13:33:52.735Z
Translation
https://huggingface.co/docs/transformers/v4.34.0/en/tasks/translation
# Translation

Translation converts a sequence of text from one language to another. It is one of several tasks you can formulate as a sequence-to-sequence problem, a powerful framework for returning some output from an input, like translation or summarization. Translation systems are commonly used to translate text between different languages, but they can also be used for speech, or for some combination of the two, like text-to-speech or speech-to-text.

This guide will show you how to:

1. Finetune [T5](https://huggingface.co/t5-small) on the English-French subset of the [OPUS Books](https://huggingface.co/datasets/opus_books) dataset to translate English text to French.
2. Use your finetuned model for inference.

The task illustrated in this tutorial is supported by the following model architectures: [BART](../model_doc/bart), [BigBird-Pegasus](../model_doc/bigbird_pegasus), [Blenderbot](../model_doc/blenderbot), [BlenderbotSmall](../model_doc/blenderbot-small), [Encoder decoder](../model_doc/encoder-decoder), [FairSeq Machine-Translation](../model_doc/fsmt), [GPTSAN-japanese](../model_doc/gptsan-japanese), [LED](../model_doc/led), [LongT5](../model_doc/longt5), [M2M100](../model_doc/m2m_100), [Marian](../model_doc/marian), [mBART](../model_doc/mbart), [MT5](../model_doc/mt5), [MVP](../model_doc/mvp), [NLLB](../model_doc/nllb), [NLLB-MOE](../model_doc/nllb-moe), [Pegasus](../model_doc/pegasus), [PEGASUS-X](../model_doc/pegasus_x), [PLBart](../model_doc/plbart), [ProphetNet](../model_doc/prophetnet), [SwitchTransformers](../model_doc/switch_transformers), [T5](../model_doc/t5), [UMT5](../model_doc/umt5), [XLM-ProphetNet](../model_doc/xlm-prophetnet)

Before you begin, make sure you have all the necessary libraries installed:

```
pip install transformers datasets evaluate sacrebleu
```

We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:

```
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```

## Load OPUS Books dataset

Start by loading the English-French subset of the [OPUS Books](https://huggingface.co/datasets/opus_books) dataset from the 🤗 Datasets library:

```
>>> from datasets import load_dataset

>>> books = load_dataset("opus_books", "en-fr")
```

Split the dataset into a train and test set with the [train\_test\_split](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.train_test_split) method:

```
>>> books = books["train"].train_test_split(test_size=0.2)
```

Then take a look at an example:

```
>>> books["train"][0]
{'id': '90560',
 'translation': {'en': 'But this lofty plateau measured only a few fathoms, and soon we reentered Our Element.',
  'fr': 'Mais ce plateau élevé ne mesurait que quelques toises, et bientôt nous fûmes rentrés dans notre élément.'}}
```

`translation`: an English and French translation of the text.

## Preprocess

The next step is to load a T5 tokenizer to process the English-French language pairs:

```
>>> from transformers import AutoTokenizer

>>> checkpoint = "t5-small"
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
```

The preprocessing function you want to create needs to:

1. Prefix the input with a prompt so T5 knows this is a translation task. Some models capable of multiple NLP tasks require prompting for specific tasks.
2. Tokenize the input (English) and target (French) separately because you can’t tokenize French text with a tokenizer pretrained on an English vocabulary.
3. Truncate sequences to be no longer than the maximum length set by the `max_length` parameter.

```
>>> source_lang = "en"
>>> target_lang = "fr"
>>> prefix = "translate English to French: "


>>> def preprocess_function(examples):
...     inputs = [prefix + example[source_lang] for example in examples["translation"]]
...     targets = [example[target_lang] for example in examples["translation"]]
...     model_inputs = tokenizer(inputs, text_target=targets, max_length=128, truncation=True)
...     return model_inputs
```

To apply the preprocessing function over the entire dataset, use the 🤗 Datasets [map](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.map) method. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once:

```
>>> tokenized_books = books.map(preprocess_function, batched=True)
```

Now create a batch of examples using [DataCollatorForSeq2Seq](/docs/transformers/v4.34.0/en/main_classes/data_collator#transformers.DataCollatorForSeq2Seq). It’s more efficient to _dynamically pad_ the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length.

```
>>> from transformers import DataCollatorForSeq2Seq

>>> data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint)
```

If you are working in TensorFlow, set `return_tensors="tf"` instead:

```
>>> from transformers import DataCollatorForSeq2Seq

>>> data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint, return_tensors="tf")
```

## Evaluate

Including a metric during training is often helpful for evaluating your model’s performance. You can quickly load an evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load the [SacreBLEU](https://huggingface.co/spaces/evaluate-metric/sacrebleu) metric (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric):

```
>>> import evaluate

>>> metric = evaluate.load("sacrebleu")
```

Then create a function that passes your predictions and labels to [compute](https://huggingface.co/docs/evaluate/v0.4.0/en/package_reference/main_classes#evaluate.EvaluationModule.compute) to calculate the SacreBLEU score:

```
>>> import numpy as np


>>> def postprocess_text(preds, labels):
...     preds = [pred.strip() for pred in preds]
...     labels = [[label.strip()] for label in labels]
...     return preds, labels


>>> def compute_metrics(eval_preds):
...     preds, labels = eval_preds
...     if isinstance(preds, tuple):
...         preds = preds[0]
...     decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
...     labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
...     decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
...     decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels)
...     result = metric.compute(predictions=decoded_preds, references=decoded_labels)
...     result = {"bleu": result["score"]}
...     prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds]
...     result["gen_len"] = np.mean(prediction_lens)
...     result = {k: round(v, 4) for k, v in result.items()}
...     return result
```

Your `compute_metrics` function is ready to go now, and you’ll return to it when you set up your training.
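As a quick sanity check before training (a minimal sketch, not part of the original guide), you can call `compute_metrics` directly on a tiny fake batch of token ids. Reusing the same ids as both predictions and labels should give a near-perfect BLEU score, and the result is a dictionary with the `bleu` and `gen_len` keys that will be logged during evaluation:

```
>>> import numpy as np

>>> # Tokenize two short French "references" and reuse them as fake "predictions".
>>> fake = tokenizer(["Bonjour le monde.", "Merci beaucoup."], padding=True)
>>> preds = np.array(fake.input_ids)
>>> labels = np.array(fake.input_ids)

>>> # Identical predictions and labels -> BLEU close to 100, returned as {"bleu": ..., "gen_len": ...}.
>>> compute_metrics((preds, labels))
```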
## Train

If you aren’t familiar with finetuning a model with the [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer), take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!

You’re ready to start training your model now! Load T5 with [AutoModelForSeq2SeqLM](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoModelForSeq2SeqLM):

```
>>> from transformers import AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments, Seq2SeqTrainer

>>> model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
```

At this point, only three steps remain:

1. Define your training hyperparameters in [Seq2SeqTrainingArguments](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Seq2SeqTrainingArguments). The only required parameter is `output_dir`, which specifies where to save your model. You’ll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) will evaluate the SacreBLEU metric and save the training checkpoint.
2. Pass the training arguments to [Seq2SeqTrainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Seq2SeqTrainer) along with the model, dataset, tokenizer, data collator, and `compute_metrics` function.
3. Call [train()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.train) to finetune your model.

```
>>> training_args = Seq2SeqTrainingArguments(
...     output_dir="my_awesome_opus_books_model",
...     evaluation_strategy="epoch",
...     learning_rate=2e-5,
...     per_device_train_batch_size=16,
...     per_device_eval_batch_size=16,
...     weight_decay=0.01,
...     save_total_limit=3,
...     num_train_epochs=2,
...     predict_with_generate=True,
...     fp16=True,
...     push_to_hub=True,
... )

>>> trainer = Seq2SeqTrainer(
...     model=model,
...     args=training_args,
...     train_dataset=tokenized_books["train"],
...     eval_dataset=tokenized_books["test"],
...     tokenizer=tokenizer,
...     data_collator=data_collator,
...     compute_metrics=compute_metrics,
... )

>>> trainer.train()
```

Once training is completed, share your model on the Hub with the [push\_to\_hub()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.push_to_hub) method so everyone can use your model:

```
>>> trainer.push_to_hub()
```

If you aren’t familiar with finetuning a model with Keras, take a look at the basic tutorial [here](../training#train-a-tensorflow-model-with-keras)!

To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:

```
>>> from transformers import AdamWeightDecay

>>> optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01)
```

Then you can load T5 with [TFAutoModelForSeq2SeqLM](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.TFAutoModelForSeq2SeqLM):

```
>>> from transformers import TFAutoModelForSeq2SeqLM

>>> model = TFAutoModelForSeq2SeqLM.from_pretrained(checkpoint)
```

Convert your datasets to the `tf.data.Dataset` format with [prepare\_tf\_dataset()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel.prepare_tf_dataset):

```
>>> tf_train_set = model.prepare_tf_dataset(
...     tokenized_books["train"],
...     shuffle=True,
...     batch_size=16,
...     collate_fn=data_collator,
... )

>>> tf_test_set = model.prepare_tf_dataset(
...     tokenized_books["test"],
...     shuffle=False,
...     batch_size=16,
...     collate_fn=data_collator,
... )
```

Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method). Note that Transformers models all have a default task-relevant loss function, so you don’t need to specify one unless you want to:

```
>>> import tensorflow as tf

>>> model.compile(optimizer=optimizer)
```

The last two things to set up before you start training are to compute the SacreBLEU metric from the predictions, and to provide a way to push your model to the Hub. Both are done by using [Keras callbacks](../main_classes/keras_callbacks).

Pass your `compute_metrics` function to [KerasMetricCallback](/docs/transformers/v4.34.0/en/main_classes/keras_callbacks#transformers.KerasMetricCallback):

```
>>> from transformers.keras_callbacks import KerasMetricCallback

>>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_test_set)
```

Specify where to push your model and tokenizer in the [PushToHubCallback](/docs/transformers/v4.34.0/en/main_classes/keras_callbacks#transformers.PushToHubCallback):

```
>>> from transformers.keras_callbacks import PushToHubCallback

>>> push_to_hub_callback = PushToHubCallback(
...     output_dir="my_awesome_opus_books_model",
...     tokenizer=tokenizer,
... )
```

Then bundle your callbacks together:

```
>>> callbacks = [metric_callback, push_to_hub_callback]
```

Finally, you’re ready to start training your model! Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training and validation datasets, the number of epochs, and your callbacks to finetune the model:

```
>>> model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=callbacks)
```

Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!

For a more in-depth example of how to finetune a model for translation, take a look at the corresponding [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation.ipynb) or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation-tf.ipynb).

## Inference

Great, now that you’ve finetuned a model, you can use it for inference!

Come up with some text you’d like to translate to another language. For T5, you need to prefix your input depending on the task you’re working on. For translation from English to French, you should prefix your input as shown below:

```
>>> text = "translate English to French: Legumes share resources with nitrogen-fixing bacteria."
```

The simplest way to try out your finetuned model for inference is to use it in a [pipeline()](/docs/transformers/v4.34.0/en/main_classes/pipelines#transformers.pipeline). Instantiate a `pipeline` for translation with your model, and pass your text to it:

```
>>> from transformers import pipeline

>>> translator = pipeline("translation", model="my_awesome_opus_books_model")
>>> translator(text)
[{'translation_text': 'Legumes partagent des ressources avec des bactéries azotantes.'}]
```

You can also manually replicate the results of the `pipeline` if you’d like:

Tokenize the text and return the `input_ids` as PyTorch tensors:

```
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_opus_books_model")
>>> inputs = tokenizer(text, return_tensors="pt").input_ids
```

Use the [generate()](/docs/transformers/v4.34.0/en/main_classes/text_generation#transformers.GenerationMixin.generate) method to create the translation. For more details about the different text generation strategies and parameters for controlling generation, check out the [Text Generation](../main_classes/text_generation) API.

```
>>> from transformers import AutoModelForSeq2SeqLM

>>> model = AutoModelForSeq2SeqLM.from_pretrained("my_awesome_opus_books_model")
>>> outputs = model.generate(inputs, max_new_tokens=40, do_sample=True, top_k=30, top_p=0.95)
```

Decode the generated token ids back into text:

```
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
'Les lignées partagent des ressources avec des bactéries enfixant l'azote.'
```

Tokenize the text and return the `input_ids` as TensorFlow tensors:

```
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_opus_books_model")
>>> inputs = tokenizer(text, return_tensors="tf").input_ids
```

Use the [generate()](/docs/transformers/v4.34.0/en/main_classes/text_generation#transformers.TFGenerationMixin.generate) method to create the translation. For more details about the different text generation strategies and parameters for controlling generation, check out the [Text Generation](../main_classes/text_generation) API.

```
>>> from transformers import TFAutoModelForSeq2SeqLM

>>> model = TFAutoModelForSeq2SeqLM.from_pretrained("my_awesome_opus_books_model")
>>> outputs = model.generate(inputs, max_new_tokens=40, do_sample=True, top_k=30, top_p=0.95)
```

Decode the generated token ids back into text:

```
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
'Les lugumes partagent les ressources avec des bactéries fixatrices d'azote.'
```
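Because the examples above sample with `do_sample=True`, the generated translation can vary from run to run. As a small optional sketch (not part of the original guide), you can make the output reproducible by fixing the random seed with `set_seed` before calling `generate`, or switch to greedy decoding by leaving out `do_sample=True`:

```
>>> from transformers import set_seed

>>> set_seed(42)  # seeds the Python, NumPy, and framework RNGs so sampling is reproducible
>>> outputs = model.generate(inputs, max_new_tokens=40, do_sample=True, top_k=30, top_p=0.95)
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
```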
Small&quot;,&quot;id&quot;:&quot;model_doc/blenderbot-small&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot-small&quot;},{&quot;title&quot;:&quot;BLOOM&quot;,&quot;id&quot;:&quot;model_doc/bloom&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bloom&quot;},{&quot;title&quot;:&quot;BORT&quot;,&quot;id&quot;:&quot;model_doc/bort&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bort&quot;},{&quot;title&quot;:&quot;ByT5&quot;,&quot;id&quot;:&quot;model_doc/byt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/byt5&quot;},{&quot;title&quot;:&quot;CamemBERT&quot;,&quot;id&quot;:&quot;model_doc/camembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/camembert&quot;},{&quot;title&quot;:&quot;CANINE&quot;,&quot;id&quot;:&quot;model_doc/canine&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/canine&quot;},{&quot;title&quot;:&quot;CodeGen&quot;,&quot;id&quot;:&quot;model_doc/codegen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/codegen&quot;},{&quot;title&quot;:&quot;CodeLlama&quot;,&quot;id&quot;:&quot;model_doc/code_llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/code_llama&quot;},{&quot;title&quot;:&quot;ConvBERT&quot;,&quot;id&quot;:&quot;model_doc/convbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convbert&quot;},{&quot;title&quot;:&quot;CPM&quot;,&quot;id&quot;:&quot;model_doc/cpm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpm&quot;},{&quot;title&quot;:&quot;CPMANT&quot;,&quot;id&quot;:&quot;model_doc/cpmant&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpmant&quot;},{&quot;title&quot;:&quot;CTRL&quot;,&quot;id&quot;:&quot;model_doc/ctrl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ctrl&quot;},{&quot;title&quot;:&quot;DeBERTa&quot;,&quot;id&quot;:&quot;model_doc/deberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta&quot;},{&quot;title&quot;:&quot;DeBERTa-v2&quot;,&quot;id&quot;:&quot;model_doc/deberta-v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta-v2&quot;},{&quot;title&quot;:&quot;DialoGPT&quot;,&quot;id&quot;:&quot;model_doc/dialogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dialogpt&quot;},{&quot;title&quot;:&quot;DistilBERT&quot;,&quot;id&quot;:&quot;model_doc/distilbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/distilbert&quot;},{&quot;title&quot;:&quot;DPR&quot;,&quot;id&quot;:&quot;model_doc/dpr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpr&quot;},{&quot;title&quot;:&quot;ELECTRA&quot;,&quot;id&quot;:&quot;model_doc/electra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/electra&quot;},{&quot;title&quot;:&quot;Encoder Decoder 
Models&quot;,&quot;id&quot;:&quot;model_doc/encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encoder-decoder&quot;},{&quot;title&quot;:&quot;ERNIE&quot;,&quot;id&quot;:&quot;model_doc/ernie&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie&quot;},{&quot;title&quot;:&quot;ErnieM&quot;,&quot;id&quot;:&quot;model_doc/ernie_m&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie_m&quot;},{&quot;title&quot;:&quot;ESM&quot;,&quot;id&quot;:&quot;model_doc/esm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/esm&quot;},{&quot;title&quot;:&quot;Falcon&quot;,&quot;id&quot;:&quot;model_doc/falcon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/falcon&quot;},{&quot;title&quot;:&quot;FLAN-T5&quot;,&quot;id&quot;:&quot;model_doc/flan-t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-t5&quot;},{&quot;title&quot;:&quot;FLAN-UL2&quot;,&quot;id&quot;:&quot;model_doc/flan-ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-ul2&quot;},{&quot;title&quot;:&quot;FlauBERT&quot;,&quot;id&quot;:&quot;model_doc/flaubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flaubert&quot;},{&quot;title&quot;:&quot;FNet&quot;,&quot;id&quot;:&quot;model_doc/fnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fnet&quot;},{&quot;title&quot;:&quot;FSMT&quot;,&quot;id&quot;:&quot;model_doc/fsmt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fsmt&quot;},{&quot;title&quot;:&quot;Funnel Transformer&quot;,&quot;id&quot;:&quot;model_doc/funnel&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/funnel&quot;},{&quot;title&quot;:&quot;GPT&quot;,&quot;id&quot;:&quot;model_doc/openai-gpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/openai-gpt&quot;},{&quot;title&quot;:&quot;GPT Neo&quot;,&quot;id&quot;:&quot;model_doc/gpt_neo&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neo&quot;},{&quot;title&quot;:&quot;GPT NeoX&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox&quot;},{&quot;title&quot;:&quot;GPT NeoX Japanese&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox_japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese&quot;},{&quot;title&quot;:&quot;GPT-J&quot;,&quot;id&quot;:&quot;model_doc/gptj&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptj&quot;},{&quot;title&quot;:&quot;GPT2&quot;,&quot;id&quot;:&quot;model_doc/gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt2&quot;},{&quot;title&quot;:&quot;GPTBigCode&quot;,&quot;id&quot;:&quot;model_doc/gpt_bigcode&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode&quot;},{&quot;title&quot;:&quot;GPTSAN 
Japanese&quot;,&quot;id&quot;:&quot;model_doc/gptsan-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese&quot;},{&quot;title&quot;:&quot;GPTSw3&quot;,&quot;id&quot;:&quot;model_doc/gpt-sw3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt-sw3&quot;},{&quot;title&quot;:&quot;HerBERT&quot;,&quot;id&quot;:&quot;model_doc/herbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/herbert&quot;},{&quot;title&quot;:&quot;I-BERT&quot;,&quot;id&quot;:&quot;model_doc/ibert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ibert&quot;},{&quot;title&quot;:&quot;Jukebox&quot;,&quot;id&quot;:&quot;model_doc/jukebox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/jukebox&quot;},{&quot;title&quot;:&quot;LED&quot;,&quot;id&quot;:&quot;model_doc/led&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/led&quot;},{&quot;title&quot;:&quot;LLaMA&quot;,&quot;id&quot;:&quot;model_doc/llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama&quot;},{&quot;title&quot;:&quot;Llama2&quot;,&quot;id&quot;:&quot;model_doc/llama2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama2&quot;},{&quot;title&quot;:&quot;Longformer&quot;,&quot;id&quot;:&quot;model_doc/longformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longformer&quot;},{&quot;title&quot;:&quot;LongT5&quot;,&quot;id&quot;:&quot;model_doc/longt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longt5&quot;},{&quot;title&quot;:&quot;LUKE&quot;,&quot;id&quot;:&quot;model_doc/luke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/luke&quot;},{&quot;title&quot;:&quot;M2M100&quot;,&quot;id&quot;:&quot;model_doc/m2m_100&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/m2m_100&quot;},{&quot;title&quot;:&quot;MarianMT&quot;,&quot;id&quot;:&quot;model_doc/marian&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/marian&quot;},{&quot;title&quot;:&quot;MarkupLM&quot;,&quot;id&quot;:&quot;model_doc/markuplm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/markuplm&quot;},{&quot;title&quot;:&quot;MBart and 
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot;PLBart&quot;,&quot;id&quot;
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/
model_doc/wavlm&quot;},{&quot;title&quot;:&quot;Whisper&quot;,&quot;id&quot;:&quot;model_doc/whisper&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/whisper&quot;},{&quot;title&quot;:&quot;XLS-R&quot;,&quot;id&quot;:&quot;model_doc/xls_r&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xls_r&quot;},{&quot;title&quot;:&quot;XLSR-Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/xlsr_wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2&quot;}]},{&quot;title&quot;:&quot;Multimodal models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALIGN&quot;,&quot;id&quot;:&quot;model_doc/align&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/align&quot;},{&quot;title&quot;:&quot;AltCLIP&quot;,&quot;id&quot;:&quot;model_doc/altclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/altclip&quot;},{&quot;title&quot;:&quot;BLIP&quot;,&quot;id&quot;:&quot;model_doc/blip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip&quot;},{&quot;title&quot;:&quot;BLIP-2&quot;,&quot;id&quot;:&quot;model_doc/blip-2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip-2&quot;},{&quot;title&quot;:&quot;BridgeTower&quot;,&quot;id&quot;:&quot;model_doc/bridgetower&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bridgetower&quot;},{&quot;title&quot;:&quot;BROS&quot;,&quot;id&quot;:&quot;model_doc/bros&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bros&quot;},{&quot;title&quot;:&quot;Chinese-CLIP&quot;,&quot;id&quot;:&quot;model_doc/chinese_clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/chinese_clip&quot;},{&quot;title&quot;:&quot;CLIP&quot;,&quot;id&quot;:&quot;model_doc/clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clip&quot;},{&quot;title&quot;:&quot;CLIPSeg&quot;,&quot;id&quot;:&quot;model_doc/clipseg&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clipseg&quot;},{&quot;title&quot;:&quot;Data2Vec&quot;,&quot;id&quot;:&quot;model_doc/data2vec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/data2vec&quot;},{&quot;title&quot;:&quot;DePlot&quot;,&quot;id&quot;:&quot;model_doc/deplot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deplot&quot;},{&quot;title&quot;:&quot;Donut&quot;,&quot;id&quot;:&quot;model_doc/donut&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/donut&quot;},{&quot;title&quot;:&quot;FLAVA&quot;,&quot;id&quot;:&quot;model_doc/flava&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flava&quot;},{&quot;title&quot;:&quot;GIT&quot;,&quot;id&quot;:&quot;model_doc/git&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/git&quot;},{&quot;title&quot;:&quot;GroupViT&quot;,&quot;id&quot;:&quot;model_doc/groupvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/groupvit&quot;},{&quot;title&quot;:&quot;IDEFICS&quot;,&quot;id&quot;:&quot;model_doc/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/idefics&quot;},{&quot;title&quot;:&quot;InstructBLIP&quot;,&quot;id&quot;:&quot;model_doc/instructblip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/instructblip&quot;},{&quot;title&quot;:&quot;LayoutLM&quot;,&quot;id&quot;:&quot;model_doc/layoutlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlm&quot;},{&quot;title&quot;:&quot;LayoutLMV2&quot;,&quot;i
d&quot;:&quot;model_doc/layoutlmv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv2&quot;},{&quot;title&quot;:&quot;LayoutLMV3&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv3&quot;},{&quot;title&quot;:&quot;LayoutXLM&quot;,&quot;id&quot;:&quot;model_doc/layoutxlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutxlm&quot;},{&quot;title&quot;:&quot;LiLT&quot;,&quot;id&quot;:&quot;model_doc/lilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lilt&quot;},{&quot;title&quot;:&quot;LXMERT&quot;,&quot;id&quot;:&quot;model_doc/lxmert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lxmert&quot;},{&quot;title&quot;:&quot;MatCha&quot;,&quot;id&quot;:&quot;model_doc/matcha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/matcha&quot;},{&quot;title&quot;:&quot;MGP-STR&quot;,&quot;id&quot;:&quot;model_doc/mgp-str&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mgp-str&quot;},{&quot;title&quot;:&quot;Nougat&quot;,&quot;id&quot;:&quot;model_doc/nougat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nougat&quot;},{&quot;title&quot;:&quot;OneFormer&quot;,&quot;id&quot;:&quot;model_doc/oneformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/oneformer&quot;},{&quot;title&quot;:&quot;OWL-ViT&quot;,&quot;id&quot;:&quot;model_doc/owlvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/owlvit&quot;},{&quot;title&quot;:&quot;Perceiver&quot;,&quot;id&quot;:&quot;model_doc/perceiver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/perceiver&quot;},{&quot;title&quot;:&quot;Pix2Struct&quot;,&quot;id&quot;:&quot;model_doc/pix2struct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pix2struct&quot;},{&quot;title&quot;:&quot;Segment Anything&quot;,&quot;id&quot;:&quot;model_doc/sam&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sam&quot;},{&quot;title&quot;:&quot;Speech Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/speech-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder&quot;},{&quot;title&quot;:&quot;TAPAS&quot;,&quot;id&quot;:&quot;model_doc/tapas&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapas&quot;},{&quot;title&quot;:&quot;TrOCR&quot;,&quot;id&quot;:&quot;model_doc/trocr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trocr&quot;},{&quot;title&quot;:&quot;TVLT&quot;,&quot;id&quot;:&quot;model_doc/tvlt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tvlt&quot;},{&quot;title&quot;:&quot;ViLT&quot;,&quot;id&quot;:&quot;model_doc/vilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vilt&quot;},{&quot;title&quot;:&quot;Vision Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/vision-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder&quot;},{&quot;title&quot;:&quot;Vision Text Dual 
Encoder&quot;,&quot;id&quot;:&quot;model_doc/vision-text-dual-encoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder&quot;},{&quot;title&quot;:&quot;VisualBERT&quot;,&quot;id&quot;:&quot;model_doc/visual_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/visual_bert&quot;},{&quot;title&quot;:&quot;X-CLIP&quot;,&quot;id&quot;:&quot;model_doc/xclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xclip&quot;}]},{&quot;title&quot;:&quot;Reinforcement learning models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Decision Transformer&quot;,&quot;id&quot;:&quot;model_doc/decision_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/decision_transformer&quot;},{&quot;title&quot;:&quot;Trajectory Transformer&quot;,&quot;id&quot;:&quot;model_doc/trajectory_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer&quot;}]},{&quot;title&quot;:&quot;Time series models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Autoformer&quot;,&quot;id&quot;:&quot;model_doc/autoformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/autoformer&quot;},{&quot;title&quot;:&quot;Informer&quot;,&quot;id&quot;:&quot;model_doc/informer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/informer&quot;},{&quot;title&quot;:&quot;Time Series Transformer&quot;,&quot;id&quot;:&quot;model_doc/time_series_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/time_series_transformer&quot;}]},{&quot;title&quot;:&quot;Graph models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Graphormer&quot;,&quot;id&quot;:&quot;model_doc/graphormer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/graphormer&quot;}]}]},{&quot;title&quot;:&quot;Internal Helpers&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Custom Layers and Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/modeling_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/modeling_utils&quot;},{&quot;title&quot;:&quot;Utilities for pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/pipelines_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/pipelines_utils&quot;},{&quot;title&quot;:&quot;Utilities for Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/tokenization_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/tokenization_utils&quot;},{&quot;title&quot;:&quot;Utilities for Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/trainer_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/trainer_utils&quot;},{&quot;title&quot;:&quot;Utilities for Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/generation_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/generation_utils&quot;},{&quot;title&quot;:&quot;Utilities for Image Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/image_processing_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/image_processing_utils&quot;},{&quot;title&quot;:&quot;Utilities for Audio 
processing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/audio_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/audio_utils&quot;},{&quot;title&quot;:&quot;General Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/file_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/file_utils&quot;},{&quot;title&quot;:&quot;Utilities for Time Series&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/time_series_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/time_series_utils&quot;}]}]}],&quot;chapterId&quot;:&quot;tasks/translation&quot;,&quot;docType&quot;:&quot;docs&quot;,&quot;isLoggedIn&quot;:false,&quot;lang&quot;:&quot;en&quot;,&quot;langs&quot;:[&quot;de&quot;,&quot;en&quot;,&quot;es&quot;,&quot;fr&quot;,&quot;it&quot;,&quot;ko&quot;,&quot;pt&quot;,&quot;zh&quot;],&quot;library&quot;:&quot;transformers&quot;,&quot;theme&quot;:&quot;light&quot;,&quot;version&quot;:&quot;v4.34.0&quot;,&quot;versions&quot;:[{&quot;version&quot;:&quot;main&quot;},{&quot;version&quot;:&quot;v4.34.0&quot;},{&quot;version&quot;:&quot;v4.33.3&quot;},{&quot;version&quot;:&quot;v4.33.2&quot;},{&quot;version&quot;:&quot;v4.33.0&quot;},{&quot;version&quot;:&quot;v4.32.1&quot;},{&quot;version&quot;:&quot;v4.32.0&quot;},{&quot;version&quot;:&quot;v4.31.0&quot;},{&quot;version&quot;:&quot;v4.30.0&quot;},{&quot;version&quot;:&quot;v4.29.1&quot;},{&quot;version&quot;:&quot;v4.29.0&quot;},{&quot;version&quot;:&quot;v4.28.1&quot;},{&quot;version&quot;:&quot;v4.28.0&quot;},{&quot;version&quot;:&quot;v4.27.2&quot;},{&quot;version&quot;:&quot;v4.27.1&quot;},{&quot;version&quot;:&quot;v4.27.0&quot;},{&quot;version&quot;:&quot;v4.26.1&quot;},{&quot;version&quot;:&quot;v4.26.0&quot;},{&quot;version&quot;:&quot;v4.25.1&quot;},{&quot;version&quot;:&quot;v4.24.0&quot;},{&quot;version&quot;:&quot;v4.23.1&quot;},{&quot;version&quot;:&quot;v4.23.0&quot;},{&quot;version&quot;:&quot;v4.22.2&quot;},{&quot;version&quot;:&quot;v4.22.1&quot;},{&quot;version&quot;:&quot;v4.22.0&quot;},{&quot;version&quot;:&quot;v4.21.3&quot;},{&quot;version&quot;:&quot;v4.21.2&quot;},{&quot;version&quot;:&quot;v4.21.1&quot;},{&quot;version&quot;:&quot;v4.21.0&quot;},{&quot;version&quot;:&quot;v4.20.1&quot;},{&quot;version&quot;:&quot;v4.20.0&quot;},{&quot;version&quot;:&quot;v4.19.4&quot;},{&quot;version&quot;:&quot;v4.19.3&quot;},{&quot;version&quot;:&quot;v4.19.2&quot;},{&quot;version&quot;:&quot;v4.19.0&quot;},{&quot;version&quot;:&quot;v4.18.0&quot;},{&quot;version&quot;:&quot;v4.17.0&quot;},{&quot;version&quot;:&quot;v4.16.2&quot;},{&quot;version&quot;:&quot;v4.16.1&quot;},{&quot;version&quot;:&quot;v4.16.0&quot;},{&quot;version&quot;:&quot;v4.15.0&quot;},{&quot;version&quot;:&quot;v4.14.1&quot;},{&quot;version&quot;:&quot;v4.13.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.5&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.4&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.10.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4
.10.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.10.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.0.0&quot;},{&quot;version&quot;:&quot;doc
-builder-html&quot;}],&quot;title&quot;:&quot;Translation&quot;}" data-target="SideMenu"> <div class="z-2 w-full flex-none lg:block lg:h-screen lg:w-[270px] 2xl:w-[300px] false"><div class="shadow-alternate flex h-16 w-full items-center rounded-b-xl border-b bg-white text-lg leading-tight lg:hidden"><div class="flex flex-1 cursor-pointer flex-col justify-center self-stretch pl-6"><p class="text-sm text-gray-400 first-letter:capitalize">Transformers documentation</p> <div class="flex items-center"><p class="font-semibold">Translation</p> <svg class="text-xl false" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></div></div> <button class="hover:shadow-alternate group ml-auto mr-6 inline-flex flex-none cursor-pointer rounded-xl border p-2"><svg class="text-gray-500 group-hover:text-gray-700" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg></button></div> <div class="hidden h-32 flex-col justify-between border-r border-b bg-white bg-gradient-to-r p-4 lg:flex from-orange-50 to-white dark:from-gray-900 dark:to-gray-950"><div class="relative "><button class=" " type="button"><h1 class="flex items-center text-lg font-bold leading-tight first-letter:capitalize"><div class="mr-1.5 h-1.5 w-1.5 rounded-full bg-orange-500 flex-none"></div> Transformers <span><svg class="opacity-70 " xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></span></h1> </button> </div> <button class="shadow-alternate flex w-full items-center rounded-full border bg-white px-2 py-1 text-left text-sm text-gray-400 ring-indigo-200 hover:bg-indigo-50 hover:ring-2 dark:border-gray-700 dark:ring-yellow-600 dark:hover:bg-gray-900 dark:hover:text-yellow-500"><svg class="flex-none mr-1.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg> <div>Search documentation</div> <span class="ml-auto rounded border border-gray-200 bg-gray-100 px-0.5 text-xs dark:border-gray-800 dark:bg-gray-800"><kbd class="font-sans">⌘K</kbd></span></button> <div class="flex items-center"><select class="form-input mr-1 !mt-0 !w-20 rounded !border border-gray-200 p-1 text-xs uppercase dark:!text-gray-400"><option value="0">main</option><option value="1">v4.34.0</option><option value="2">v4.33.3</option><option value="3">v4.32.1</option><option value="4">v4.31.0</option><option value="5">v4.30.0</option><option value="6">v4.29.1</option><option value="7">v4.28.1</option><option value="8">v4.27.2</option><option value="9">v4.26.1</option><option value="10">v4.25.1</option><option 
value="11">v4.24.0</option><option value="12">v4.23.1</option><option value="13">v4.22.2</option><option value="14">v4.21.3</option><option value="15">v4.20.1</option><option value="16">v4.19.4</option><option value="17">v4.18.0</option><option value="18">v4.17.0</option><option value="19">v4.16.2</option><option value="20">v4.15.0</option><option value="21">v4.14.1</option><option value="22">v4.13.0</option><option value="23">v4.12.5</option><option value="24">v4.11.3</option><option value="25">v4.10.1</option><option value="26">v4.9.2</option><option value="27">v4.8.2</option><option value="28">v4.7.0</option><option value="29">v4.6.0</option><option value="30">v4.5.1</option><option value="31">v4.4.2</option><option value="32">v4.3.3</option><option value="33">v4.2.2</option><option value="34">v4.1.1</option><option value="35">v4.0.1</option><option value="36">v3.5.1</option><option value="37">v3.4.0</option><option value="38">v3.3.1</option><option value="39">v3.2.0</option><option value="40">v3.1.0</option><option value="41">v3.0.2</option><option value="42">v2.11.0</option><option value="43">v2.10.0</option><option value="44">v2.9.1</option><option value="45">v2.8.0</option><option value="46">v2.7.0</option><option value="47">v2.6.0</option><option value="48">v2.5.1</option><option value="49">v2.4.1</option><option value="50">v2.3.0</option><option value="51">v2.2.2</option><option value="52">v2.1.1</option><option value="53">v2.0.0</option><option value="54">v1.2.0</option><option value="55">v1.1.0</option><option value="56">v1.0.0</option><option value="57">doc-builder-html</option></select> <select class="form-input mr-1 rounded border-gray-200 p-1 text-xs dark:!text-gray-400 !w-12 !mt-0 !border"><option value="de">DE</option><option value="en">EN</option><option value="es">ES</option><option value="fr">FR</option><option value="it">IT</option><option value="ko">KO</option><option value="pt">PT</option><option value="zh">ZH</option></select> <div class="relative inline-block"><button class="rounded-full border border-gray-100 py-1 pl-2 pr-0.5 flex items-center text-sm text-gray-500 bg-white hover:bg-yellow-50 hover:border-yellow-200 dark:hover:bg-gray-800 dark:hover:border-gray-950 " type="button"><svg class="mr-1.5 text-yellow-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24" fill="currentColor"><path d="M6.05 4.14l-.39-.39a.993.993 0 0 0-1.4 0l-.01.01a.984.984 0 0 0 0 1.4l.39.39c.39.39 1.01.39 1.4 0l.01-.01a.984.984 0 0 0 0-1.4zM3.01 10.5H1.99c-.55 0-.99.44-.99.99v.01c0 .55.44.99.99.99H3c.56.01 1-.43 1-.98v-.01c0-.56-.44-1-.99-1zm9-9.95H12c-.56 0-1 .44-1 .99v.96c0 .55.44.99.99.99H12c.56.01 1-.43 1-.98v-.97c0-.55-.44-.99-.99-.99zm7.74 3.21c-.39-.39-1.02-.39-1.41-.01l-.39.39a.984.984 0 0 0 0 1.4l.01.01c.39.39 1.02.39 1.4 0l.39-.39a.984.984 0 0 0 0-1.4zm-1.81 15.1l.39.39a.996.996 0 1 0 1.41-1.41l-.39-.39a.993.993 0 0 0-1.4 0c-.4.4-.4 1.02-.01 1.41zM20 11.49v.01c0 .55.44.99.99.99H22c.55 0 .99-.44.99-.99v-.01c0-.55-.44-.99-.99-.99h-1.01c-.55 0-.99.44-.99.99zM12 5.5c-3.31 0-6 2.69-6 6s2.69 6 6 6s6-2.69 6-6s-2.69-6-6-6zm-.01 16.95H12c.55 0 .99-.44.99-.99v-.96c0-.55-.44-.99-.99-.99h-.01c-.55 0-.99.44-.99.99v.96c0 .55.44.99.99.99zm-7.74-3.21c.39.39 1.02.39 1.41 0l.39-.39a.993.993 0 0 0 0-1.4l-.01-.01a.996.996 0 0 0-1.41 0l-.39.39c-.38.4-.38 1.02.01 1.41z"></path></svg> </button> </div> <a 
href="https://github.com/huggingface/transformers" class="group ml-auto text-xs text-gray-500 hover:text-gray-700 hover:underline dark:hover:text-gray-300"><svg class="inline-block text-gray-500 group-hover:text-gray-700 dark:group-hover:text-gray-300 mr-1.5 -mt-1 w-4 h-4" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1.03em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 250"><path d="M128.001 0C57.317 0 0 57.307 0 128.001c0 56.554 36.676 104.535 87.535 121.46c6.397 1.185 8.746-2.777 8.746-6.158c0-3.052-.12-13.135-.174-23.83c-35.61 7.742-43.124-15.103-43.124-15.103c-5.823-14.795-14.213-18.73-14.213-18.73c-11.613-7.944.876-7.78.876-7.78c12.853.902 19.621 13.19 19.621 13.19c11.417 19.568 29.945 13.911 37.249 10.64c1.149-8.272 4.466-13.92 8.127-17.116c-28.431-3.236-58.318-14.212-58.318-63.258c0-13.975 5-25.394 13.188-34.358c-1.329-3.224-5.71-16.242 1.24-33.874c0 0 10.749-3.44 35.21 13.121c10.21-2.836 21.16-4.258 32.038-4.307c10.878.049 21.837 1.47 32.066 4.307c24.431-16.56 35.165-13.12 35.165-13.12c6.967 17.63 2.584 30.65 1.255 33.873c8.207 8.964 13.173 20.383 13.173 34.358c0 49.163-29.944 59.988-58.447 63.157c4.591 3.972 8.682 11.762 8.682 23.704c0 17.126-.148 30.91-.148 35.126c0 3.407 2.304 7.398 8.792 6.14C219.37 232.5 256 184.537 256 128.002C256 57.307 198.691 0 128.001 0zm-80.06 182.34c-.282.636-1.283.827-2.194.39c-.929-.417-1.45-1.284-1.15-1.922c.276-.655 1.279-.838 2.205-.399c.93.418 1.46 1.293 1.139 1.931zm6.296 5.618c-.61.566-1.804.303-2.614-.591c-.837-.892-.994-2.086-.375-2.66c.63-.566 1.787-.301 2.626.591c.838.903 1 2.088.363 2.66zm4.32 7.188c-.785.545-2.067.034-2.86-1.104c-.784-1.138-.784-2.503.017-3.05c.795-.547 2.058-.055 2.861 1.075c.782 1.157.782 2.522-.019 3.08zm7.304 8.325c-.701.774-2.196.566-3.29-.49c-1.119-1.032-1.43-2.496-.726-3.27c.71-.776 2.213-.558 3.315.49c1.11 1.03 1.45 2.505.701 3.27zm9.442 2.81c-.31 1.003-1.75 1.459-3.199 1.033c-1.448-.439-2.395-1.613-2.103-2.626c.301-1.01 1.747-1.484 3.207-1.028c1.446.436 2.396 1.602 2.095 2.622zm10.744 1.193c.036 1.055-1.193 1.93-2.715 1.95c-1.53.034-2.769-.82-2.786-1.86c0-1.065 1.202-1.932 2.733-1.958c1.522-.03 2.768.818 2.768 1.868zm10.555-.405c.182 1.03-.875 2.088-2.387 2.37c-1.485.271-2.861-.365-3.05-1.386c-.184-1.056.893-2.114 2.376-2.387c1.514-.263 2.868.356 3.061 1.403z" fill="currentColor"></path></svg> 112,792</a></div></div> <nav class="top-32 hidden lg:flex absolute bottom-0 left-0 w-full flex-col overflow-y-auto border-r px-4 pt-3 pb-16 text-[0.95rem] lg:w-[270px] 2xl:w-[300px]"> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Get started</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/index">🤗 Transformers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/quicktour">Quick tour </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 
first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/installation">Installation </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Tutorials</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pipeline_tutorial">Run inference with pipelines </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/autoclass_tutorial">Write portable code with AutoClass </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/preprocessing">Preprocess data </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/training">Fine-tune a pretrained model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/run_scripts">Train with a script </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/accelerate">Set up distributed training with 🤗 Accelerate </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/peft">Load and train adapters with 🤗 PEFT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/model_sharing">Share your model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/transformers_agents">Agents </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/llm_tutorial">Generation with LLMs </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Task Guides</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 
hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Natural Language Processing</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/tasks/sequence_classification">Text classification </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/tasks/token_classification">Token classification </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/tasks/question_answering">Question answering </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/tasks/language_modeling">Causal language modeling </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/tasks/masked_language_modeling">Masked language modeling </a><a data-sveltekit-reload="" class="rounded-xl bg-gradient-to-br from-black to-gray-900 py-1 pr-2 pl-2 text-white first:mt-1 last:mb-4 dark:from-gray-800 dark:to-gray-900 ml-4" href="/docs/transformers/v4.34.0/en/tasks/translation">Translation </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/tasks/summarization">Summarization </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/tasks/multiple_choice">Multiple choice </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Computer Vision</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 
dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Generation</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Prompting</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Developer guides</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/fast_tokenizers">Use fast tokenizers from 🤗 Tokenizers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/multilingual">Run inference with multilingual models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/create_a_model">Use model-specific APIs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/custom_models">Share a custom model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/chat_templating">Templates for chat models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/sagemaker">Run training on Amazon SageMaker </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/serialization">Export to ONNX </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tflite">Export to TFLite </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/torchscript">Export to TorchScript </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/benchmarks">Benchmarks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 
hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/notebooks">Notebooks with examples </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/community">Community resources </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/custom_tools">Custom Tools and Prompts </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/troubleshooting">Troubleshoot </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Performance and scalability</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/performance">Overview </a><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Efficient training techniques</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_gpu_one">Methods and tools for efficient training on a single GPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_gpu_many">Multiple GPUs and parallelism </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_cpu">Efficient training on CPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_cpu_many">Distributed CPU training </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_tpu">Training on TPUs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_tpu_tf">Training on TPU with TensorFlow </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 
text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_special">Training on Specialized Hardware </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_hardware">Custom hardware for training </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/hpo_train">Hyperparameter Search using Trainer API </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Optimizing inference</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_cpu">Inference on CPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_gpu_one">Inference on one GPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_gpu_many">Inference on many GPUs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_special">Inference on Specialized Hardware </a> </div><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/big_models">Instantiating a big model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/debugging">Troubleshooting </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tf_xla">XLA Integration for TensorFlow Models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/perf_torch_compile">Optimize inference using `torch.compile()` </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Contribute</span> </span></span></div></div> <div class="flex flex-col"><a 
data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/contributing">How to contribute to transformers? </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/add_new_model">How to add a model to 🤗 Transformers? </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/add_tensorflow_model">How to convert a 🤗 Transformers model to TensorFlow? </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/add_new_pipeline">How to add a pipeline to 🤗 Transformers? </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/testing">Testing </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pr_checks">Checks on a Pull Request </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Conceptual guides</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/philosophy">Philosophy </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/glossary">Glossary </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/task_summary">What 🤗 Transformers can do </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tasks_explained">How 🤗 Transformers solve tasks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/model_summary">The Transformer model family </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tokenizer_summary">Summary of the tokenizers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" 
href="/docs/transformers/v4.34.0/en/attention">Attention mechanisms </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pad_truncation">Padding and truncation </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/bertology">BERTology </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/perplexity">Perplexity of fixed-length models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pipeline_webserver">Pipelines for webserver inference </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/model_memory_anatomy">Model training anatomy </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>API</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Main Classes</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/agent">Agents and Tools </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/model_doc/auto">Auto Classes </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/callback">Callbacks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/configuration">Configuration </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/data_collator">Data Collator </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" 
href="/docs/transformers/v4.34.0/en/main_classes/keras_callbacks">Keras callbacks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/logging">Logging </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/model">Models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/text_generation">Text Generation </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/onnx">ONNX </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/optimizer_schedules">Optimization </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/output">Model outputs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/pipelines">Pipelines </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/processors">Processors </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/quantization">Quantization </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/tokenizer">Tokenizer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/trainer">Trainer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/deepspeed">DeepSpeed Integration </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/feature_extractor">Feature Extractor </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/image_processor">Image Processor </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] 
font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Models</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Text models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Vision models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Reinforcement learning models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Time series models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Graph models</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Internal Helpers</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" 
href="/docs/transformers/v4.34.0/en/internal/modeling_utils">Custom Layers and Utilities </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/pipelines_utils">Utilities for pipelines </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/tokenization_utils">Utilities for Tokenizers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/trainer_utils">Utilities for Trainer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/generation_utils">Utilities for Generation </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/image_processing_utils">Utilities for Image Processors </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/audio_utils">Utilities for Audio processing </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/file_utils">General Utilities </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/time_series_utils">Utilities for Time Series </a> </div> </div></nav></div></div></div> <div class="z-1 min-w-0 flex-1"> <div class="px-6 pt-6 md:px-12 md:pt-16 md:pb-16"><div class="max-w-4xl mx-auto mb-10"><div class="relative overflow-hidden rounded-xl bg-gradient-to-br from-orange-300/10 py-5 px-4 ring-1 ring-orange-100/70 md:px-6 md:py-8"><img alt="Hugging Face's logo" class="absolute -right-6 -bottom-6 w-28 -rotate-45 md:hidden" src="/front/assets/huggingface_logo-noborder.svg"> <div class="mb-2 text-2xl font-bold dark:text-gray-200 md:mb-0">Join the Hugging Face community</div> <p class="mb-4 text-lg text-gray-400 dark:text-gray-300 md:mb-8">and get access to the augmented documentation experience </p> <div class="mb-8 hidden space-y-4 md:block xl:flex xl:space-y-0 xl:space-x-6"><div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-indigo-100 to-indigo-100/20 dark:to-indigo-100"><svg class="text-indigo-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 
12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Collaborate on models, datasets and Spaces </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-orange-100 to-orange-100/20 dark:to-orange-50"><svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" class="text-xl text-yellow-400" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M11 15H6l7-14v8h5l-7 14v-8z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Faster examples with accelerated inference </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-gray-500/10 to-gray-500/5"><svg class="text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M14.9804 3C14.9217 3.0002 14.8631 3.00555 14.8054 3.016C11.622 3.58252 8.76073 5.30669 6.77248 7.85653C4.78422 10.4064 3.80955 13.6016 4.03612 16.8271C4.26268 20.0525 5.67447 23.0801 7.99967 25.327C10.3249 27.5738 13.3991 28.8811 16.6304 28.997C16.7944 29.003 16.9584 28.997 17.1204 28.997C19.2193 28.9984 21.2877 28.4943 23.1507 27.5274C25.0137 26.5605 26.6164 25.1592 27.8234 23.442C27.9212 23.294 27.9783 23.1229 27.9889 22.9458C27.9995 22.7687 27.9633 22.592 27.884 22.4333C27.8046 22.2747 27.6848 22.1397 27.5367 22.0421C27.3887 21.9444 27.2175 21.8875 27.0404 21.877C25.0426 21.7017 23.112 21.0693 21.3976 20.0288C19.6832 18.9884 18.231 17.5676 17.1533 15.8764C16.0756 14.1852 15.4011 12.2688 15.1822 10.2754C14.9632 8.28193 15.2055 6.26484 15.8904 4.38C15.9486 4.22913 15.97 4.06652 15.9527 3.90572C15.9354 3.74492 15.8799 3.59059 15.7909 3.45557C15.7019 3.32055 15.5819 3.20877 15.4409 3.12952C15.2999 3.05028 15.142 3.00587 14.9804 3Z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Switch between documentation themes </div></div></div> <div class="flex items-center space-x-2.5"><a href="/join"><button class="rounded-lg bg-white bg-gradient-to-br from-gray-100/20 to-gray-200/60 py-1.5 px-5 font-semibold text-gray-700 shadow-sm ring-1 ring-gray-300/60 hover:to-gray-100/70 hover:ring-gray-300/30 active:shadow-inner">Sign Up</button></a> <p class="text-gray-500 dark:text-gray-300">to get started</p></div></div></div> <div class="prose-doc prose relative mx-auto max-w-4xl break-words"> <p></p> <h1 class="relative group"><a id="translation" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#translation"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" 
preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-qg4vk8">Translation</span></h1> <div class="flex space-x-1 absolute z-10 right-0 top-0"> <div class="relative colab-dropdown "><button class=" " type="button"><img alt="Open In Colab" class="!m-0" src="https://colab.research.google.com/assets/colab-badge.svg"></button> </div> <div class="relative colab-dropdown "><button class=" " type="button"><img alt="Open In Studio Lab" class="!m-0" src="https://studiolab.sagemaker.aws/studiolab.svg"></button> </div></div> <iframe class="w-full xl:w-4/6 h-80" src="https://www.youtube-nocookie.com/embed/1JvfrvZgi6c" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe> <p data-svelte-h="svelte-x8f5ei">Translation converts a sequence of text from one language to another. It is one of several tasks you can formulate as a sequence-to-sequence problem, a powerful framework for returning some output from an input, like translation or summarization. Translation systems are commonly used for translation between different language texts, but it can also be used for speech or some combination in between like text-to-speech or speech-to-text.</p> <p data-svelte-h="svelte-1aff4p7">This guide will show you how to:</p> <ol data-svelte-h="svelte-1yi27h4"><li>Finetune <a href="https://huggingface.co/t5-small" rel="nofollow">T5</a> on the English-French subset of the <a href="https://huggingface.co/datasets/opus_books" rel="nofollow">OPUS Books</a> dataset to translate English text to French.</li> <li>Use your finetuned model for inference.</li></ol> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400">The task illustrated in this tutorial is supported by the following model architectures: <p data-svelte-h="svelte-2s8ce4"><a href="../model_doc/bart">BART</a>, <a href="../model_doc/bigbird_pegasus">BigBird-Pegasus</a>, <a href="../model_doc/blenderbot">Blenderbot</a>, <a href="../model_doc/blenderbot-small">BlenderbotSmall</a>, <a href="../model_doc/encoder-decoder">Encoder decoder</a>, <a href="../model_doc/fsmt">FairSeq Machine-Translation</a>, <a href="../model_doc/gptsan-japanese">GPTSAN-japanese</a>, <a href="../model_doc/led">LED</a>, <a href="../model_doc/longt5">LongT5</a>, <a href="../model_doc/m2m_100">M2M100</a>, <a href="../model_doc/marian">Marian</a>, <a href="../model_doc/mbart">mBART</a>, <a href="../model_doc/mt5">MT5</a>, <a href="../model_doc/mvp">MVP</a>, <a href="../model_doc/nllb">NLLB</a>, <a href="../model_doc/nllb-moe">NLLB-MOE</a>, <a href="../model_doc/pegasus">Pegasus</a>, <a href="../model_doc/pegasus_x">PEGASUS-X</a>, <a href="../model_doc/plbart">PLBart</a>, <a href="../model_doc/prophetnet">ProphetNet</a>, <a 
href="../model_doc/switch_transformers">SwitchTransformers</a>, <a href="../model_doc/t5">T5</a>, <a href="../model_doc/umt5">UMT5</a>, <a href="../model_doc/xlm-prophetnet">XLM-ProphetNet</a></p></div> <p data-svelte-h="svelte-1c9nexd">Before you begin, make sure you have all the necessary libraries installed:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class="">pip install transformers datasets evaluate sacrebleu</pre></div> <p data-svelte-h="svelte-k76o1m">We encourage you to login to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to login:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> huggingface_hub <span class="hljs-keyword">import</span> notebook_login <span class="hljs-meta">&gt;&gt;&gt; </span>notebook_login()</pre></div> <h2 class="relative group"><a id="load-opus-books-dataset" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#load-opus-books-dataset"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" 
role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-9zawv2">Load OPUS Books dataset</span></h2> <p data-svelte-h="svelte-1r4htuf">Start by loading the English-French subset of the <a href="https://huggingface.co/datasets/opus_books" rel="nofollow">OPUS Books</a> dataset from the 🤗 Datasets library:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> datasets <span class="hljs-keyword">import</span> load_dataset <span class="hljs-meta">&gt;&gt;&gt; </span>books = load_dataset(<span class="hljs-string">"opus_books"</span>, <span class="hljs-string">"en-fr"</span>)</pre></div> <p data-svelte-h="svelte-gqiacy">Split the dataset into a train and test set with the <a href="https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.train_test_split" rel="nofollow">train_test_split</a> method:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal 
```py
>>> books = books["train"].train_test_split(test_size=0.2)
```

Then take a look at an example:

```py
>>> books["train"][0]
{'id': '90560',
 'translation': {'en': 'But this lofty plateau measured only a few fathoms, and soon we reentered Our Element.',
  'fr': 'Mais ce plateau élevé ne mesurait que quelques toises, et bientôt nous fûmes rentrés dans notre élément.'}}
```

`translation`: an English and French translation of the text.

## Preprocess
xl:w-4/6 h-80" src="https://www.youtube-nocookie.com/embed/XAR8jnZZuUs" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe> <p data-svelte-h="svelte-1m3bu0h">The next step is to load a T5 tokenizer to process the English-French language pairs:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer <span class="hljs-meta">&gt;&gt;&gt; </span>checkpoint = <span class="hljs-string">"t5-small"</span> <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(checkpoint)</pre></div> <p data-svelte-h="svelte-pduvot">The preprocessing function you want to create needs to:</p> <ol data-svelte-h="svelte-18wa00d"><li>Prefix the input with a prompt so T5 knows this is a translation task. 
1. Prefix the input with a prompt so T5 knows this is a translation task. Some models capable of multiple NLP tasks require prompting for specific tasks.
2. Tokenize the input (English) and target (French) separately because you can't tokenize French text with a tokenizer pretrained on an English vocabulary.
3. Truncate sequences to be no longer than the maximum length set by the `max_length` parameter.

```py
>>> source_lang = "en"
>>> target_lang = "fr"
>>> prefix = "translate English to French: "


>>> def preprocess_function(examples):
...     inputs = [prefix + example[source_lang] for example in examples["translation"]]
...     targets = [example[target_lang] for example in examples["translation"]]
...     model_inputs = tokenizer(inputs, text_target=targets, max_length=128, truncation=True)
...     return model_inputs
```
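As a quick sanity check (this snippet is illustrative and not part of the original guide), you can call the function on a small slice of the raw dataset and confirm that it returns the tokenized English inputs together with the `labels` built from the French targets:

```py
>>> # Illustrative check: the function should return tokenized inputs plus labels.
>>> sample = preprocess_function(books["train"][:2])
>>> list(sample.keys())
['input_ids', 'attention_mask', 'labels']
```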
To apply the preprocessing function over the entire dataset, use the 🤗 Datasets [map](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.map) method. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once:

```py
>>> tokenized_books = books.map(preprocess_function, batched=True)
```

Now create a batch of examples using [DataCollatorForSeq2Seq](/docs/transformers/v4.34.0/en/main_classes/data_collator#transformers.DataCollatorForSeq2Seq). It's more efficient to *dynamically pad* the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length.
**Pytorch**

```py
>>> from transformers import DataCollatorForSeq2Seq

>>> data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint)
```
**TensorFlow**

```py
>>> from transformers import DataCollatorForSeq2Seq

>>> data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint, return_tensors="tf")
```
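To see the dynamic padding in action, you can collate a couple of tokenized examples yourself. This is a minimal sketch (not part of the original guide) that assumes the `tokenized_books` and `data_collator` objects created above; the exact shapes depend on your data:

```py
>>> # Keep only the model inputs, then collate two examples into one batch.
>>> features = [
...     {k: tokenized_books["train"][i][k] for k in ("input_ids", "attention_mask", "labels")}
...     for i in range(2)
... ]
>>> batch = data_collator(features)
>>> # Everything is padded only to the longest sequence in *this* batch,
>>> # and padded label positions are set to -100 so they are ignored by the loss.
>>> batch["input_ids"].shape, batch["labels"].shape
```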
## Evaluate

Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load the [SacreBLEU](https://huggingface.co/spaces/evaluate-metric/sacrebleu) metric (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric):

```py
>>> import evaluate

>>> metric = evaluate.load("sacrebleu")
```

Then create a function that passes your predictions and labels to [compute](https://huggingface.co/docs/evaluate/v0.4.0/en/package_reference/main_classes#evaluate.EvaluationModule.compute) to calculate the SacreBLEU score:
```py
>>> import numpy as np


>>> def postprocess_text(preds, labels):
...     preds = [pred.strip() for pred in preds]
...     labels = [[label.strip()] for label in labels]
...     return preds, labels


>>> def compute_metrics(eval_preds):
...     preds, labels = eval_preds
...     if isinstance(preds, tuple):
...         preds = preds[0]
...     decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)

...     labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
...     decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)

...     decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels)

...     result = metric.compute(predictions=decoded_preds, references=decoded_labels)
...     result = {"bleu": result["score"]}

...     prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds]
...     result["gen_len"] = np.mean(prediction_lens)
...     result = {k: round(v, 4) for k, v in result.items()}
...     return result
```

Your `compute_metrics` function is ready to go now, and you'll return to it when you set up your training.
class="framework-content"><div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-1ufp0ay">If you aren’t familiar with finetuning a model with the <a href="/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer">Trainer</a>, take a look at the basic tutorial <a href="../training#train-with-pytorch-trainer">here</a>!</p></div> <p data-svelte-h="svelte-1h2b6hn">You’re ready to start training your model now! Load T5 with <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoModelForSeq2SeqLM">AutoModelForSeq2SeqLM</a>:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments, Seq2SeqTrainer <span class="hljs-meta">&gt;&gt;&gt; </span>model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)</pre></div> <p data-svelte-h="svelte-l42k0i">At this point, only three steps remain:</p> <ol data-svelte-h="svelte-aresqc"><li>Define your training hyperparameters in <a href="/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Seq2SeqTrainingArguments">Seq2SeqTrainingArguments</a>. The only required parameter is <code>output_dir</code> which specifies where to save your model. You’ll push this model to the Hub by setting <code>push_to_hub=True</code> (you need to be signed in to Hugging Face to upload your model). 
At the end of each epoch, the <a href="/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer">Trainer</a> will evaluate the SacreBLEU metric and save the training checkpoint.</li> <li>Pass the training arguments to <a href="/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Seq2SeqTrainer">Seq2SeqTrainer</a> along with the model, dataset, tokenizer, data collator, and <code>compute_metrics</code> function.</li> <li>Call <a href="/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.train">train()</a> to finetune your model.</li></ol> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>training_args = Seq2SeqTrainingArguments( <span class="hljs-meta">... </span> output_dir=<span class="hljs-string">"my_awesome_opus_books_model"</span>, <span class="hljs-meta">... </span> evaluation_strategy=<span class="hljs-string">"epoch"</span>, <span class="hljs-meta">... </span> learning_rate=<span class="hljs-number">2e-5</span>, <span class="hljs-meta">... </span> per_device_train_batch_size=<span class="hljs-number">16</span>, <span class="hljs-meta">... </span> per_device_eval_batch_size=<span class="hljs-number">16</span>, <span class="hljs-meta">... </span> weight_decay=<span class="hljs-number">0.01</span>, <span class="hljs-meta">... </span> save_total_limit=<span class="hljs-number">3</span>, <span class="hljs-meta">... </span> num_train_epochs=<span class="hljs-number">2</span>, <span class="hljs-meta">... </span> predict_with_generate=<span class="hljs-literal">True</span>, <span class="hljs-meta">... </span> fp16=<span class="hljs-literal">True</span>, <span class="hljs-meta">... </span> push_to_hub=<span class="hljs-literal">True</span>, <span class="hljs-meta">... </span>) <span class="hljs-meta">&gt;&gt;&gt; </span>trainer = Seq2SeqTrainer( <span class="hljs-meta">... </span> model=model, <span class="hljs-meta">... </span> args=training_args, <span class="hljs-meta">... </span> train_dataset=tokenized_books[<span class="hljs-string">"train"</span>], <span class="hljs-meta">... </span> eval_dataset=tokenized_books[<span class="hljs-string">"test"</span>], <span class="hljs-meta">... </span> tokenizer=tokenizer, <span class="hljs-meta">... </span> data_collator=data_collator, <span class="hljs-meta">... </span> compute_metrics=compute_metrics, <span class="hljs-meta">... 
</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>trainer.train()</pre></div> <p data-svelte-h="svelte-cv8z08">Once training is completed, share your model to the Hub with the <a href="/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.push_to_hub">push_to_hub()</a> method so everyone can use your model:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>trainer.push_to_hub()</pre></div></div></div> <div class="border border-gray-200 rounded-xl px-4 relative"><div class="flex h-[22px] mt-[-12.5px] justify-between leading-none"><div class="flex px-1 items-center space-x-1 bg-white dark:bg-gray-950"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="0.94em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 274"><path d="M145.726 42.065v42.07l72.861 42.07v-42.07l-72.86-42.07zM0 84.135v42.07l36.43 21.03V105.17L0 84.135zm109.291 21.035l-36.43 21.034v126.2l36.43 21.035v-84.135l36.435 21.035v-42.07l-36.435-21.034V105.17z" fill="#E55B2D"></path><path d="M145.726 42.065L36.43 105.17v42.065l72.861-42.065v42.065l36.435-21.03v-84.14zM255.022 63.1l-36.435 21.035v42.07l36.435-21.035V63.1zm-72.865 84.135l-36.43 21.035v42.07l36.43-21.036v-42.07zm-36.43 63.104l-36.436-21.035v84.135l36.435-21.035V210.34z" fill="#ED8E24"></path><path d="M145.726 0L0 84.135l36.43 21.035l109.296-63.105l72.861 42.07L255.022 63.1L145.726 0zm0 126.204l-36.435 21.03l36.435 21.036l36.43-21.035l-36.43-21.03z" fill="#F8BF3C"></path></svg> <span>TensorFlow</span></div> <div class="cursor-pointer flex items-center justify-center space-x-1 text-sm px-2 bg-white dark:bg-gray-950 hover:underline leading-none"><svg class="" width="0.9em" height="0.9em" viewBox="0 0 10 9" fill="currentColor" xmlns="http://www.w3.org/2000/svg"><path d="M1.39125 1.9725L0.0883333 0.669997L0.677917 0.0804138L8.9275 8.33041L8.33792 8.91958L6.95875 7.54041C6.22592 8.00523 5.37572 8.25138 4.50792 8.25C2.26125 8.25 0.392083 6.63333 0 4.5C0.179179 3.52946 0.667345 2.64287 1.39167 1.9725H1.39125ZM5.65667 6.23833L5.04667 5.62833C4.81335 5.73996 4.55116 5.77647 4.29622 5.73282C4.04129 5.68918 3.80617 5.56752 3.62328 5.38463C3.44039 5.20175 3.31874 4.96663 3.27509 4.71169C3.23144 4.45676 3.26795 4.19456 3.37958 3.96125L2.76958 3.35125C2.50447 3.75187 2.38595 4.2318 2.4341 
**TensorFlow**

> If you aren't familiar with finetuning a model with Keras, take a look at the basic tutorial [here](../training#train-a-tensorflow-model-with-keras)!

To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:

```py
>>> from transformers import AdamWeightDecay

>>> optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01)
```

Then you can load T5 with [TFAutoModelForSeq2SeqLM](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.TFAutoModelForSeq2SeqLM):

```py
>>> from transformers import TFAutoModelForSeq2SeqLM

>>> model = TFAutoModelForSeq2SeqLM.from_pretrained(checkpoint)
```

Convert your datasets to the `tf.data.Dataset` format with [prepare_tf_dataset()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel.prepare_tf_dataset):

```py
>>> tf_train_set = model.prepare_tf_dataset(
...     tokenized_books["train"],
...     shuffle=True,
...     batch_size=16,
...     collate_fn=data_collator,
... )

>>> tf_test_set = model.prepare_tf_dataset(
...     tokenized_books["test"],
...     shuffle=False,
...     batch_size=16,
...     collate_fn=data_collator,
... )
```
Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method). Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to:

```py
>>> import tensorflow as tf

>>> model.compile(optimizer=optimizer)  # No loss argument!
```

The last two things to set up before you start training are to compute the SacreBLEU metric from the predictions, and to provide a way to push your model to the Hub. Both are done by using [Keras callbacks](../main_classes/keras_callbacks).

Pass your `compute_metrics` function to [KerasMetricCallback](/docs/transformers/v4.34.0/en/main_classes/keras_callbacks#transformers.KerasMetricCallback):

```py
>>> from transformers.keras_callbacks import KerasMetricCallback

>>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_test_set)
```

Specify where to push your model and tokenizer in the [PushToHubCallback](/docs/transformers/v4.34.0/en/main_classes/keras_callbacks#transformers.PushToHubCallback):
```py
>>> from transformers.keras_callbacks import PushToHubCallback

>>> push_to_hub_callback = PushToHubCallback(
...     output_dir="my_awesome_opus_books_model",
...     tokenizer=tokenizer,
... )
```

Then bundle your callbacks together:

```py
>>> callbacks = [metric_callback, push_to_hub_callback]
```

Finally, you're ready to start training your model! Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training and validation datasets, the number of epochs, and your callbacks to finetune the model:

```py
>>> model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=callbacks)
```

Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!
> For a more in-depth example of how to finetune a model for translation, take a look at the corresponding [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation.ipynb) or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation-tf.ipynb).

## Inference

Great, now that you've finetuned a model, you can use it for inference!

Come up with some text you'd like to translate to another language. For T5, you need to prefix your input depending on the task you're working on. For translation from English to French, you should prefix your input as shown below:

```py
>>> text = "translate English to French: Legumes share resources with nitrogen-fixing bacteria."
```

The simplest way to try out your finetuned model for inference is to use it in a [pipeline()](/docs/transformers/v4.34.0/en/main_classes/pipelines#transformers.pipeline). Instantiate a `pipeline` for translation with your model, and pass your text to it:

```py
>>> from transformers import pipeline

>>> translator = pipeline("translation", model="my_awesome_opus_books_model")
>>> translator(text)
[{'translation_text': 'Legumes partagent des ressources avec des bactéries azotantes.'}]
```
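If you have a GPU available, you can run the same pipeline on it by passing the standard `device` argument of `pipeline()`; this snippet is an addition, not part of the original guide:

```py
>>> # Hypothetical variant: place the pipeline on GPU 0 instead of the CPU.
>>> translator = pipeline("translation", model="my_awesome_opus_books_model", device=0)
>>> translator(text)
```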
You can also manually replicate the results of the `pipeline` if you'd like:

**Pytorch**

Tokenize the text and return the `input_ids` as PyTorch tensors:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_opus_books_model")
>>> inputs = tokenizer(text, return_tensors="pt").input_ids
```

Use the [generate()](/docs/transformers/v4.34.0/en/main_classes/text_generation#transformers.GenerationMixin.generate) method to create the translation. For more details about the different text generation strategies and parameters for controlling generation, check out the [Text Generation](../main_classes/text_generation) API.

```py
>>> from transformers import AutoModelForSeq2SeqLM

>>> model = AutoModelForSeq2SeqLM.from_pretrained("my_awesome_opus_books_model")
>>> outputs = model.generate(inputs, max_new_tokens=40, do_sample=True, top_k=30, top_p=0.95)
```

Decode the generated token ids back into text:

```py
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
'Les lignées partagent des ressources avec des bactéries enfixant l'azote.'
```
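Because `do_sample=True` makes generation stochastic, you may see a slightly different translation on each run. If you prefer reproducible output, one common alternative (an addition to the guide, not the original example) is to disable sampling and use beam search instead:

```py
>>> # Hypothetical deterministic variant: beam search instead of sampling.
>>> outputs = model.generate(inputs, max_new_tokens=40, num_beams=4)
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
```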
class="hljs-string">'</span></pre></div></div></div> <div class="border border-gray-200 rounded-xl px-4 relative"><div class="flex h-[22px] mt-[-12.5px] justify-between leading-none"><div class="flex px-1 items-center space-x-1 bg-white dark:bg-gray-950"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="0.94em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 274"><path d="M145.726 42.065v42.07l72.861 42.07v-42.07l-72.86-42.07zM0 84.135v42.07l36.43 21.03V105.17L0 84.135zm109.291 21.035l-36.43 21.034v126.2l36.43 21.035v-84.135l36.435 21.035v-42.07l-36.435-21.034V105.17z" fill="#E55B2D"></path><path d="M145.726 42.065L36.43 105.17v42.065l72.861-42.065v42.065l36.435-21.03v-84.14zM255.022 63.1l-36.435 21.035v42.07l36.435-21.035V63.1zm-72.865 84.135l-36.43 21.035v42.07l36.43-21.036v-42.07zm-36.43 63.104l-36.436-21.035v84.135l36.435-21.035V210.34z" fill="#ED8E24"></path><path d="M145.726 0L0 84.135l36.43 21.035l109.296-63.105l72.861 42.07L255.022 63.1L145.726 0zm0 126.204l-36.435 21.03l36.435 21.036l36.43-21.035l-36.43-21.03z" fill="#F8BF3C"></path></svg> <span>TensorFlow</span></div> <div class="cursor-pointer flex items-center justify-center space-x-1 text-sm px-2 bg-white dark:bg-gray-950 hover:underline leading-none"><svg class="" width="0.9em" height="0.9em" viewBox="0 0 10 9" fill="currentColor" xmlns="http://www.w3.org/2000/svg"><path d="M1.39125 1.9725L0.0883333 0.669997L0.677917 0.0804138L8.9275 8.33041L8.33792 8.91958L6.95875 7.54041C6.22592 8.00523 5.37572 8.25138 4.50792 8.25C2.26125 8.25 0.392083 6.63333 0 4.5C0.179179 3.52946 0.667345 2.64287 1.39167 1.9725H1.39125ZM5.65667 6.23833L5.04667 5.62833C4.81335 5.73996 4.55116 5.77647 4.29622 5.73282C4.04129 5.68918 3.80617 5.56752 3.62328 5.38463C3.44039 5.20175 3.31874 4.96663 3.27509 4.71169C3.23144 4.45676 3.26795 4.19456 3.37958 3.96125L2.76958 3.35125C2.50447 3.75187 2.38595 4.2318 2.4341 4.70978C2.48225 5.18777 2.6941 5.63442 3.0338 5.97411C3.37349 6.31381 3.82015 6.52567 4.29813 6.57382C4.77611 6.62197 5.25605 6.50345 5.65667 6.23833ZM2.83042 1.06666C3.35 0.862497 3.91625 0.749997 4.50792 0.749997C6.75458 0.749997 8.62375 2.36666 9.01583 4.5C8.88816 5.19404 8.60119 5.84899 8.1775 6.41333L6.56917 4.805C6.61694 4.48317 6.58868 4.15463 6.48664 3.84569C6.3846 3.53675 6.21162 3.256 5.98156 3.02594C5.7515 2.79588 5.47075 2.6229 5.16181 2.52086C4.85287 2.41882 4.52433 2.39056 4.2025 2.43833L2.83042 1.06708V1.06666Z" fill="currentColor"></path></svg> <span>Hide TensorFlow content</span></div></div> <div class="framework-content"><p data-svelte-h="svelte-hw2mu6">Tokenize the text and return the <code>input_ids</code> as TensorFlow tensors:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none 
transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"my_awesome_opus_books_model"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(text, return_tensors=<span class="hljs-string">"tf"</span>).input_ids</pre></div> <p data-svelte-h="svelte-d60kj6">Use the <a href="/docs/transformers/v4.34.0/en/main_classes/text_generation#transformers.TFGenerationMixin.generate">generate()</a> method to create the translation. For more details about the different text generation strategies and parameters for controlling generation, check out the <a href="../main_classes/text_generation">Text Generation</a> API.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> TFAutoModelForSeq2SeqLM <span class="hljs-meta">&gt;&gt;&gt; </span>model = TFAutoModelForSeq2SeqLM.from_pretrained(<span class="hljs-string">"my_awesome_opus_books_model"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>outputs = model.generate(inputs, max_new_tokens=<span class="hljs-number">40</span>, do_sample=<span class="hljs-literal">True</span>, top_k=<span class="hljs-number">30</span>, top_p=<span class="hljs-number">0.95</span>)</pre></div> <p data-svelte-h="svelte-1918fu9">Decode the generated token ids back into text:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid 
meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer.decode(outputs[<span class="hljs-number">0</span>], skip_special_tokens=<span class="hljs-literal">True</span>) <span class="hljs-string">'Les lugumes partagent les ressources avec des bactéries fixatrices d'</span>azote.<span class="hljs-string">'</span></pre></div></div></div> </div> <p></p> <div id="svelte-announcer" aria-live="assertive" aria-atomic="true" style="position: absolute; left: 0px; top: 0px; clip: rect(0px, 0px, 0px, 0px); clip-path: inset(50%); overflow: hidden; white-space: nowrap; width: 1px; height: 1px;"></div></div> <div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/tasks/masked_language_modeling" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>Masked language modeling</a> <a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/tasks/summarization" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">Summarization<span class="ml-2 translate-y-px">→</span></a></div></div></div> <div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" data-props="{&quot;chapter&quot;:{&quot;title&quot;:&quot;Translation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;translation&quot;,&quot;url&quot;:&quot;#translation&quot;,&quot;sections&quot;:[{&quot;title&quot;:&quot;Load OPUS Books dataset&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;load-opus-books-dataset&quot;,&quot;url&quot;:&quot;#load-opus-books-dataset&quot;},{&quot;title&quot;:&quot;Preprocess&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;preprocess&quot;,&quot;url&quot;:&quot;#preprocess&quot;},{&quot;title&quot;:&quot;Evaluate&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;evaluate&quot;,&quot;url&quot;:&quot;#evaluate&quot;},{&quot;title&quot;:&quot;Train&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;train&quot;,&quot;url&quot;:&quot;#train&quot;},{&quot;title&quot;:&quot;Inference&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;inference&quot;,&quot;url&quot;:&quot;#inference&quot;}]}}" data-target="SubSideMenu"><nav class="hidden h-screen w-[270px] flex-none flex-col space-y-3 overflow-y-auto break-words border-l pt-24 pl-6 pr-10 pb-16 text-sm lg:flex 2xl:w-[305px]"><a href="#translation" class=" text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-translation"><wbr>Translation</a> <a href="#load-opus-books-dataset" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 
dark:hover:text-gray-300" id="nav-load-opus-books-dataset"><wbr>Load OPU<wbr>S <wbr>Books dataset</a> <a href="#preprocess" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-preprocess"><wbr>Preprocess</a> <a href="#evaluate" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-evaluate"><wbr>Evaluate</a> <a href="#train" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-train"><wbr>Train</a> <a href="#inference" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-inference"><wbr>Inference</a> </nav></div></div></div> <div id="doc-footer"></div></main> </div> <script> import("/front/build/kube-5e23f38/index.js"); window.moonSha = "kube-5e23f38/"; window.hubConfig = JSON.parse(`{"features":{"signupDisabled":false},"sshGitUrl":"git@hf.co","moonHttpUrl":"https://huggingface.co","captchaApiKey":"bd5f2066-93dc-4bdd-a64b-a24646ca3859","stripePublicKey":"pk_live_x2tdjFXBCvXo2FFmMybezpeM00J6gPCAAc","environment":"production","userAgent":"HuggingFace (production)"}`); </script> <!-- Stripe --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://js.stripe.com/v3/"; script.async = true; document.head.appendChild(script); } </script> <!-- Google analytics v4 --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL"; script.async = true; document.head.appendChild(script); window.dataLayer = window.dataLayer || []; function gtag() { if (window.dataLayer !== undefined) { window.dataLayer.push(arguments); } } gtag("js", new Date()); gtag("config", "G-8Q63TH4CSL", { page_path: "/docs/transformers/v4.34.0/en/tasks/translation" }); /// ^ See https://developers.google.com/analytics/devguides/collection/gtagjs/pages gtag("consent", "default", { ad_storage: "denied", analytics_storage: "denied" }); /// ^ See https://developers.google.com/tag-platform/gtagjs/reference#consent /// TODO: ask the user for their consent and update this with gtag('consent', 'update') } </script> <!-- Google Analytics v3 (deprecated) --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { (function (i, s, o, g, r, a, m) { i["GoogleAnalyticsObject"] = r; (i[r] = i[r] || function () { (i[r].q = i[r].q || []).push(arguments); }), (i[r].l = 1 * new Date()); (a = s.createElement(o)), (m = s.getElementsByTagName(o)[0]); a.async = 1; a.src = g; m.parentNode.insertBefore(a, m); })(window, document, "script", "https://www.google-analytics.com/analytics.js", "ganalytics"); ganalytics("create", "UA-83738774-2", "auto"); ganalytics("send", "pageview", "/docs/transformers/v4.34.0/en/tasks/translation"); } </script> <iframe name="__privateStripeMetricsController5020" frameborder="0" allowtransparency="true" scrolling="no" role="presentation" allow="payment *" src="https://js.stripe.com/v3/m-outer-27c67c0d52761104439bb051c7856ab1.html#url=https%3A%2F%2Fhuggingface.co%2Fdocs%2Ftransformers%2Fv4.34.0%2Fen%2Ftasks%2Ftranslation&amp;title=Translation&amp;referrer=&amp;muid=38397bf3-d1df-433f-a1ab-3a999964eeba83e258&amp;sid=7a2cecc6-6b9a-4e4a-88b4-4bd8a189a43fe6315f&amp;version=6&amp;preview=false" aria-hidden="true" tabindex="-1" style="border: none 
!important; margin: 0px !important; padding: 0px !important; width: 1px !important; min-width: 100% !important; overflow: hidden !important; display: block !important; visibility: hidden !important; position: fixed !important; height: 1px !important; pointer-events: none !important; user-select: none !important;"></iframe></body></html>
2023-10-05T13:33:53.214Z
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/INSERT%20LINK%20TO%20GITHUB%20REPO%20HERE
The documentation page MODEL\_DOC/INSERT LINK TO GITHUB REPO HERE doesn’t exist in v4.34.0, but exists on the main version. Click [here](/docs/transformers/main/en/model_doc/INSERT%20LINK%20TO%20GITHUB%20REPO%20HERE) to redirect to the main version of the documentation.
2023-10-05T13:33:53.270Z
Masked language modeling
https://huggingface.co/docs/transformers/v4.34.0/en/tasks/masked_language_modeling
# Masked language modeling

Masked language modeling predicts a masked token in a sequence, and the model can attend to tokens bidirectionally. This means the model has full access to the tokens on the left and right. Masked language modeling is great for tasks that require a good contextual understanding of an entire sequence. BERT is an example of a masked language model.

This guide will show you how to:

1. Finetune [DistilRoBERTa](https://huggingface.co/distilroberta-base) on the [r/askscience](https://www.reddit.com/r/askscience/) subset of the [ELI5](https://huggingface.co/datasets/eli5) dataset.
2. Use your finetuned model for inference.

You can finetune other architectures for masked language modeling following the same steps in this guide. Choose one of the following architectures:

[ALBERT](../model_doc/albert), [BART](../model_doc/bart), [BERT](../model_doc/bert), [BigBird](../model_doc/big_bird), [CamemBERT](../model_doc/camembert), [ConvBERT](../model_doc/convbert), [Data2VecText](../model_doc/data2vec-text), [DeBERTa](../model_doc/deberta), [DeBERTa-v2](../model_doc/deberta-v2), [DistilBERT](../model_doc/distilbert), [ELECTRA](../model_doc/electra), [ERNIE](../model_doc/ernie), [ESM](../model_doc/esm), [FlauBERT](../model_doc/flaubert), [FNet](../model_doc/fnet), [Funnel Transformer](../model_doc/funnel), [I-BERT](../model_doc/ibert), [LayoutLM](../model_doc/layoutlm), [Longformer](../model_doc/longformer), [LUKE](../model_doc/luke), [mBART](../model_doc/mbart), [MEGA](../model_doc/mega), [Megatron-BERT](../model_doc/megatron-bert), [MobileBERT](../model_doc/mobilebert), [MPNet](../model_doc/mpnet), [MRA](../model_doc/mra), [MVP](../model_doc/mvp), [Nezha](../model_doc/nezha), [Nyströmformer](../model_doc/nystromformer), [Perceiver](../model_doc/perceiver), [QDQBert](../model_doc/qdqbert), [Reformer](../model_doc/reformer), [RemBERT](../model_doc/rembert), [RoBERTa](../model_doc/roberta), [RoBERTa-PreLayerNorm](../model_doc/roberta-prelayernorm), [RoCBert](../model_doc/roc_bert), [RoFormer](../model_doc/roformer), [SqueezeBERT](../model_doc/squeezebert), [TAPAS](../model_doc/tapas), [Wav2Vec2](../model_doc/wav2vec2), [XLM](../model_doc/xlm), [XLM-RoBERTa](../model_doc/xlm-roberta), [XLM-RoBERTa-XL](../model_doc/xlm-roberta-xl), [X-MOD](../model_doc/xmod), [YOSO](../model_doc/yoso)

Before you begin, make sure you have all the necessary libraries installed:

```
pip install transformers datasets evaluate
```

We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:

```
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```
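If you have never worked with a masked language model before, you can get a feel for the task by querying the pretrained checkpoint you are about to finetune. This is only an illustrative sketch using the fill-mask pipeline with the `distilroberta-base` checkpoint that the rest of this guide builds on:

```
>>> from transformers import pipeline

>>> # Ask the pretrained (not yet finetuned) model to fill in the blank
>>> mask_filler = pipeline("fill-mask", model="distilroberta-base")
>>> mask_filler("The capital of France is <mask>.", top_k=2)
```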
## Load ELI5 dataset

Start by loading a smaller subset of the r/askscience subset of the ELI5 dataset from the 🤗 Datasets library. This’ll give you a chance to experiment and make sure everything works before spending more time training on the full dataset.

```
>>> from datasets import load_dataset

>>> eli5 = load_dataset("eli5", split="train_asks[:5000]")
```

Split the dataset’s `train_asks` split into a train and test set with the [train\_test\_split](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.train_test_split) method:

```
>>> eli5 = eli5.train_test_split(test_size=0.2)
```

Then take a look at an example:

```
>>> eli5["train"][0]
{'answers': {'a_id': ['c3d1aib', 'c3d4lya'],
  'score': [6, 3],
  'text': ["The velocity needed to remain in orbit is equal to the square root of Newton's constant times the mass of earth divided by the distance from the center of the earth. I don't know the altitude of that specific mission, but they're usually around 300 km. That means he's going 7-8 km/s.\n\nIn space there are no other forces acting on either the shuttle or the guy, so they stay in the same position relative to each other. If he were to become unable to return to the ship, he would presumably run out of oxygen, or slowly fall into the atmosphere and burn up.",
   "Hope you don't mind me asking another question, but why aren't there any stars visible in this photo?"]},
 'answers_urls': {'url': []},
 'document': '',
 'q_id': 'nyxfp',
 'selftext': '_URL_0_\n\nThis was on the front page earlier and I have a few questions about it. Is it possible to calculate how fast the astronaut would be orbiting the earth? Also how does he stay close to the shuttle so that he can return safely, i.e is he orbiting at the same speed and can therefore stay next to it? And finally if his propulsion system failed, would he eventually re-enter the atmosphere and presumably die?',
 'selftext_urls': {'url': ['http://apod.nasa.gov/apod/image/1201/freeflyer_nasa_3000.jpg']},
 'subreddit': 'askscience',
 'title': 'Few questions about this space walk photograph.',
 'title_urls': {'url': []}}
```

While this may look like a lot, you’re only really interested in the `text` field. What’s cool about language modeling tasks is you don’t need labels (also known as an unsupervised task) because the next word _is_ the label.

## Preprocess

For masked language modeling, the next step is to load a DistilRoBERTa tokenizer to process the `text` subfield:

```
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("distilroberta-base")
```

You’ll notice from the example above, the `text` field is actually nested inside `answers`. This means you’ll need to extract the `text` subfield from its nested structure with the [`flatten`](https://huggingface.co/docs/datasets/process.html#flatten) method:

```
>>> eli5 = eli5.flatten()
>>> eli5["train"][0]
{'answers.a_id': ['c3d1aib', 'c3d4lya'],
 'answers.score': [6, 3],
 'answers.text': ["The velocity needed to remain in orbit is equal to the square root of Newton's constant times the mass of earth divided by the distance from the center of the earth. I don't know the altitude of that specific mission, but they're usually around 300 km. That means he's going 7-8 km/s.\n\nIn space there are no other forces acting on either the shuttle or the guy, so they stay in the same position relative to each other. If he were to become unable to return to the ship, he would presumably run out of oxygen, or slowly fall into the atmosphere and burn up.",
  "Hope you don't mind me asking another question, but why aren't there any stars visible in this photo?"],
 'answers_urls.url': [],
 'document': '',
 'q_id': 'nyxfp',
 'selftext': '_URL_0_\n\nThis was on the front page earlier and I have a few questions about it. Is it possible to calculate how fast the astronaut would be orbiting the earth? Also how does he stay close to the shuttle so that he can return safely, i.e is he orbiting at the same speed and can therefore stay next to it? And finally if his propulsion system failed, would he eventually re-enter the atmosphere and presumably die?',
 'selftext_urls.url': ['http://apod.nasa.gov/apod/image/1201/freeflyer_nasa_3000.jpg'],
 'subreddit': 'askscience',
 'title': 'Few questions about this space walk photograph.',
 'title_urls.url': []}
```

Each subfield is now a separate column as indicated by the `answers` prefix, and the `text` field is a list now. Instead of tokenizing each sentence separately, convert the list to a string so you can jointly tokenize them.

Here is a first preprocessing function to join the list of strings for each example and tokenize the result:

```
>>> def preprocess_function(examples):
...     return tokenizer([" ".join(x) for x in examples["answers.text"]])
```

To apply this preprocessing function over the entire dataset, use the 🤗 Datasets [map](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.map) method. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once, and increasing the number of processes with `num_proc`. Remove any columns you don’t need:

```
>>> tokenized_eli5 = eli5.map(
...     preprocess_function,
...     batched=True,
...     num_proc=4,
...     remove_columns=eli5["train"].column_names,
... )
```

This dataset contains the token sequences, but some of these are longer than the maximum input length for the model. You can now use a second preprocessing function to

- concatenate all the sequences
- split the concatenated sequences into shorter chunks defined by `block_size`, which should be both shorter than the maximum input length and short enough for your GPU RAM.

```
>>> block_size = 128


>>> def group_texts(examples):
...     # Concatenate all texts.
...     concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
...     total_length = len(concatenated_examples[list(examples.keys())[0]])
...     # Drop the small remainder so every chunk has exactly block_size tokens; you could
...     # pad the last chunk instead if you would rather not lose data.
...     if total_length >= block_size:
...         total_length = (total_length // block_size) * block_size
...     # Split into chunks of block_size.
...     result = {
...         k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
...         for k, t in concatenated_examples.items()
...     }
...     return result
```

Apply the `group_texts` function over the entire dataset:

```
>>> lm_dataset = tokenized_eli5.map(group_texts, batched=True, num_proc=4)
```
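If you want to sanity-check the chunking, a quick optional look at one element of the `lm_dataset` you just created should show that each example now holds exactly `block_size` token ids:

```
>>> sample = lm_dataset["train"][0]
>>> len(sample["input_ids"])  # group_texts drops the remainder, so each chunk is exactly block_size tokens
128
```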
Now create a batch of examples using [DataCollatorForLanguageModeling](/docs/transformers/v4.34.0/en/main_classes/data_collator#transformers.DataCollatorForLanguageModeling). It’s more efficient to _dynamically pad_ the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length.

In PyTorch, use the end-of-sequence token as the padding token and specify `mlm_probability` to randomly mask tokens each time you iterate over the data:

```
>>> from transformers import DataCollatorForLanguageModeling

>>> tokenizer.pad_token = tokenizer.eos_token
>>> data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
```

In TensorFlow, use the end-of-sequence token as the padding token and specify `mlm_probability` to randomly mask tokens each time you iterate over the data:

```
>>> from transformers import DataCollatorForLanguageModeling

>>> data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15, return_tensors="tf")
```

## Train

If you aren’t familiar with finetuning a model with the [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer), take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!

You’re ready to start training your model now! Load DistilRoBERTa with [AutoModelForMaskedLM](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoModelForMaskedLM):

```
>>> from transformers import AutoModelForMaskedLM

>>> model = AutoModelForMaskedLM.from_pretrained("distilroberta-base")
```

At this point, only three steps remain:

1. Define your training hyperparameters in [TrainingArguments](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.TrainingArguments). The only required parameter is `output_dir` which specifies where to save your model. You’ll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model).
2. Pass the training arguments to [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) along with the model, datasets, and data collator.
3. Call [train()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.train) to finetune your model.

```
>>> from transformers import TrainingArguments, Trainer

>>> training_args = TrainingArguments(
...     output_dir="my_awesome_eli5_mlm_model",
...     evaluation_strategy="epoch",
...     learning_rate=2e-5,
...     num_train_epochs=3,
...     weight_decay=0.01,
...     push_to_hub=True,
... )

>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=lm_dataset["train"],
...     eval_dataset=lm_dataset["test"],
...     data_collator=data_collator,
... )

>>> trainer.train()
```

Once training is completed, use the [evaluate()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.evaluate) method to evaluate your model and get its perplexity:

```
>>> import math

>>> eval_results = trainer.evaluate()
>>> print(f"Perplexity: {math.exp(eval_results['eval_loss']):.2f}")
Perplexity: 8.76
```

Then share your model to the Hub with the [push\_to\_hub()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.push_to_hub) method so everyone can use your model:

```
>>> trainer.push_to_hub()
```

If you aren’t familiar with finetuning a model with Keras, take a look at the basic tutorial [here](../training#train-a-tensorflow-model-with-keras)!
To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:

```
>>> from transformers import create_optimizer, AdamWeightDecay

>>> optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01)
```

Then you can load DistilRoBERTa with [TFAutoModelForMaskedLM](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.TFAutoModelForMaskedLM):

```
>>> from transformers import TFAutoModelForMaskedLM

>>> model = TFAutoModelForMaskedLM.from_pretrained("distilroberta-base")
```

Convert your datasets to the `tf.data.Dataset` format with [prepare\_tf\_dataset()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel.prepare_tf_dataset):

```
>>> tf_train_set = model.prepare_tf_dataset(
...     lm_dataset["train"],
...     shuffle=True,
...     batch_size=16,
...     collate_fn=data_collator,
... )

>>> tf_test_set = model.prepare_tf_dataset(
...     lm_dataset["test"],
...     shuffle=False,
...     batch_size=16,
...     collate_fn=data_collator,
... )
```

Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method). Note that Transformers models all have a default task-relevant loss function, so you don’t need to specify one unless you want to:

```
>>> import tensorflow as tf

>>> model.compile(optimizer=optimizer)
```

Set up a way to push your model to the Hub during training. This can be done by specifying where to push your model and tokenizer in the [PushToHubCallback](/docs/transformers/v4.34.0/en/main_classes/keras_callbacks#transformers.PushToHubCallback):

```
>>> from transformers.keras_callbacks import PushToHubCallback

>>> callback = PushToHubCallback(
...     output_dir="my_awesome_eli5_mlm_model",
...     tokenizer=tokenizer,
... )
```

Finally, you’re ready to start training your model! Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training and validation datasets, the number of epochs, and your callback to finetune the model:

```
>>> model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=[callback])
```

Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!

For a more in-depth example of how to finetune a model for masked language modeling, take a look at the corresponding [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb) or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb).

## Inference

Great, now that you’ve finetuned a model, you can use it for inference!

Come up with some text you’d like the model to fill in the blank with, and use the special `<mask>` token to indicate the blank:

```
>>> text = "The Milky Way is a <mask> galaxy."
```
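The `<mask>` string is the mask token for RoBERTa-style tokenizers; other tokenizers (for example BERT's `[MASK]`) use a different string. If you prefer to stay tokenizer-agnostic, a small sketch that builds the same prompt from `tokenizer.mask_token`, assuming the finetuned checkpoint used in the examples below:

```
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_eli5_mlm_model")
>>> text = f"The Milky Way is a {tokenizer.mask_token} galaxy."  # same string as above for RoBERTa-style tokenizers
```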
The simplest way to try out your finetuned model for inference is to use it in a [pipeline()](/docs/transformers/v4.34.0/en/main_classes/pipelines#transformers.pipeline). Instantiate a `pipeline` for fill-mask with your model, and pass your text to it. If you like, you can use the `top_k` parameter to specify how many predictions to return:

```
>>> from transformers import pipeline

>>> mask_filler = pipeline("fill-mask", "stevhliu/my_awesome_eli5_mlm_model")
>>> mask_filler(text, top_k=3)
[{'score': 0.5150994658470154,
  'token': 21300,
  'token_str': ' spiral',
  'sequence': 'The Milky Way is a spiral galaxy.'},
 {'score': 0.07087188959121704,
  'token': 2232,
  'token_str': ' massive',
  'sequence': 'The Milky Way is a massive galaxy.'},
 {'score': 0.06434620916843414,
  'token': 650,
  'token_str': ' small',
  'sequence': 'The Milky Way is a small galaxy.'}]
```

Tokenize the text and return the `input_ids` as PyTorch tensors. You’ll also need to specify the position of the `<mask>` token:

```
>>> import torch
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_eli5_mlm_model")
>>> inputs = tokenizer(text, return_tensors="pt")
>>> mask_token_index = torch.where(inputs["input_ids"] == tokenizer.mask_token_id)[1]
```

Pass your inputs to the model and return the `logits` of the masked token:

```
>>> from transformers import AutoModelForMaskedLM

>>> model = AutoModelForMaskedLM.from_pretrained("stevhliu/my_awesome_eli5_mlm_model")
>>> logits = model(**inputs).logits
>>> mask_token_logits = logits[0, mask_token_index, :]
```

Then return the three masked tokens with the highest probability and print them out:

```
>>> top_3_tokens = torch.topk(mask_token_logits, 3, dim=1).indices[0].tolist()

>>> for token in top_3_tokens:
...     print(text.replace(tokenizer.mask_token, tokenizer.decode([token])))
The Milky Way is a spiral galaxy.
The Milky Way is a massive galaxy.
The Milky Way is a small galaxy.
```

Tokenize the text and return the `input_ids` as TensorFlow tensors. You’ll also need to specify the position of the `<mask>` token:

```
>>> import tensorflow as tf
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_eli5_mlm_model")
>>> inputs = tokenizer(text, return_tensors="tf")
>>> mask_token_index = tf.where(inputs["input_ids"] == tokenizer.mask_token_id)[0, 1]
```

Pass your inputs to the model and return the `logits` of the masked token:

```
>>> from transformers import TFAutoModelForMaskedLM

>>> model = TFAutoModelForMaskedLM.from_pretrained("stevhliu/my_awesome_eli5_mlm_model")
>>> logits = model(**inputs).logits
>>> mask_token_logits = logits[0, mask_token_index, :]
```

Then return the three masked tokens with the highest probability and print them out:

```
>>> top_3_tokens = tf.math.top_k(mask_token_logits, 3).indices.numpy()

>>> for token in top_3_tokens:
...     print(text.replace(tokenizer.mask_token, tokenizer.decode([token])))
The Milky Way is a spiral galaxy.
The Milky Way is a massive galaxy.
The Milky Way is a small galaxy.
```
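The pipeline reports a score for each prediction, while the manual approach above only ranks tokens by their logits. If you also want comparable probabilities, a small sketch building on the PyTorch variables (`mask_token_logits`, `tokenizer`) defined above:

```
>>> # Convert the masked-position logits into probabilities and show the top 3 with their scores
>>> probs = torch.softmax(mask_token_logits, dim=-1)
>>> top_3 = torch.topk(probs, 3, dim=1)

>>> for score, token in zip(top_3.values[0].tolist(), top_3.indices[0].tolist()):
...     print(f"{tokenizer.decode([token]).strip()}: {score:.3f}")
```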
anatomy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_memory_anatomy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_memory_anatomy&quot;}]},{&quot;title&quot;:&quot;API&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Main Classes&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Agents and Tools&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/agent&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/agent&quot;},{&quot;title&quot;:&quot;Auto Classes&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/auto&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/auto&quot;},{&quot;title&quot;:&quot;Callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/callback&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/callback&quot;},{&quot;title&quot;:&quot;Configuration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/configuration&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/configuration&quot;},{&quot;title&quot;:&quot;Data Collator&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/data_collator&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/data_collator&quot;},{&quot;title&quot;:&quot;Keras callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/keras_callbacks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/keras_callbacks&quot;},{&quot;title&quot;:&quot;Logging&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/logging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/logging&quot;},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/model&quot;},{&quot;title&quot;:&quot;Text Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/text_generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/text_generation&quot;},{&quot;title&quot;:&quot;ONNX&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/onnx&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/onnx&quot;},{&quot;title&quot;:&quot;Optimization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/optimizer_schedules&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/optimizer_schedules&quot;},{&quot;title&quot;:&quot;Model 
outputs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/output&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/output&quot;},{&quot;title&quot;:&quot;Pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/pipelines&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/pipelines&quot;},{&quot;title&quot;:&quot;Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/processors&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/processors&quot;},{&quot;title&quot;:&quot;Quantization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/quantization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/quantization&quot;},{&quot;title&quot;:&quot;Tokenizer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/tokenizer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/tokenizer&quot;},{&quot;title&quot;:&quot;Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/trainer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/trainer&quot;},{&quot;title&quot;:&quot;DeepSpeed Integration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/deepspeed&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/deepspeed&quot;},{&quot;title&quot;:&quot;Feature Extractor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/feature_extractor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/feature_extractor&quot;},{&quot;title&quot;:&quot;Image Processor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/image_processor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/image_processor&quot;}]},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Text 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALBERT&quot;,&quot;id&quot;:&quot;model_doc/albert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/albert&quot;},{&quot;title&quot;:&quot;BART&quot;,&quot;id&quot;:&quot;model_doc/bart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bart&quot;},{&quot;title&quot;:&quot;BARThez&quot;,&quot;id&quot;:&quot;model_doc/barthez&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/barthez&quot;},{&quot;title&quot;:&quot;BARTpho&quot;,&quot;id&quot;:&quot;model_doc/bartpho&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bartpho&quot;},{&quot;title&quot;:&quot;BERT&quot;,&quot;id&quot;:&quot;model_doc/bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert&quot;},{&quot;title&quot;:&quot;BertGeneration&quot;,&quot;id&quot;:&quot;model_doc/bert-generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-generation&quot;},{&quot;title&quot;:&quot;BertJapanese&quot;,&quot;id&quot;:&quot;model_doc/bert-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-japanese&quot;},{&quot;title&quot;:&quot;Bertweet&quot;,&quot;id&quot;:&quot;model_doc/bertweet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bertweet&quot;},{&quot;title&quot;:&quot;BigBird&quot;,&quot;id&quot;:&quot;model_doc/big_bird&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/big_bird&quot;},{&quot;title&quot;:&quot;BigBirdPegasus&quot;,&quot;id&quot;:&quot;model_doc/bigbird_pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bigbird_pegasus&quot;},{&quot;title&quot;:&quot;BioGpt&quot;,&quot;id&quot;:&quot;model_doc/biogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/biogpt&quot;},{&quot;title&quot;:&quot;Blenderbot&quot;,&quot;id&quot;:&quot;model_doc/blenderbot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot&quot;},{&quot;title&quot;:&quot;Blenderbot 
Small&quot;,&quot;id&quot;:&quot;model_doc/blenderbot-small&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot-small&quot;},{&quot;title&quot;:&quot;BLOOM&quot;,&quot;id&quot;:&quot;model_doc/bloom&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bloom&quot;},{&quot;title&quot;:&quot;BORT&quot;,&quot;id&quot;:&quot;model_doc/bort&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bort&quot;},{&quot;title&quot;:&quot;ByT5&quot;,&quot;id&quot;:&quot;model_doc/byt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/byt5&quot;},{&quot;title&quot;:&quot;CamemBERT&quot;,&quot;id&quot;:&quot;model_doc/camembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/camembert&quot;},{&quot;title&quot;:&quot;CANINE&quot;,&quot;id&quot;:&quot;model_doc/canine&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/canine&quot;},{&quot;title&quot;:&quot;CodeGen&quot;,&quot;id&quot;:&quot;model_doc/codegen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/codegen&quot;},{&quot;title&quot;:&quot;CodeLlama&quot;,&quot;id&quot;:&quot;model_doc/code_llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/code_llama&quot;},{&quot;title&quot;:&quot;ConvBERT&quot;,&quot;id&quot;:&quot;model_doc/convbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convbert&quot;},{&quot;title&quot;:&quot;CPM&quot;,&quot;id&quot;:&quot;model_doc/cpm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpm&quot;},{&quot;title&quot;:&quot;CPMANT&quot;,&quot;id&quot;:&quot;model_doc/cpmant&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpmant&quot;},{&quot;title&quot;:&quot;CTRL&quot;,&quot;id&quot;:&quot;model_doc/ctrl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ctrl&quot;},{&quot;title&quot;:&quot;DeBERTa&quot;,&quot;id&quot;:&quot;model_doc/deberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta&quot;},{&quot;title&quot;:&quot;DeBERTa-v2&quot;,&quot;id&quot;:&quot;model_doc/deberta-v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta-v2&quot;},{&quot;title&quot;:&quot;DialoGPT&quot;,&quot;id&quot;:&quot;model_doc/dialogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dialogpt&quot;},{&quot;title&quot;:&quot;DistilBERT&quot;,&quot;id&quot;:&quot;model_doc/distilbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/distilbert&quot;},{&quot;title&quot;:&quot;DPR&quot;,&quot;id&quot;:&quot;model_doc/dpr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpr&quot;},{&quot;title&quot;:&quot;ELECTRA&quot;,&quot;id&quot;:&quot;model_doc/electra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/electra&quot;},{&quot;title&quot;:&quot;Encoder Decoder 
Models&quot;,&quot;id&quot;:&quot;model_doc/encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encoder-decoder&quot;},{&quot;title&quot;:&quot;ERNIE&quot;,&quot;id&quot;:&quot;model_doc/ernie&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie&quot;},{&quot;title&quot;:&quot;ErnieM&quot;,&quot;id&quot;:&quot;model_doc/ernie_m&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie_m&quot;},{&quot;title&quot;:&quot;ESM&quot;,&quot;id&quot;:&quot;model_doc/esm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/esm&quot;},{&quot;title&quot;:&quot;Falcon&quot;,&quot;id&quot;:&quot;model_doc/falcon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/falcon&quot;},{&quot;title&quot;:&quot;FLAN-T5&quot;,&quot;id&quot;:&quot;model_doc/flan-t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-t5&quot;},{&quot;title&quot;:&quot;FLAN-UL2&quot;,&quot;id&quot;:&quot;model_doc/flan-ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-ul2&quot;},{&quot;title&quot;:&quot;FlauBERT&quot;,&quot;id&quot;:&quot;model_doc/flaubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flaubert&quot;},{&quot;title&quot;:&quot;FNet&quot;,&quot;id&quot;:&quot;model_doc/fnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fnet&quot;},{&quot;title&quot;:&quot;FSMT&quot;,&quot;id&quot;:&quot;model_doc/fsmt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fsmt&quot;},{&quot;title&quot;:&quot;Funnel Transformer&quot;,&quot;id&quot;:&quot;model_doc/funnel&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/funnel&quot;},{&quot;title&quot;:&quot;GPT&quot;,&quot;id&quot;:&quot;model_doc/openai-gpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/openai-gpt&quot;},{&quot;title&quot;:&quot;GPT Neo&quot;,&quot;id&quot;:&quot;model_doc/gpt_neo&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neo&quot;},{&quot;title&quot;:&quot;GPT NeoX&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox&quot;},{&quot;title&quot;:&quot;GPT NeoX Japanese&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox_japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese&quot;},{&quot;title&quot;:&quot;GPT-J&quot;,&quot;id&quot;:&quot;model_doc/gptj&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptj&quot;},{&quot;title&quot;:&quot;GPT2&quot;,&quot;id&quot;:&quot;model_doc/gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt2&quot;},{&quot;title&quot;:&quot;GPTBigCode&quot;,&quot;id&quot;:&quot;model_doc/gpt_bigcode&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode&quot;},{&quot;title&quot;:&quot;GPTSAN 
Japanese&quot;,&quot;id&quot;:&quot;model_doc/gptsan-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese&quot;},{&quot;title&quot;:&quot;GPTSw3&quot;,&quot;id&quot;:&quot;model_doc/gpt-sw3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt-sw3&quot;},{&quot;title&quot;:&quot;HerBERT&quot;,&quot;id&quot;:&quot;model_doc/herbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/herbert&quot;},{&quot;title&quot;:&quot;I-BERT&quot;,&quot;id&quot;:&quot;model_doc/ibert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ibert&quot;},{&quot;title&quot;:&quot;Jukebox&quot;,&quot;id&quot;:&quot;model_doc/jukebox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/jukebox&quot;},{&quot;title&quot;:&quot;LED&quot;,&quot;id&quot;:&quot;model_doc/led&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/led&quot;},{&quot;title&quot;:&quot;LLaMA&quot;,&quot;id&quot;:&quot;model_doc/llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama&quot;},{&quot;title&quot;:&quot;Llama2&quot;,&quot;id&quot;:&quot;model_doc/llama2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama2&quot;},{&quot;title&quot;:&quot;Longformer&quot;,&quot;id&quot;:&quot;model_doc/longformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longformer&quot;},{&quot;title&quot;:&quot;LongT5&quot;,&quot;id&quot;:&quot;model_doc/longt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longt5&quot;},{&quot;title&quot;:&quot;LUKE&quot;,&quot;id&quot;:&quot;model_doc/luke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/luke&quot;},{&quot;title&quot;:&quot;M2M100&quot;,&quot;id&quot;:&quot;model_doc/m2m_100&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/m2m_100&quot;},{&quot;title&quot;:&quot;MarianMT&quot;,&quot;id&quot;:&quot;model_doc/marian&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/marian&quot;},{&quot;title&quot;:&quot;MarkupLM&quot;,&quot;id&quot;:&quot;model_doc/markuplm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/markuplm&quot;},{&quot;title&quot;:&quot;MBart and 
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot;PLBart&quot;,&quot;id&quot;
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/
model_doc/wavlm&quot;},{&quot;title&quot;:&quot;Whisper&quot;,&quot;id&quot;:&quot;model_doc/whisper&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/whisper&quot;},{&quot;title&quot;:&quot;XLS-R&quot;,&quot;id&quot;:&quot;model_doc/xls_r&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xls_r&quot;},{&quot;title&quot;:&quot;XLSR-Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/xlsr_wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2&quot;}]},{&quot;title&quot;:&quot;Multimodal models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALIGN&quot;,&quot;id&quot;:&quot;model_doc/align&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/align&quot;},{&quot;title&quot;:&quot;AltCLIP&quot;,&quot;id&quot;:&quot;model_doc/altclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/altclip&quot;},{&quot;title&quot;:&quot;BLIP&quot;,&quot;id&quot;:&quot;model_doc/blip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip&quot;},{&quot;title&quot;:&quot;BLIP-2&quot;,&quot;id&quot;:&quot;model_doc/blip-2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip-2&quot;},{&quot;title&quot;:&quot;BridgeTower&quot;,&quot;id&quot;:&quot;model_doc/bridgetower&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bridgetower&quot;},{&quot;title&quot;:&quot;BROS&quot;,&quot;id&quot;:&quot;model_doc/bros&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bros&quot;},{&quot;title&quot;:&quot;Chinese-CLIP&quot;,&quot;id&quot;:&quot;model_doc/chinese_clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/chinese_clip&quot;},{&quot;title&quot;:&quot;CLIP&quot;,&quot;id&quot;:&quot;model_doc/clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clip&quot;},{&quot;title&quot;:&quot;CLIPSeg&quot;,&quot;id&quot;:&quot;model_doc/clipseg&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clipseg&quot;},{&quot;title&quot;:&quot;Data2Vec&quot;,&quot;id&quot;:&quot;model_doc/data2vec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/data2vec&quot;},{&quot;title&quot;:&quot;DePlot&quot;,&quot;id&quot;:&quot;model_doc/deplot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deplot&quot;},{&quot;title&quot;:&quot;Donut&quot;,&quot;id&quot;:&quot;model_doc/donut&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/donut&quot;},{&quot;title&quot;:&quot;FLAVA&quot;,&quot;id&quot;:&quot;model_doc/flava&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flava&quot;},{&quot;title&quot;:&quot;GIT&quot;,&quot;id&quot;:&quot;model_doc/git&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/git&quot;},{&quot;title&quot;:&quot;GroupViT&quot;,&quot;id&quot;:&quot;model_doc/groupvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/groupvit&quot;},{&quot;title&quot;:&quot;IDEFICS&quot;,&quot;id&quot;:&quot;model_doc/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/idefics&quot;},{&quot;title&quot;:&quot;InstructBLIP&quot;,&quot;id&quot;:&quot;model_doc/instructblip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/instructblip&quot;},{&quot;title&quot;:&quot;LayoutLM&quot;,&quot;id&quot;:&quot;model_doc/layoutlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlm&quot;},{&quot;title&quot;:&quot;LayoutLMV2&quot;,&quot;i
d&quot;:&quot;model_doc/layoutlmv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv2&quot;},{&quot;title&quot;:&quot;LayoutLMV3&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv3&quot;},{&quot;title&quot;:&quot;LayoutXLM&quot;,&quot;id&quot;:&quot;model_doc/layoutxlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutxlm&quot;},{&quot;title&quot;:&quot;LiLT&quot;,&quot;id&quot;:&quot;model_doc/lilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lilt&quot;},{&quot;title&quot;:&quot;LXMERT&quot;,&quot;id&quot;:&quot;model_doc/lxmert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lxmert&quot;},{&quot;title&quot;:&quot;MatCha&quot;,&quot;id&quot;:&quot;model_doc/matcha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/matcha&quot;},{&quot;title&quot;:&quot;MGP-STR&quot;,&quot;id&quot;:&quot;model_doc/mgp-str&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mgp-str&quot;},{&quot;title&quot;:&quot;Nougat&quot;,&quot;id&quot;:&quot;model_doc/nougat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nougat&quot;},{&quot;title&quot;:&quot;OneFormer&quot;,&quot;id&quot;:&quot;model_doc/oneformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/oneformer&quot;},{&quot;title&quot;:&quot;OWL-ViT&quot;,&quot;id&quot;:&quot;model_doc/owlvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/owlvit&quot;},{&quot;title&quot;:&quot;Perceiver&quot;,&quot;id&quot;:&quot;model_doc/perceiver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/perceiver&quot;},{&quot;title&quot;:&quot;Pix2Struct&quot;,&quot;id&quot;:&quot;model_doc/pix2struct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pix2struct&quot;},{&quot;title&quot;:&quot;Segment Anything&quot;,&quot;id&quot;:&quot;model_doc/sam&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sam&quot;},{&quot;title&quot;:&quot;Speech Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/speech-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder&quot;},{&quot;title&quot;:&quot;TAPAS&quot;,&quot;id&quot;:&quot;model_doc/tapas&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapas&quot;},{&quot;title&quot;:&quot;TrOCR&quot;,&quot;id&quot;:&quot;model_doc/trocr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trocr&quot;},{&quot;title&quot;:&quot;TVLT&quot;,&quot;id&quot;:&quot;model_doc/tvlt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tvlt&quot;},{&quot;title&quot;:&quot;ViLT&quot;,&quot;id&quot;:&quot;model_doc/vilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vilt&quot;},{&quot;title&quot;:&quot;Vision Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/vision-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder&quot;},{&quot;title&quot;:&quot;Vision Text Dual 
Encoder&quot;,&quot;id&quot;:&quot;model_doc/vision-text-dual-encoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder&quot;},{&quot;title&quot;:&quot;VisualBERT&quot;,&quot;id&quot;:&quot;model_doc/visual_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/visual_bert&quot;},{&quot;title&quot;:&quot;X-CLIP&quot;,&quot;id&quot;:&quot;model_doc/xclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xclip&quot;}]},{&quot;title&quot;:&quot;Reinforcement learning models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Decision Transformer&quot;,&quot;id&quot;:&quot;model_doc/decision_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/decision_transformer&quot;},{&quot;title&quot;:&quot;Trajectory Transformer&quot;,&quot;id&quot;:&quot;model_doc/trajectory_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer&quot;}]},{&quot;title&quot;:&quot;Time series models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Autoformer&quot;,&quot;id&quot;:&quot;model_doc/autoformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/autoformer&quot;},{&quot;title&quot;:&quot;Informer&quot;,&quot;id&quot;:&quot;model_doc/informer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/informer&quot;},{&quot;title&quot;:&quot;Time Series Transformer&quot;,&quot;id&quot;:&quot;model_doc/time_series_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/time_series_transformer&quot;}]},{&quot;title&quot;:&quot;Graph models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Graphormer&quot;,&quot;id&quot;:&quot;model_doc/graphormer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/graphormer&quot;}]}]},{&quot;title&quot;:&quot;Internal Helpers&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Custom Layers and Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/modeling_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/modeling_utils&quot;},{&quot;title&quot;:&quot;Utilities for pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/pipelines_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/pipelines_utils&quot;},{&quot;title&quot;:&quot;Utilities for Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/tokenization_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/tokenization_utils&quot;},{&quot;title&quot;:&quot;Utilities for Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/trainer_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/trainer_utils&quot;},{&quot;title&quot;:&quot;Utilities for Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/generation_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/generation_utils&quot;},{&quot;title&quot;:&quot;Utilities for Image Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/image_processing_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/image_processing_utils&quot;},{&quot;title&quot;:&quot;Utilities for Audio 
processing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/audio_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/audio_utils&quot;},{&quot;title&quot;:&quot;General Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/file_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/file_utils&quot;},{&quot;title&quot;:&quot;Utilities for Time Series&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/time_series_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/time_series_utils&quot;}]}]}],&quot;chapterId&quot;:&quot;tasks/masked_language_modeling&quot;,&quot;docType&quot;:&quot;docs&quot;,&quot;isLoggedIn&quot;:false,&quot;lang&quot;:&quot;en&quot;,&quot;langs&quot;:[&quot;de&quot;,&quot;en&quot;,&quot;es&quot;,&quot;fr&quot;,&quot;it&quot;,&quot;ko&quot;,&quot;pt&quot;,&quot;zh&quot;],&quot;library&quot;:&quot;transformers&quot;,&quot;theme&quot;:&quot;light&quot;,&quot;version&quot;:&quot;v4.34.0&quot;,&quot;versions&quot;:[{&quot;version&quot;:&quot;main&quot;},{&quot;version&quot;:&quot;v4.34.0&quot;},{&quot;version&quot;:&quot;v4.33.3&quot;},{&quot;version&quot;:&quot;v4.33.2&quot;},{&quot;version&quot;:&quot;v4.33.0&quot;},{&quot;version&quot;:&quot;v4.32.1&quot;},{&quot;version&quot;:&quot;v4.32.0&quot;},{&quot;version&quot;:&quot;v4.31.0&quot;},{&quot;version&quot;:&quot;v4.30.0&quot;},{&quot;version&quot;:&quot;v4.29.1&quot;},{&quot;version&quot;:&quot;v4.29.0&quot;},{&quot;version&quot;:&quot;v4.28.1&quot;},{&quot;version&quot;:&quot;v4.28.0&quot;},{&quot;version&quot;:&quot;v4.27.2&quot;},{&quot;version&quot;:&quot;v4.27.1&quot;},{&quot;version&quot;:&quot;v4.27.0&quot;},{&quot;version&quot;:&quot;v4.26.1&quot;},{&quot;version&quot;:&quot;v4.26.0&quot;},{&quot;version&quot;:&quot;v4.25.1&quot;},{&quot;version&quot;:&quot;v4.24.0&quot;},{&quot;version&quot;:&quot;v4.23.1&quot;},{&quot;version&quot;:&quot;v4.23.0&quot;},{&quot;version&quot;:&quot;v4.22.2&quot;},{&quot;version&quot;:&quot;v4.22.1&quot;},{&quot;version&quot;:&quot;v4.22.0&quot;},{&quot;version&quot;:&quot;v4.21.3&quot;},{&quot;version&quot;:&quot;v4.21.2&quot;},{&quot;version&quot;:&quot;v4.21.1&quot;},{&quot;version&quot;:&quot;v4.21.0&quot;},{&quot;version&quot;:&quot;v4.20.1&quot;},{&quot;version&quot;:&quot;v4.20.0&quot;},{&quot;version&quot;:&quot;v4.19.4&quot;},{&quot;version&quot;:&quot;v4.19.3&quot;},{&quot;version&quot;:&quot;v4.19.2&quot;},{&quot;version&quot;:&quot;v4.19.0&quot;},{&quot;version&quot;:&quot;v4.18.0&quot;},{&quot;version&quot;:&quot;v4.17.0&quot;},{&quot;version&quot;:&quot;v4.16.2&quot;},{&quot;version&quot;:&quot;v4.16.1&quot;},{&quot;version&quot;:&quot;v4.16.0&quot;},{&quot;version&quot;:&quot;v4.15.0&quot;},{&quot;version&quot;:&quot;v4.14.1&quot;},{&quot;version&quot;:&quot;v4.13.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.5&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.4&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.10.1&quot;},{&quot;sphinx&quot;:true,&quot;version&q
height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1vb8xp5">Masked language modeling</span></h1> <div class="flex space-x-1 absolute z-10 right-0 top-0"> <div class="relative colab-dropdown "><button class=" " type="button"><img alt="Open In Colab" class="!m-0" src="https://colab.research.google.com/assets/colab-badge.svg"></button> </div> <div class="relative colab-dropdown "><button class=" " type="button"><img alt="Open In Studio Lab" class="!m-0" src="https://studiolab.sagemaker.aws/studiolab.svg"></button> </div></div> <iframe class="w-full xl:w-4/6 h-80" src="https://www.youtube-nocookie.com/embed/mqElG5QJWUg" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe> <p data-svelte-h="svelte-ed5ap0">Masked language modeling predicts a masked token in a sequence, and the model can attend to tokens bidirectionally. This means the model has full access to the tokens on the left and right. Masked language modeling is great for tasks that require a good contextual understanding of an entire sequence. BERT is an example of a masked language model.</p> <p data-svelte-h="svelte-1aff4p7">This guide will show you how to:</p> <ol data-svelte-h="svelte-1iq9zup"><li>Finetune <a href="https://huggingface.co/distilroberta-base" rel="nofollow">DistilRoBERTa</a> on the <a href="https://www.reddit.com/r/askscience/" rel="nofollow">r/askscience</a> subset of the <a href="https://huggingface.co/datasets/eli5" rel="nofollow">ELI5</a> dataset.</li> <li>Use your finetuned model for inference.</li></ol> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400">You can finetune other architectures for masked language modeling following the same steps in this guide. 
Choose one of the following architectures:

[ALBERT](../model_doc/albert), [BART](../model_doc/bart), [BERT](../model_doc/bert), [BigBird](../model_doc/big_bird), [CamemBERT](../model_doc/camembert), [ConvBERT](../model_doc/convbert), [Data2VecText](../model_doc/data2vec-text), [DeBERTa](../model_doc/deberta), [DeBERTa-v2](../model_doc/deberta-v2), [DistilBERT](../model_doc/distilbert), [ELECTRA](../model_doc/electra), [ERNIE](../model_doc/ernie), [ESM](../model_doc/esm), [FlauBERT](../model_doc/flaubert), [FNet](../model_doc/fnet), [Funnel Transformer](../model_doc/funnel), [I-BERT](../model_doc/ibert), [LayoutLM](../model_doc/layoutlm), [Longformer](../model_doc/longformer), [LUKE](../model_doc/luke), [mBART](../model_doc/mbart), [MEGA](../model_doc/mega), [Megatron-BERT](../model_doc/megatron-bert), [MobileBERT](../model_doc/mobilebert), [MPNet](../model_doc/mpnet), [MRA](../model_doc/mra), [MVP](../model_doc/mvp), [Nezha](../model_doc/nezha), [Nyströmformer](../model_doc/nystromformer), [Perceiver](../model_doc/perceiver), [QDQBert](../model_doc/qdqbert), [Reformer](../model_doc/reformer), [RemBERT](../model_doc/rembert), [RoBERTa](../model_doc/roberta), [RoBERTa-PreLayerNorm](../model_doc/roberta-prelayernorm), [RoCBert](../model_doc/roc_bert), [RoFormer](../model_doc/roformer), [SqueezeBERT](../model_doc/squeezebert), [TAPAS](../model_doc/tapas), [Wav2Vec2](../model_doc/wav2vec2), [XLM](../model_doc/xlm), [XLM-RoBERTa](../model_doc/xlm-roberta), [XLM-RoBERTa-XL](../model_doc/xlm-roberta-xl), [X-MOD](../model_doc/xmod), [YOSO](../model_doc/yoso)

Before you begin, make sure you have all the necessary libraries installed:

```bash
pip install transformers datasets evaluate
```
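As an optional sanity check (not part of the original guide), you can confirm that the three libraries import correctly before moving on:

```py
>>> import transformers, datasets, evaluate

>>> print(transformers.__version__, datasets.__version__, evaluate.__version__)
```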
We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:

```py
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```
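If you are not working in a notebook, a minimal alternative sketch (assuming you have a Hugging Face access token at hand) is the `login` helper from `huggingface_hub`:

```py
>>> from huggingface_hub import login

>>> login()  # prompts for an access token in the terminal
```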
This'll give you a chance to experiment and make sure everything works before spending more time training on the full dataset.

```py
>>> from datasets import load_dataset

>>> eli5 = load_dataset("eli5", split="train_asks[:5000]")
```

Split the dataset's `train_asks` split into a train and test set with the [train_test_split](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.train_test_split) method:

```py
>>> eli5 = eli5.train_test_split(test_size=0.2)
```

Then take a look at an example:

```py
>>> eli5["train"][0]
{'answers': {'a_id': ['c3d1aib', 'c3d4lya'],
  'score': [6, 3],
  'text': ["The velocity needed to remain in orbit is equal to the square root of Newton's constant times the mass of earth divided by the distance from the center of the earth. I don't know the altitude of that specific mission, but they're usually around 300 km. That means he's going 7-8 km/s.\n\nIn space there are no other forces acting on either the shuttle or the guy, so they stay in the same position relative to each other. If he were to become unable to return to the ship, he would presumably run out of oxygen, or slowly fall into the atmosphere and burn up.",
   "Hope you don't mind me asking another question, but why aren't there any stars visible in this photo?"]},
 'answers_urls': {'url': []},
 'document': '',
 'q_id': 'nyxfp',
 'selftext': '_URL_0_\n\nThis was on the front page earlier and I have a few questions about it. Is it possible to calculate how fast the astronaut would be orbiting the earth? Also how does he stay close to the shuttle so that he can return safely, i.e is he orbiting at the same speed and can therefore stay next to it? And finally if his propulsion system failed, would he eventually re-enter the atmosphere and presumably die?',
 'selftext_urls': {'url': ['http://apod.nasa.gov/apod/image/1201/freeflyer_nasa_3000.jpg']},
 'subreddit': 'askscience',
 'title': 'Few questions about this space walk photograph.',
 'title_urls': {'url': []}}
```

While this may look like a lot, you're only really interested in the `text` field. What's cool about language modeling tasks is you don't need labels (also known as an unsupervised task) because the next word *is* the label.
## Preprocess

Watch the video: https://www.youtube-nocookie.com/embed/8PmhEIXhBvI

For masked language modeling, the next step is to load a DistilRoBERTa tokenizer to process the `text` subfield:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("distilroberta-base")
```

You'll notice from the example above, the `text` field is actually nested inside `answers`.
This means you'll need to extract the `text` subfield from its nested structure with the [`flatten`](https://huggingface.co/docs/datasets/process.html#flatten) method:

```py
>>> eli5 = eli5.flatten()
>>> eli5["train"][0]
{'answers.a_id': ['c3d1aib', 'c3d4lya'],
 'answers.score': [6, 3],
 'answers.text': ["The velocity needed to remain in orbit is equal to the square root of Newton's constant times the mass of earth divided by the distance from the center of the earth. I don't know the altitude of that specific mission, but they're usually around 300 km. That means he's going 7-8 km/s.\n\nIn space there are no other forces acting on either the shuttle or the guy, so they stay in the same position relative to each other. If he were to become unable to return to the ship, he would presumably run out of oxygen, or slowly fall into the atmosphere and burn up.",
  "Hope you don't mind me asking another question, but why aren't there any stars visible in this photo?"],
 'answers_urls.url': [],
 'document': '',
 'q_id': 'nyxfp',
 'selftext': '_URL_0_\n\nThis was on the front page earlier and I have a few questions about it. Is it possible to calculate how fast the astronaut would be orbiting the earth? Also how does he stay close to the shuttle so that he can return safely, i.e is he orbiting at the same speed and can therefore stay next to it? And finally if his propulsion system failed, would he eventually re-enter the atmosphere and presumably die?',
 'selftext_urls.url': ['http://apod.nasa.gov/apod/image/1201/freeflyer_nasa_3000.jpg'],
 'subreddit': 'askscience',
 'title': 'Few questions about this space walk photograph.',
 'title_urls.url': []}
```

Each subfield is now a separate column as indicated by the `answers` prefix, and the `text` field is now a list. Instead of tokenizing each sentence separately, convert the list to a string so you can jointly tokenize them.

Here is a first preprocessing function to join the list of strings for each example and tokenize the result:

```py
>>> def preprocess_function(examples):
...     return tokenizer([" ".join(x) for x in examples["answers.text"]])
```

To apply this preprocessing function over the entire dataset, use the 🤗 Datasets [map](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.map) method. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once, and increasing the number of processes with `num_proc`.
Remove any columns you don't need:

```py
>>> tokenized_eli5 = eli5.map(
...     preprocess_function,
...     batched=True,
...     num_proc=4,
...     remove_columns=eli5["train"].column_names,
... )
```

This dataset contains the token sequences, but some of these are longer than the maximum input length for the model.

You can now use a second preprocessing function to

- concatenate all the sequences
- split the concatenated sequences into shorter chunks defined by `block_size`, which should be both shorter than the maximum input length and short enough for your GPU RAM.

```py
>>> block_size = 128


>>> def group_texts(examples):
...     # Concatenate all texts.
...     concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
...     total_length = len(concatenated_examples[list(examples.keys())[0]])
...     # We drop the small remainder, we could add padding if the model supported it instead of this drop, you can
...     # customize this part to your needs.
...     if total_length >= block_size:
...         total_length = (total_length // block_size) * block_size
...     # Split by chunks of block_size.
...     result = {
...         k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
...         for k, t in concatenated_examples.items()
...     }
...     return result
```

Apply the `group_texts` function over the entire dataset:

```py
>>> lm_dataset = tokenized_eli5.map(group_texts, batched=True, num_proc=4)
```
Now create a batch of examples using [DataCollatorForLanguageModeling](/docs/transformers/v4.34.0/en/main_classes/data_collator#transformers.DataCollatorForLanguageModeling). It's more efficient to *dynamically pad* the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length.

**Pytorch**

Use the end-of-sequence token as the padding token and specify `mlm_probability` to randomly mask tokens each time you iterate over the data:

```py
>>> from transformers import DataCollatorForLanguageModeling

>>> tokenizer.pad_token = tokenizer.eos_token
>>> data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
```
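If you are curious what the collator actually returns, a quick optional check (not part of the original guide) is to collate a couple of processed examples and look at the dynamically padded, randomly masked batch:

```py
>>> # Optional: collate two examples and inspect the batch the Trainer will receive.
>>> batch = data_collator([lm_dataset["train"][i] for i in range(2)])
>>> print(batch["input_ids"].shape)  # padded to the longest example in this batch
>>> print(batch["labels"][0][:20])   # -100 everywhere except at the randomly masked positions
```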
class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> DataCollatorForLanguageModeling <span class="hljs-meta">&gt;&gt;&gt; </span>data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=<span class="hljs-number">0.15</span>, return_tensors=<span class="hljs-string">"tf"</span>)</pre></div></div></div> </div> <h2 class="relative group"><a id="train" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#train"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-5arm0l">Train</span></h2> <div class="space-y-10 py-6 2xl:py-8 2xl:-mx-4"><div class="border border-gray-200 rounded-xl px-4 relative"><div class="flex h-[22px] mt-[-12.5px] justify-between leading-none"><div class="flex px-1 items-center space-x-1 bg-white dark:bg-gray-950"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><defs><clipPath id="a"><rect x="3.05" y="0.5" width="25.73" height="31" fill="none"></rect></clipPath></defs><g clip-path="url(#a)"><path d="M24.94,9.51a12.81,12.81,0,0,1,0,18.16,12.68,12.68,0,0,1-18,0,12.81,12.81,0,0,1,0-18.16l9-9V5l-.84.83-6,6a9.58,9.58,0,1,0,13.55,0ZM20.44,9a1.68,1.68,0,1,1,1.67-1.67A1.68,1.68,0,0,1,20.44,9Z" fill="#ee4c2c"></path></g></svg> <span>Pytorch</span></div> <div class="cursor-pointer flex items-center justify-center space-x-1 
text-sm px-2 bg-white dark:bg-gray-950 hover:underline leading-none"><svg class="" width="0.9em" height="0.9em" viewBox="0 0 10 9" fill="currentColor" xmlns="http://www.w3.org/2000/svg"><path d="M1.39125 1.9725L0.0883333 0.669997L0.677917 0.0804138L8.9275 8.33041L8.33792 8.91958L6.95875 7.54041C6.22592 8.00523 5.37572 8.25138 4.50792 8.25C2.26125 8.25 0.392083 6.63333 0 4.5C0.179179 3.52946 0.667345 2.64287 1.39167 1.9725H1.39125ZM5.65667 6.23833L5.04667 5.62833C4.81335 5.73996 4.55116 5.77647 4.29622 5.73282C4.04129 5.68918 3.80617 5.56752 3.62328 5.38463C3.44039 5.20175 3.31874 4.96663 3.27509 4.71169C3.23144 4.45676 3.26795 4.19456 3.37958 3.96125L2.76958 3.35125C2.50447 3.75187 2.38595 4.2318 2.4341 4.70978C2.48225 5.18777 2.6941 5.63442 3.0338 5.97411C3.37349 6.31381 3.82015 6.52567 4.29813 6.57382C4.77611 6.62197 5.25605 6.50345 5.65667 6.23833ZM2.83042 1.06666C3.35 0.862497 3.91625 0.749997 4.50792 0.749997C6.75458 0.749997 8.62375 2.36666 9.01583 4.5C8.88816 5.19404 8.60119 5.84899 8.1775 6.41333L6.56917 4.805C6.61694 4.48317 6.58868 4.15463 6.48664 3.84569C6.3846 3.53675 6.21162 3.256 5.98156 3.02594C5.7515 2.79588 5.47075 2.6229 5.16181 2.52086C4.85287 2.41882 4.52433 2.39056 4.2025 2.43833L2.83042 1.06708V1.06666Z" fill="currentColor"></path></svg> <span>Hide Pytorch content</span></div></div> <div class="framework-content"><div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-1ufp0ay">If you aren’t familiar with finetuning a model with the <a href="/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer">Trainer</a>, take a look at the basic tutorial <a href="../training#train-with-pytorch-trainer">here</a>!</p></div> <p data-svelte-h="svelte-1vthcgg">You’re ready to start training your model now! 
Load DistilRoBERTa with [AutoModelForMaskedLM](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoModelForMaskedLM):

```py
>>> from transformers import AutoModelForMaskedLM

>>> model = AutoModelForMaskedLM.from_pretrained("distilroberta-base")
```

At this point, only three steps remain:

1. Define your training hyperparameters in [TrainingArguments](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.TrainingArguments). The only required parameter is `output_dir` which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model).
2. Pass the training arguments to [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) along with the model, datasets, and data collator.
3. Call [train()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.train) to finetune your model.

```py
>>> from transformers import TrainingArguments, Trainer

>>> training_args = TrainingArguments(
...     output_dir="my_awesome_eli5_mlm_model",
...     evaluation_strategy="epoch",
...     learning_rate=2e-5,
...     num_train_epochs=3,
...     weight_decay=0.01,
...     push_to_hub=True,
... )

>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=lm_dataset["train"],
...     eval_dataset=lm_dataset["test"],
...     data_collator=data_collator,
... )

>>> trainer.train()
```

Once training is completed, use the [evaluate()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.evaluate) method to evaluate your model and get its perplexity:

```py
>>> import math

>>> eval_results = trainer.evaluate()
>>> print(f"Perplexity: {math.exp(eval_results['eval_loss']):.2f}")
Perplexity: 8.76
```

Then share your model to the Hub with the [push_to_hub()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.push_to_hub) method so everyone can use your model:

```py
>>> trainer.push_to_hub()
```
px-4 relative"><div class="flex h-[22px] mt-[-12.5px] justify-between leading-none"><div class="flex px-1 items-center space-x-1 bg-white dark:bg-gray-950"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="0.94em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 274"><path d="M145.726 42.065v42.07l72.861 42.07v-42.07l-72.86-42.07zM0 84.135v42.07l36.43 21.03V105.17L0 84.135zm109.291 21.035l-36.43 21.034v126.2l36.43 21.035v-84.135l36.435 21.035v-42.07l-36.435-21.034V105.17z" fill="#E55B2D"></path><path d="M145.726 42.065L36.43 105.17v42.065l72.861-42.065v42.065l36.435-21.03v-84.14zM255.022 63.1l-36.435 21.035v42.07l36.435-21.035V63.1zm-72.865 84.135l-36.43 21.035v42.07l36.43-21.036v-42.07zm-36.43 63.104l-36.436-21.035v84.135l36.435-21.035V210.34z" fill="#ED8E24"></path><path d="M145.726 0L0 84.135l36.43 21.035l109.296-63.105l72.861 42.07L255.022 63.1L145.726 0zm0 126.204l-36.435 21.03l36.435 21.036l36.43-21.035l-36.43-21.03z" fill="#F8BF3C"></path></svg> <span>TensorFlow</span></div> <div class="cursor-pointer flex items-center justify-center space-x-1 text-sm px-2 bg-white dark:bg-gray-950 hover:underline leading-none"><svg class="" width="0.9em" height="0.9em" viewBox="0 0 10 9" fill="currentColor" xmlns="http://www.w3.org/2000/svg"><path d="M1.39125 1.9725L0.0883333 0.669997L0.677917 0.0804138L8.9275 8.33041L8.33792 8.91958L6.95875 7.54041C6.22592 8.00523 5.37572 8.25138 4.50792 8.25C2.26125 8.25 0.392083 6.63333 0 4.5C0.179179 3.52946 0.667345 2.64287 1.39167 1.9725H1.39125ZM5.65667 6.23833L5.04667 5.62833C4.81335 5.73996 4.55116 5.77647 4.29622 5.73282C4.04129 5.68918 3.80617 5.56752 3.62328 5.38463C3.44039 5.20175 3.31874 4.96663 3.27509 4.71169C3.23144 4.45676 3.26795 4.19456 3.37958 3.96125L2.76958 3.35125C2.50447 3.75187 2.38595 4.2318 2.4341 4.70978C2.48225 5.18777 2.6941 5.63442 3.0338 5.97411C3.37349 6.31381 3.82015 6.52567 4.29813 6.57382C4.77611 6.62197 5.25605 6.50345 5.65667 6.23833ZM2.83042 1.06666C3.35 0.862497 3.91625 0.749997 4.50792 0.749997C6.75458 0.749997 8.62375 2.36666 9.01583 4.5C8.88816 5.19404 8.60119 5.84899 8.1775 6.41333L6.56917 4.805C6.61694 4.48317 6.58868 4.15463 6.48664 3.84569C6.3846 3.53675 6.21162 3.256 5.98156 3.02594C5.7515 2.79588 5.47075 2.6229 5.16181 2.52086C4.85287 2.41882 4.52433 2.39056 4.2025 2.43833L2.83042 1.06708V1.06666Z" fill="currentColor"></path></svg> <span>Hide TensorFlow content</span></div></div> <div class="framework-content"><div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-1rd4nl8">If you aren’t familiar with finetuning a model with Keras, take a look at the basic tutorial <a href="../training#train-a-tensorflow-model-with-keras">here</a>!</p></div> To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters: <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" 
preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> create_optimizer, AdamWeightDecay <span class="hljs-meta">&gt;&gt;&gt; </span>optimizer = AdamWeightDecay(learning_rate=<span class="hljs-number">2e-5</span>, weight_decay_rate=<span class="hljs-number">0.01</span>)</pre></div> <p data-svelte-h="svelte-17rpkxm">Then you can load DistilRoBERTa with <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.TFAutoModelForMaskedLM">TFAutoModelForMaskedLM</a>:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> TFAutoModelForMaskedLM <span class="hljs-meta">&gt;&gt;&gt; </span>model = TFAutoModelForMaskedLM.from_pretrained(<span class="hljs-string">"distilroberta-base"</span>)</pre></div> <p data-svelte-h="svelte-qmwuyd">Convert your datasets to the <code>tf.data.Dataset</code> format with <a href="/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel.prepare_tf_dataset">prepare_tf_dataset()</a>:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path 
d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>tf_train_set = model.prepare_tf_dataset( <span class="hljs-meta">... </span> lm_dataset[<span class="hljs-string">"train"</span>], <span class="hljs-meta">... </span> shuffle=<span class="hljs-literal">True</span>, <span class="hljs-meta">... </span> batch_size=<span class="hljs-number">16</span>, <span class="hljs-meta">... </span> collate_fn=data_collator, <span class="hljs-meta">... </span>) <span class="hljs-meta">&gt;&gt;&gt; </span>tf_test_set = model.prepare_tf_dataset( <span class="hljs-meta">... </span> lm_dataset[<span class="hljs-string">"test"</span>], <span class="hljs-meta">... </span> shuffle=<span class="hljs-literal">False</span>, <span class="hljs-meta">... </span> batch_size=<span class="hljs-number">16</span>, <span class="hljs-meta">... </span> collate_fn=data_collator, <span class="hljs-meta">... </span>)</pre></div> <p data-svelte-h="svelte-17cxx5e">Configure the model for training with <a href="https://keras.io/api/models/model_training_apis/#compile-method" rel="nofollow"><code>compile</code></a>. Note that Transformers models all have a default task-relevant loss function, so you don’t need to specify one unless you want to:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> tensorflow <span class="hljs-keyword">as</span> tf <span class="hljs-meta">&gt;&gt;&gt; </span>model.<span class="hljs-built_in">compile</span>(optimizer=optimizer) <span class="hljs-comment"># No loss argument!</span></pre></div> <p data-svelte-h="svelte-ufj5fr">This can be done by specifying where to push your model and tokenizer in the <a 
href="/docs/transformers/v4.34.0/en/main_classes/keras_callbacks#transformers.PushToHubCallback">PushToHubCallback</a>:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers.keras_callbacks <span class="hljs-keyword">import</span> PushToHubCallback <span class="hljs-meta">&gt;&gt;&gt; </span>callback = PushToHubCallback( <span class="hljs-meta">... </span> output_dir=<span class="hljs-string">"my_awesome_eli5_mlm_model"</span>, <span class="hljs-meta">... </span> tokenizer=tokenizer, <span class="hljs-meta">... </span>)</pre></div> <p data-svelte-h="svelte-1pfsro2">Finally, you’re ready to start training your model! 
Call <a href="https://keras.io/api/models/model_training_apis/#fit-method" rel="nofollow"><code>fit</code></a> with your training and validation datasets, the number of epochs, and your callback to finetune the model:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=<span class="hljs-number">3</span>, callbacks=[callback])</pre></div> <p data-svelte-h="svelte-2s71om">Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!</p></div></div> </div> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-1c4g6cv">For a more in-depth example of how to finetune a model for masked language modeling, take a look at the corresponding <a href="https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb" rel="nofollow">PyTorch notebook</a> or <a href="https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb" rel="nofollow">TensorFlow notebook</a>.</p></div> <h2 class="relative group"><a id="inference" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#inference"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-199uz7g">Inference</span></h2> <p data-svelte-h="svelte-633ppb">Great, now 
that you’ve finetuned a model, you can use it for inference!</p> <p data-svelte-h="svelte-9jago5">Come up with some text you’d like the model to fill in the blank with, and use the special <code>&lt;mask&gt;</code> token to indicate the blank:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>text = <span class="hljs-string">"The Milky Way is a &lt;mask&gt; galaxy."</span></pre></div> <p data-svelte-h="svelte-5vt6cp">The simplest way to try out your finetuned model for inference is to use it in a <a href="/docs/transformers/v4.34.0/en/main_classes/pipelines#transformers.pipeline">pipeline()</a>. Instantiate a <code>pipeline</code> for fill-mask with your model, and pass your text to it. 
If you like, you can use the <code>top_k</code> parameter to specify how many predictions to return:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> pipeline <span class="hljs-meta">&gt;&gt;&gt; </span>mask_filler = pipeline(<span class="hljs-string">"fill-mask"</span>, <span class="hljs-string">"stevhliu/my_awesome_eli5_mlm_model"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>mask_filler(text, top_k=<span class="hljs-number">3</span>) [{<span class="hljs-string">'score'</span>: <span class="hljs-number">0.5150994658470154</span>, <span class="hljs-string">'token'</span>: <span class="hljs-number">21300</span>, <span class="hljs-string">'token_str'</span>: <span class="hljs-string">' spiral'</span>, <span class="hljs-string">'sequence'</span>: <span class="hljs-string">'The Milky Way is a spiral galaxy.'</span>}, {<span class="hljs-string">'score'</span>: <span class="hljs-number">0.07087188959121704</span>, <span class="hljs-string">'token'</span>: <span class="hljs-number">2232</span>, <span class="hljs-string">'token_str'</span>: <span class="hljs-string">' massive'</span>, <span class="hljs-string">'sequence'</span>: <span class="hljs-string">'The Milky Way is a massive galaxy.'</span>}, {<span class="hljs-string">'score'</span>: <span class="hljs-number">0.06434620916843414</span>, <span class="hljs-string">'token'</span>: <span class="hljs-number">650</span>, <span class="hljs-string">'token_str'</span>: <span class="hljs-string">' small'</span>, <span class="hljs-string">'sequence'</span>: <span class="hljs-string">'The Milky Way is a small galaxy.'</span>}]</pre></div> <div class="space-y-10 py-6 2xl:py-8 2xl:-mx-4"><div class="border border-gray-200 rounded-xl px-4 relative"><div class="flex h-[22px] mt-[-12.5px] justify-between leading-none"><div class="flex px-1 items-center space-x-1 bg-white dark:bg-gray-950"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><defs><clipPath id="a"><rect x="3.05" y="0.5" width="25.73" height="31" fill="none"></rect></clipPath></defs><g clip-path="url(#a)"><path 
d="M24.94,9.51a12.81,12.81,0,0,1,0,18.16,12.68,12.68,0,0,1-18,0,12.81,12.81,0,0,1,0-18.16l9-9V5l-.84.83-6,6a9.58,9.58,0,1,0,13.55,0ZM20.44,9a1.68,1.68,0,1,1,1.67-1.67A1.68,1.68,0,0,1,20.44,9Z" fill="#ee4c2c"></path></g></svg> <span>Pytorch</span></div> <div class="cursor-pointer flex items-center justify-center space-x-1 text-sm px-2 bg-white dark:bg-gray-950 hover:underline leading-none"><svg class="" width="0.9em" height="0.9em" viewBox="0 0 10 9" fill="currentColor" xmlns="http://www.w3.org/2000/svg"><path d="M1.39125 1.9725L0.0883333 0.669997L0.677917 0.0804138L8.9275 8.33041L8.33792 8.91958L6.95875 7.54041C6.22592 8.00523 5.37572 8.25138 4.50792 8.25C2.26125 8.25 0.392083 6.63333 0 4.5C0.179179 3.52946 0.667345 2.64287 1.39167 1.9725H1.39125ZM5.65667 6.23833L5.04667 5.62833C4.81335 5.73996 4.55116 5.77647 4.29622 5.73282C4.04129 5.68918 3.80617 5.56752 3.62328 5.38463C3.44039 5.20175 3.31874 4.96663 3.27509 4.71169C3.23144 4.45676 3.26795 4.19456 3.37958 3.96125L2.76958 3.35125C2.50447 3.75187 2.38595 4.2318 2.4341 4.70978C2.48225 5.18777 2.6941 5.63442 3.0338 5.97411C3.37349 6.31381 3.82015 6.52567 4.29813 6.57382C4.77611 6.62197 5.25605 6.50345 5.65667 6.23833ZM2.83042 1.06666C3.35 0.862497 3.91625 0.749997 4.50792 0.749997C6.75458 0.749997 8.62375 2.36666 9.01583 4.5C8.88816 5.19404 8.60119 5.84899 8.1775 6.41333L6.56917 4.805C6.61694 4.48317 6.58868 4.15463 6.48664 3.84569C6.3846 3.53675 6.21162 3.256 5.98156 3.02594C5.7515 2.79588 5.47075 2.6229 5.16181 2.52086C4.85287 2.41882 4.52433 2.39056 4.2025 2.43833L2.83042 1.06708V1.06666Z" fill="currentColor"></path></svg> <span>Hide Pytorch content</span></div></div> <div class="framework-content"><p data-svelte-h="svelte-1rt88cg">Tokenize the text and return the <code>input_ids</code> as PyTorch tensors. 
You’ll also need to specify the position of the <code>&lt;mask&gt;</code> token:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"stevhliu/my_awesome_eli5_mlm_model"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(text, return_tensors=<span class="hljs-string">"pt"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>mask_token_index = torch.where(inputs[<span class="hljs-string">"input_ids"</span>] == tokenizer.mask_token_id)[<span class="hljs-number">1</span>]</pre></div> <p data-svelte-h="svelte-1abk23t">Pass your inputs to the model and return the <code>logits</code> of the masked token:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoModelForMaskedLM <span class="hljs-meta">&gt;&gt;&gt; </span>model = AutoModelForMaskedLM.from_pretrained(<span class="hljs-string">"stevhliu/my_awesome_eli5_mlm_model"</span>) <span 
class="hljs-meta">&gt;&gt;&gt; </span>logits = model(**inputs).logits <span class="hljs-meta">&gt;&gt;&gt; </span>mask_token_logits = logits[<span class="hljs-number">0</span>, mask_token_index, :]</pre></div> <p data-svelte-h="svelte-jux2mn">Then return the three masked tokens with the highest probability and print them out:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>top_3_tokens = torch.topk(mask_token_logits, <span class="hljs-number">3</span>, dim=<span class="hljs-number">1</span>).indices[<span class="hljs-number">0</span>].tolist() <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">for</span> token <span class="hljs-keyword">in</span> top_3_tokens: <span class="hljs-meta">... </span> <span class="hljs-built_in">print</span>(text.replace(tokenizer.mask_token, tokenizer.decode([token]))) The Milky Way <span class="hljs-keyword">is</span> a spiral galaxy. The Milky Way <span class="hljs-keyword">is</span> a massive galaxy. 
The Milky Way <span class="hljs-keyword">is</span> a small galaxy.</pre></div></div></div> <div class="border border-gray-200 rounded-xl px-4 relative"><div class="flex h-[22px] mt-[-12.5px] justify-between leading-none"><div class="flex px-1 items-center space-x-1 bg-white dark:bg-gray-950"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="0.94em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 274"><path d="M145.726 42.065v42.07l72.861 42.07v-42.07l-72.86-42.07zM0 84.135v42.07l36.43 21.03V105.17L0 84.135zm109.291 21.035l-36.43 21.034v126.2l36.43 21.035v-84.135l36.435 21.035v-42.07l-36.435-21.034V105.17z" fill="#E55B2D"></path><path d="M145.726 42.065L36.43 105.17v42.065l72.861-42.065v42.065l36.435-21.03v-84.14zM255.022 63.1l-36.435 21.035v42.07l36.435-21.035V63.1zm-72.865 84.135l-36.43 21.035v42.07l36.43-21.036v-42.07zm-36.43 63.104l-36.436-21.035v84.135l36.435-21.035V210.34z" fill="#ED8E24"></path><path d="M145.726 0L0 84.135l36.43 21.035l109.296-63.105l72.861 42.07L255.022 63.1L145.726 0zm0 126.204l-36.435 21.03l36.435 21.036l36.43-21.035l-36.43-21.03z" fill="#F8BF3C"></path></svg> <span>TensorFlow</span></div> <div class="cursor-pointer flex items-center justify-center space-x-1 text-sm px-2 bg-white dark:bg-gray-950 hover:underline leading-none"><svg class="" width="0.9em" height="0.9em" viewBox="0 0 10 9" fill="currentColor" xmlns="http://www.w3.org/2000/svg"><path d="M1.39125 1.9725L0.0883333 0.669997L0.677917 0.0804138L8.9275 8.33041L8.33792 8.91958L6.95875 7.54041C6.22592 8.00523 5.37572 8.25138 4.50792 8.25C2.26125 8.25 0.392083 6.63333 0 4.5C0.179179 3.52946 0.667345 2.64287 1.39167 1.9725H1.39125ZM5.65667 6.23833L5.04667 5.62833C4.81335 5.73996 4.55116 5.77647 4.29622 5.73282C4.04129 5.68918 3.80617 5.56752 3.62328 5.38463C3.44039 5.20175 3.31874 4.96663 3.27509 4.71169C3.23144 4.45676 3.26795 4.19456 3.37958 3.96125L2.76958 3.35125C2.50447 3.75187 2.38595 4.2318 2.4341 4.70978C2.48225 5.18777 2.6941 5.63442 3.0338 5.97411C3.37349 6.31381 3.82015 6.52567 4.29813 6.57382C4.77611 6.62197 5.25605 6.50345 5.65667 6.23833ZM2.83042 1.06666C3.35 0.862497 3.91625 0.749997 4.50792 0.749997C6.75458 0.749997 8.62375 2.36666 9.01583 4.5C8.88816 5.19404 8.60119 5.84899 8.1775 6.41333L6.56917 4.805C6.61694 4.48317 6.58868 4.15463 6.48664 3.84569C6.3846 3.53675 6.21162 3.256 5.98156 3.02594C5.7515 2.79588 5.47075 2.6229 5.16181 2.52086C4.85287 2.41882 4.52433 2.39056 4.2025 2.43833L2.83042 1.06708V1.06666Z" fill="currentColor"></path></svg> <span>Hide TensorFlow content</span></div></div> <div class="framework-content"><p data-svelte-h="svelte-h6kxlc">Tokenize the text and return the <code>input_ids</code> as TensorFlow tensors. 
You’ll also need to specify the position of the <code>&lt;mask&gt;</code> token:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer = AutoTokenizer.from_pretrained(<span class="hljs-string">"stevhliu/my_awesome_eli5_mlm_model"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = tokenizer(text, return_tensors=<span class="hljs-string">"tf"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>mask_token_index = tf.where(inputs[<span class="hljs-string">"input_ids"</span>] == tokenizer.mask_token_id)[<span class="hljs-number">0</span>, <span class="hljs-number">1</span>]</pre></div> <p data-svelte-h="svelte-1abk23t">Pass your inputs to the model and return the <code>logits</code> of the masked token:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> TFAutoModelForMaskedLM <span class="hljs-meta">&gt;&gt;&gt; </span>model = TFAutoModelForMaskedLM.from_pretrained(<span 
class="hljs-string">"stevhliu/my_awesome_eli5_mlm_model"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>logits = model(**inputs).logits <span class="hljs-meta">&gt;&gt;&gt; </span>mask_token_logits = logits[<span class="hljs-number">0</span>, mask_token_index, :]</pre></div> <p data-svelte-h="svelte-jux2mn">Then return the three masked tokens with the highest probability and print them out:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>top_3_tokens = tf.math.top_k(mask_token_logits, <span class="hljs-number">3</span>).indices.numpy() <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">for</span> token <span class="hljs-keyword">in</span> top_3_tokens: <span class="hljs-meta">... </span> <span class="hljs-built_in">print</span>(text.replace(tokenizer.mask_token, tokenizer.decode([token]))) The Milky Way <span class="hljs-keyword">is</span> a spiral galaxy. The Milky Way <span class="hljs-keyword">is</span> a massive galaxy. 
The Milky Way <span class="hljs-keyword">is</span> a small galaxy.</pre></div></div></div> </div> <p></p> <div id="svelte-announcer" aria-live="assertive" aria-atomic="true" style="position: absolute; left: 0px; top: 0px; clip: rect(0px, 0px, 0px, 0px); clip-path: inset(50%); overflow: hidden; white-space: nowrap; width: 1px; height: 1px;"></div></div> <div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/tasks/language_modeling" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>Causal language modeling</a> <a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/tasks/translation" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">Translation<span class="ml-2 translate-y-px">→</span></a></div></div></div> <div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" data-props="{&quot;chapter&quot;:{&quot;title&quot;:&quot;Masked language modeling&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;masked-language-modeling&quot;,&quot;url&quot;:&quot;#masked-language-modeling&quot;,&quot;sections&quot;:[{&quot;title&quot;:&quot;Load ELI5 dataset&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;load-eli5-dataset&quot;,&quot;url&quot;:&quot;#load-eli5-dataset&quot;},{&quot;title&quot;:&quot;Preprocess&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;preprocess&quot;,&quot;url&quot;:&quot;#preprocess&quot;},{&quot;title&quot;:&quot;Train&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;train&quot;,&quot;url&quot;:&quot;#train&quot;},{&quot;title&quot;:&quot;Inference&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;inference&quot;,&quot;url&quot;:&quot;#inference&quot;}]}}" data-target="SubSideMenu"><nav class="hidden h-screen w-[270px] flex-none flex-col space-y-3 overflow-y-auto break-words border-l pt-24 pl-6 pr-10 pb-16 text-sm lg:flex 2xl:w-[305px]"><a href="#masked-language-modeling" class=" text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-masked-language-modeling"><wbr>Masked language modeling</a> <a href="#load-eli5-dataset" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-load-eli5-dataset"><wbr>Load EL<wbr>I5 dataset</a> <a href="#preprocess" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-preprocess"><wbr>Preprocess</a> <a href="#train" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-train"><wbr>Train</a> <a href="#inference" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-inference"><wbr>Inference</a> </nav></div></div></div> <div id="doc-footer"></div></main> </div> <script> import("/front/build/kube-5e23f38/index.js"); window.moonSha = "kube-5e23f38/"; window.hubConfig = JSON.parse(`{"features":{"signupDisabled":false},"sshGitUrl":"git@hf.co","moonHttpUrl":"https://huggingface.co","captchaApiKey":"bd5f2066-93dc-4bdd-a64b-a24646ca3859","stripePublicKey":"pk_live_x2tdjFXBCvXo2FFmMybezpeM00J6gPCAAc","environment":"production","userAgent":"HuggingFace (production)"}`); </script> <!-- Stripe --> <script> if 
(["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://js.stripe.com/v3/"; script.async = true; document.head.appendChild(script); } </script> <!-- Google analytics v4 --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL"; script.async = true; document.head.appendChild(script); window.dataLayer = window.dataLayer || []; function gtag() { if (window.dataLayer !== undefined) { window.dataLayer.push(arguments); } } gtag("js", new Date()); gtag("config", "G-8Q63TH4CSL", { page_path: "/docs/transformers/v4.34.0/en/tasks/masked_language_modeling" }); /// ^ See https://developers.google.com/analytics/devguides/collection/gtagjs/pages gtag("consent", "default", { ad_storage: "denied", analytics_storage: "denied" }); /// ^ See https://developers.google.com/tag-platform/gtagjs/reference#consent /// TODO: ask the user for their consent and update this with gtag('consent', 'update') } </script> <!-- Google Analytics v3 (deprecated) --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { (function (i, s, o, g, r, a, m) { i["GoogleAnalyticsObject"] = r; (i[r] = i[r] || function () { (i[r].q = i[r].q || []).push(arguments); }), (i[r].l = 1 * new Date()); (a = s.createElement(o)), (m = s.getElementsByTagName(o)[0]); a.async = 1; a.src = g; m.parentNode.insertBefore(a, m); })(window, document, "script", "https://www.google-analytics.com/analytics.js", "ganalytics"); ganalytics("create", "UA-83738774-2", "auto"); ganalytics("send", "pageview", "/docs/transformers/v4.34.0/en/tasks/masked_language_modeling"); } </script> <iframe name="__privateStripeMetricsController3630" frameborder="0" allowtransparency="true" scrolling="no" role="presentation" allow="payment *" src="https://js.stripe.com/v3/m-outer-27c67c0d52761104439bb051c7856ab1.html#url=https%3A%2F%2Fhuggingface.co%2Fdocs%2Ftransformers%2Fv4.34.0%2Fen%2Ftasks%2Fmasked_language_modeling&amp;title=Masked%20language%20modeling&amp;referrer=&amp;muid=38397bf3-d1df-433f-a1ab-3a999964eeba83e258&amp;sid=7a2cecc6-6b9a-4e4a-88b4-4bd8a189a43fe6315f&amp;version=6&amp;preview=false" aria-hidden="true" tabindex="-1" style="border: none !important; margin: 0px !important; padding: 0px !important; width: 1px !important; min-width: 100% !important; overflow: hidden !important; display: block !important; visibility: hidden !important; position: fixed !important; height: 1px !important; pointer-events: none !important; user-select: none !important;"></iframe></body></html>
2023-10-05T13:33:53.464Z
Multiple choice
https://huggingface.co/docs/transformers/v4.34.0/en/tasks/multiple_choice
# Multiple choice

A multiple choice task is similar to question answering, except several candidate answers are provided along with a context and the model is trained to select the correct answer.

This guide will show you how to:

1. Finetune [BERT](https://huggingface.co/bert-base-uncased) on the `regular` configuration of the [SWAG](https://huggingface.co/datasets/swag) dataset to select the best answer given multiple options and some context.
2. Use your finetuned model for inference.

The task illustrated in this tutorial is supported by the following model architectures:

[ALBERT](../model_doc/albert), [BERT](../model_doc/bert), [BigBird](../model_doc/big_bird), [CamemBERT](../model_doc/camembert), [CANINE](../model_doc/canine), [ConvBERT](../model_doc/convbert), [Data2VecText](../model_doc/data2vec-text), [DeBERTa-v2](../model_doc/deberta-v2), [DistilBERT](../model_doc/distilbert), [ELECTRA](../model_doc/electra), [ERNIE](../model_doc/ernie), [ErnieM](../model_doc/ernie_m), [FlauBERT](../model_doc/flaubert), [FNet](../model_doc/fnet), [Funnel Transformer](../model_doc/funnel), [I-BERT](../model_doc/ibert), [Longformer](../model_doc/longformer), [LUKE](../model_doc/luke), [MEGA](../model_doc/mega), [Megatron-BERT](../model_doc/megatron-bert), [MobileBERT](../model_doc/mobilebert), [MPNet](../model_doc/mpnet), [MRA](../model_doc/mra), [Nezha](../model_doc/nezha), [Nyströmformer](../model_doc/nystromformer), [QDQBert](../model_doc/qdqbert), [RemBERT](../model_doc/rembert), [RoBERTa](../model_doc/roberta), [RoBERTa-PreLayerNorm](../model_doc/roberta-prelayernorm), [RoCBert](../model_doc/roc_bert), [RoFormer](../model_doc/roformer), [SqueezeBERT](../model_doc/squeezebert), [XLM](../model_doc/xlm), [XLM-RoBERTa](../model_doc/xlm-roberta), [XLM-RoBERTa-XL](../model_doc/xlm-roberta-xl), [XLNet](../model_doc/xlnet), [X-MOD](../model_doc/xmod), [YOSO](../model_doc/yoso)

Before you begin, make sure you have all the necessary libraries installed:

```
pip install transformers datasets evaluate
```

We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:

```
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```

## Load SWAG dataset

Start by loading the `regular` configuration of the SWAG dataset from the 🤗 Datasets library:

```
>>> from datasets import load_dataset

>>> swag = load_dataset("swag", "regular")
```

Then take a look at an example:

```
>>> swag["train"][0]
{'ending0': 'passes by walking down the street playing their instruments.',
 'ending1': 'has heard approaching them.',
 'ending2': "arrives and they're outside dancing and asleep.",
 'ending3': 'turns the lead singer watches the performance.',
 'fold-ind': '3416',
 'gold-source': 'gold',
 'label': 0,
 'sent1': 'Members of the procession walk down the street holding small horn brass instruments.',
 'sent2': 'A drum line',
 'startphrase': 'Members of the procession walk down the street holding small horn brass instruments. A drum line',
 'video-id': 'anetv_jkn6uvmqwh4'}
```

While it looks like there are a lot of fields here, it is actually pretty straightforward:

- `sent1` and `sent2`: these fields show how a sentence starts, and if you put the two together, you get the `startphrase` field.
- `ending`: suggests a possible ending for how a sentence can end, but only one of them is correct.
- `label`: identifies the correct sentence ending.
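To make the field layout concrete, the sketch below (not part of the original guide; it only assumes the `swag` dataset loaded above) assembles the four full candidate sentences for the first training example. This is exactly the pairing of context and ending that the preprocessing function in the next section builds for every example:

```
>>> example = swag["train"][0]
>>> endings = [example[f"ending{i}"] for i in range(4)]

>>> # each candidate = sentence start (sent1) + continuation header (sent2) + one possible ending
>>> for i, ending in enumerate(endings):
...     flag = "  <-- correct (label)" if i == example["label"] else ""
...     print(f"{example['sent1']} {example['sent2']} {ending}{flag}")
```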
## Preprocess

The next step is to load a BERT tokenizer to process the sentence starts and the four possible endings:

```
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
```

The preprocessing function you want to create needs to:

1. Make four copies of the `sent1` field and combine each of them with `sent2` to recreate how a sentence starts.
2. Combine `sent2` with each of the four possible sentence endings.
3. Flatten these two lists so you can tokenize them, and then unflatten them afterward so each example has a corresponding `input_ids`, `attention_mask`, and `labels` field.

```
>>> ending_names = ["ending0", "ending1", "ending2", "ending3"]

>>> def preprocess_function(examples):
...     first_sentences = [[context] * 4 for context in examples["sent1"]]
...     question_headers = examples["sent2"]
...     second_sentences = [
...         [f"{header} {examples[end][i]}" for end in ending_names] for i, header in enumerate(question_headers)
...     ]
...     first_sentences = sum(first_sentences, [])
...     second_sentences = sum(second_sentences, [])
...     tokenized_examples = tokenizer(first_sentences, second_sentences, truncation=True)
...     return {k: [v[i : i + 4] for i in range(0, len(v), 4)] for k, v in tokenized_examples.items()}
```

To apply the preprocessing function over the entire dataset, use the 🤗 Datasets [map](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.map) method. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once:

```
tokenized_swag = swag.map(preprocess_function, batched=True)
```

🤗 Transformers doesn’t have a data collator for multiple choice, so you’ll need to adapt the [DataCollatorWithPadding](/docs/transformers/v4.34.0/en/main_classes/data_collator#transformers.DataCollatorWithPadding) to create a batch of examples. It’s more efficient to _dynamically pad_ the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length.

`DataCollatorForMultipleChoice` flattens all the model inputs, applies padding, and then unflattens the results:

```
>>> from dataclasses import dataclass
>>> from transformers.tokenization_utils_base import PreTrainedTokenizerBase, PaddingStrategy
>>> from typing import Optional, Union
>>> import torch


>>> @dataclass
... class DataCollatorForMultipleChoice:
...     """
...     Data collator that will dynamically pad the inputs for multiple choice received.
...     """
...
...     tokenizer: PreTrainedTokenizerBase
...     padding: Union[bool, str, PaddingStrategy] = True
...     max_length: Optional[int] = None
...     pad_to_multiple_of: Optional[int] = None
...
...     def __call__(self, features):
...         label_name = "label" if "label" in features[0].keys() else "labels"
...         labels = [feature.pop(label_name) for feature in features]
...         batch_size = len(features)
...         num_choices = len(features[0]["input_ids"])
...         flattened_features = [
...             [{k: v[i] for k, v in feature.items()} for i in range(num_choices)] for feature in features
...         ]
...         flattened_features = sum(flattened_features, [])
...
...         batch = self.tokenizer.pad(
...             flattened_features,
...             padding=self.padding,
...             max_length=self.max_length,
...             pad_to_multiple_of=self.pad_to_multiple_of,
...             return_tensors="pt",
...         )
...
...         batch = {k: v.view(batch_size, num_choices, -1) for k, v in batch.items()}
...         batch["labels"] = torch.tensor(labels, dtype=torch.int64)
...         return batch
```
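As a quick sanity check (not part of the original guide), the sketch below assumes the tokenizer, `tokenized_swag`, and the PyTorch collator defined above; it batches two preprocessed examples to show the flatten/pad/unflatten behaviour described earlier. The TensorFlow collator that follows works the same way:

```
>>> collator = DataCollatorForMultipleChoice(tokenizer=tokenizer)

>>> # keep only the tokenized fields plus the label; each field holds 4 candidate sequences
>>> keep = ("input_ids", "token_type_ids", "attention_mask", "label")
>>> features = [{k: v for k, v in tokenized_swag["train"][i].items() if k in keep} for i in range(2)]

>>> batch = collator(features)
>>> batch["input_ids"].shape  # (batch_size, num_choices, padded_seq_len) -> torch.Size([2, 4, ...])
>>> batch["labels"]  # tensor of shape (batch_size,) with the index of the correct ending for each example
```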
The equivalent data collator for TensorFlow:

```
>>> from dataclasses import dataclass
>>> from transformers.tokenization_utils_base import PreTrainedTokenizerBase, PaddingStrategy
>>> from typing import Optional, Union
>>> import tensorflow as tf


>>> @dataclass
... class DataCollatorForMultipleChoice:
...     """
...     Data collator that will dynamically pad the inputs for multiple choice received.
...     """
...
...     tokenizer: PreTrainedTokenizerBase
...     padding: Union[bool, str, PaddingStrategy] = True
...     max_length: Optional[int] = None
...     pad_to_multiple_of: Optional[int] = None
...
...     def __call__(self, features):
...         label_name = "label" if "label" in features[0].keys() else "labels"
...         labels = [feature.pop(label_name) for feature in features]
...         batch_size = len(features)
...         num_choices = len(features[0]["input_ids"])
...         flattened_features = [
...             [{k: v[i] for k, v in feature.items()} for i in range(num_choices)] for feature in features
...         ]
...         flattened_features = sum(flattened_features, [])
...
...         batch = self.tokenizer.pad(
...             flattened_features,
...             padding=self.padding,
...             max_length=self.max_length,
...             pad_to_multiple_of=self.pad_to_multiple_of,
...             return_tensors="tf",
...         )
...
...         batch = {k: tf.reshape(v, (batch_size, num_choices, -1)) for k, v in batch.items()}
...         batch["labels"] = tf.convert_to_tensor(labels, dtype=tf.int64)
...         return batch
```

## Evaluate

Including a metric during training is often helpful for evaluating your model’s performance. You can quickly load an evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load the [accuracy](https://huggingface.co/spaces/evaluate-metric/accuracy) metric (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric):

```
>>> import evaluate

>>> accuracy = evaluate.load("accuracy")
```

Then create a function that passes your predictions and labels to [compute](https://huggingface.co/docs/evaluate/v0.4.0/en/package_reference/main_classes#evaluate.EvaluationModule.compute) to calculate the accuracy:

```
>>> import numpy as np


>>> def compute_metrics(eval_pred):
...     predictions, labels = eval_pred
...     predictions = np.argmax(predictions, axis=1)
...     return accuracy.compute(predictions=predictions, references=labels)
```

Your `compute_metrics` function is ready to go now, and you’ll return to it when you set up your training.
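For intuition about what `compute_metrics` receives for this task (this check is not part of the original guide): the predictions carry one logit per candidate ending, so they have shape `(num_examples, 4)`, and the argmax over axis 1 picks a choice index. The logits below are purely illustrative:

```
>>> import numpy as np

>>> fake_logits = np.array([[0.1, 2.0, -1.0, 0.3], [1.5, 0.2, 0.1, 0.0]])  # 2 examples x 4 endings, made-up values
>>> fake_labels = np.array([1, 0])  # index of the correct ending for each example
>>> compute_metrics((fake_logits, fake_labels))
{'accuracy': 1.0}
```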
## Train

If you aren’t familiar with finetuning a model with the [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer), take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!

You’re ready to start training your model now! Load BERT with [AutoModelForMultipleChoice](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoModelForMultipleChoice):

```
>>> from transformers import AutoModelForMultipleChoice, TrainingArguments, Trainer

>>> model = AutoModelForMultipleChoice.from_pretrained("bert-base-uncased")
```

At this point, only three steps remain:

1. Define your training hyperparameters in [TrainingArguments](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.TrainingArguments). The only required parameter is `output_dir` which specifies where to save your model. You’ll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) will evaluate the accuracy and save the training checkpoint.
2. Pass the training arguments to [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) along with the model, dataset, tokenizer, data collator, and `compute_metrics` function.
3. Call [train()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.train) to finetune your model.

```
>>> training_args = TrainingArguments(
...     output_dir="my_awesome_swag_model",
...     evaluation_strategy="epoch",
...     save_strategy="epoch",
...     load_best_model_at_end=True,
...     learning_rate=5e-5,
...     per_device_train_batch_size=16,
...     per_device_eval_batch_size=16,
...     num_train_epochs=3,
...     weight_decay=0.01,
...     push_to_hub=True,
... )

>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=tokenized_swag["train"],
...     eval_dataset=tokenized_swag["validation"],
...     tokenizer=tokenizer,
...     data_collator=DataCollatorForMultipleChoice(tokenizer=tokenizer),
...     compute_metrics=compute_metrics,
... )

>>> trainer.train()
```

Once training is completed, share your model on the Hub with the [push\_to\_hub()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.push_to_hub) method so everyone can use your model:

```
>>> trainer.push_to_hub()
```

If you aren’t familiar with finetuning a model with Keras, take a look at the basic tutorial [here](../training#train-a-tensorflow-model-with-keras)!

To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:

```
>>> from transformers import create_optimizer

>>> batch_size = 16
>>> num_train_epochs = 2
>>> total_train_steps = (len(tokenized_swag["train"]) // batch_size) * num_train_epochs
>>> optimizer, schedule = create_optimizer(init_lr=5e-5, num_warmup_steps=0, num_train_steps=total_train_steps)
```

Then you can load BERT with [TFAutoModelForMultipleChoice](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.TFAutoModelForMultipleChoice):

```
>>> from transformers import TFAutoModelForMultipleChoice

>>> model = TFAutoModelForMultipleChoice.from_pretrained("bert-base-uncased")
```

Convert your datasets to the `tf.data.Dataset` format with [prepare\_tf\_dataset()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel.prepare_tf_dataset):

```
>>> data_collator = DataCollatorForMultipleChoice(tokenizer=tokenizer)
>>> tf_train_set = model.prepare_tf_dataset(
...     tokenized_swag["train"],
...     shuffle=True,
...     batch_size=batch_size,
...     collate_fn=data_collator,
... )

>>> tf_validation_set = model.prepare_tf_dataset(
...     tokenized_swag["validation"],
...     shuffle=False,
...     batch_size=batch_size,
...     collate_fn=data_collator,
... )
```

Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method). Note that Transformers models all have a default task-relevant loss function, so you don’t need to specify one unless you want to:

```
>>> model.compile(optimizer=optimizer)
```

The last two things to set up before you start training are to compute the accuracy from the predictions, and to provide a way to push your model to the Hub. Both are done by using [Keras callbacks](../main_classes/keras_callbacks).
Pass your `compute_metrics` function to [KerasMetricCallback](/docs/transformers/v4.34.0/en/main_classes/keras_callbacks#transformers.KerasMetricCallback):

```
>>> from transformers.keras_callbacks import KerasMetricCallback

>>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set)
```

Specify where to push your model and tokenizer in the [PushToHubCallback](/docs/transformers/v4.34.0/en/main_classes/keras_callbacks#transformers.PushToHubCallback):

```
>>> from transformers.keras_callbacks import PushToHubCallback

>>> push_to_hub_callback = PushToHubCallback(
...     output_dir="my_awesome_model",
...     tokenizer=tokenizer,
... )
```

Then bundle your callbacks together:

```
>>> callbacks = [metric_callback, push_to_hub_callback]
```

Finally, you’re ready to start training your model! Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training and validation datasets, the number of epochs, and your callbacks to finetune the model:

```
>>> model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=2, callbacks=callbacks)
```

Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!

For a more in-depth example of how to finetune a model for multiple choice, take a look at the corresponding [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb) or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb).

## Inference

Great, now that you’ve finetuned a model, you can use it for inference!

Come up with some text and two candidate answers:

```
>>> prompt = "France has a bread law, Le Décret Pain, with strict rules on what is allowed in a traditional baguette."
>>> candidate1 = "The law does not apply to croissants and brioche."
>>> candidate2 = "The law applies to baguettes."
```

Tokenize each prompt and candidate answer pair and return PyTorch tensors. You should also create some `labels`:
```
>>> import torch
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_swag_model")
>>> inputs = tokenizer([[prompt, candidate1], [prompt, candidate2]], return_tensors="pt", padding=True)
>>> labels = torch.tensor(0).unsqueeze(0)
```

Pass your inputs and labels to the model and return the `logits`:

```
>>> from transformers import AutoModelForMultipleChoice

>>> model = AutoModelForMultipleChoice.from_pretrained("my_awesome_swag_model")
>>> outputs = model(**{k: v.unsqueeze(0) for k, v in inputs.items()}, labels=labels)
>>> logits = outputs.logits
```

Get the class with the highest probability:

```
>>> predicted_class = logits.argmax().item()
>>> predicted_class
0
```

Tokenize each prompt and candidate answer pair and return TensorFlow tensors:

```
>>> import tensorflow as tf
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_swag_model")
>>> inputs = tokenizer([[prompt, candidate1], [prompt, candidate2]], return_tensors="tf", padding=True)
```

Pass your inputs to the model and return the `logits`:

```
>>> from transformers import TFAutoModelForMultipleChoice

>>> model = TFAutoModelForMultipleChoice.from_pretrained("my_awesome_swag_model")
>>> inputs = {k: tf.expand_dims(v, 0) for k, v in inputs.items()}
>>> outputs = model(inputs)
>>> logits = outputs.logits
```

Get the class with the highest probability:

```
>>> predicted_class = int(tf.math.argmax(logits, axis=-1)[0])
>>> predicted_class
0
```
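The predicted class is just an index into the list of candidates you passed to the tokenizer, so you can map it back to the answer text. A minimal sketch (not part of the original guide, reusing the variables above; with the predicted index of 0 shown above, this prints the first candidate):

```
>>> candidates = [candidate1, candidate2]
>>> print(candidates[predicted_class])
The law does not apply to croissants and brioche.
```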
<!DOCTYPE html><html class=""><head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no"> <meta name="description" content="We’re on a journey to advance and democratize artificial intelligence through open source and open science."> <meta property="fb:app_id" content="1321688464574422"> <meta name="twitter:card" content="summary_large_image"> <meta name="twitter:site" content="@huggingface"> <meta property="og:title" content="Multiple choice"> <meta property="og:type" content="website"> <meta property="og:url" content="https://huggingface.co/docs/transformers/v4.34.0/en/tasks/multiple_choice"> <meta property="og:image" content="https://huggingface.co/front/thumbnails/docs/transformers.png"> <link rel="stylesheet" href="/front/build/kube-5e23f38/style.css"> <link rel="preconnect" href="https://fonts.gstatic.com"> <link href="https://fonts.googleapis.com/css2?family=Source+Sans+Pro:ital,wght@0,200;0,300;0,400;0,600;0,700;0,900;1,200;1,300;1,400;1,600;1,700;1,900&amp;display=swap" rel="stylesheet"> <link href="https://fonts.googleapis.com/css2?family=IBM+Plex+Mono:wght@400;600;700&amp;display=swap" rel="stylesheet"> <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/KaTeX/0.12.0/katex.min.css" as="style" onload="this.onload=null;this.rel='stylesheet'"> <noscript> <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/KaTeX/0.12.0/katex.min.css" /> </noscript> <title>Multiple choice</title> <script async="" src="https://www.google-analytics.com/analytics.js"></script><script defer="" data-domain="huggingface.co" src="/js/script.js"></script> <script src="https://js.stripe.com/v3/" async=""></script><script src="https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL" async=""></script><meta http-equiv="origin-trial" content="AymqwRC7u88Y4JPvfIF2F37QKylC04248hLCdJAsh8xgOfe/dVJPV3XS3wLFca1ZMVOtnBfVjaCMTVudWM//5g4AAAB7eyJvcmlnaW4iOiJodHRwczovL3d3dy5nb29nbGV0YWdtYW5hZ2VyLmNvbTo0NDMiLCJmZWF0dXJlIjoiUHJpdmFjeVNhbmRib3hBZHNBUElzIiwiZXhwaXJ5IjoxNjk1MTY3OTk5LCJpc1RoaXJkUGFydHkiOnRydWV9"><link rel="stylesheet" href="/docs/transformers/v4.34.0/en/_app/immutable/assets/0.e3b0c442.css"><link rel="modulepreload" as="script" crossorigin="" href="/docs/transformers/v4.34.0/en/_app/immutable/nodes/1.38c5c2f6.js"><meta name="hf:doc:metadata" content="{&quot;local&quot;:&quot;multiple-choice&quot;,&quot;sections&quot;:[{&quot;local&quot;:&quot;load-swag-dataset&quot;,&quot;title&quot;:&quot;Load SWAG dataset&quot;},{&quot;local&quot;:&quot;preprocess&quot;,&quot;title&quot;:&quot;Preprocess&quot;},{&quot;local&quot;:&quot;evaluate&quot;,&quot;title&quot;:&quot;Evaluate&quot;},{&quot;local&quot;:&quot;train&quot;,&quot;title&quot;:&quot;Train&quot;},{&quot;local&quot;:&quot;inference&quot;,&quot;title&quot;:&quot;Inference&quot;}],&quot;title&quot;:&quot;Multiple choice&quot;}"></head> <body class="flex flex-col min-h-screen bg-white dark:bg-gray-950 text-black DocBuilderPage"> <div class="flex min-h-screen flex-col"> <div class="SVELTE_HYDRATER contents" data-props="{&quot;classNames&quot;:&quot;&quot;,&quot;isWide&quot;:true,&quot;isZh&quot;:false}" data-target="MainHeader"><header class="border-b border-gray-100 "><div class="w-full px-4 flex h-16 items-center"><div class="flex flex-1 items-center"><a class="mr-5 flex flex-none items-center lg:mr-6" href="/"><img alt="Hugging Face's logo" class="w-7 md:mr-2" src="/front/assets/huggingface_logo-noborder.svg"> <span class="hidden whitespace-nowrap text-lg font-bold 
md:block">Hugging Face</span></a> <div class="relative flex-1 lg:max-w-sm mr-2 sm:mr-4 lg:mr-6"><input autocomplete="off" class="w-full dark:bg-gray-950 pl-8 form-input-alt h-9 pr-3 focus:shadow-xl" name="" placeholder="Search models, datasets, users..." spellcheck="false" type="text"> <svg class="absolute left-2.5 text-gray-400 top-1/2 transform -translate-y-1/2" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg> </div> <div class="flex flex-none items-center justify-center p-0.5 place-self-stretch lg:hidden"><button class="relative z-40 flex h-6 w-8 items-center justify-center" type="button"><svg width="1em" height="1em" viewBox="0 0 10 10" class="text-xl" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" preserveAspectRatio="xMidYMid meet" fill="currentColor"><path fill-rule="evenodd" clip-rule="evenodd" d="M1.65039 2.9999C1.65039 2.8066 1.80709 2.6499 2.00039 2.6499H8.00039C8.19369 2.6499 8.35039 2.8066 8.35039 2.9999C8.35039 3.1932 8.19369 3.3499 8.00039 3.3499H2.00039C1.80709 3.3499 1.65039 3.1932 1.65039 2.9999ZM1.65039 4.9999C1.65039 4.8066 1.80709 4.6499 2.00039 4.6499H8.00039C8.19369 4.6499 8.35039 4.8066 8.35039 4.9999C8.35039 5.1932 8.19369 5.3499 8.00039 5.3499H2.00039C1.80709 5.3499 1.65039 5.1932 1.65039 4.9999ZM2.00039 6.6499C1.80709 6.6499 1.65039 6.8066 1.65039 6.9999C1.65039 7.1932 1.80709 7.3499 2.00039 7.3499H8.00039C8.19369 7.3499 8.35039 7.1932 8.35039 6.9999C8.35039 6.8066 8.19369 6.6499 8.00039 6.6499H2.00039Z"></path></svg> </button> </div></div> <nav aria-label="Main" class="ml-auto hidden lg:block"><ul class="flex items-center space-x-2"><li><a class="group flex items-center px-2 py-0.5 dark:hover:text-gray-400 hover:text-indigo-700" href="/models"><svg class="mr-1.5 text-gray-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg> Models</a></li><li><a class="group flex items-center px-2 py-0.5 dark:hover:text-gray-400 hover:text-red-700" href="/datasets"><svg class="mr-1.5 text-gray-400 group-hover:text-red-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 25 25"><ellipse cx="12.5" cy="5" fill="currentColor" fill-opacity="0.25" rx="7.5" ry="2"></ellipse><path d="M12.5 15C16.6421 15 20 14.1046 20 13V20C20 21.1046 16.6421 22 12.5 22C8.35786 22 5 21.1046 5 20V13C5 14.1046 8.35786 15 12.5 15Z" fill="currentColor" opacity="0.5"></path><path 
d="M12.5 7C16.6421 7 20 6.10457 20 5V11.5C20 12.6046 16.6421 13.5 12.5 13.5C8.35786 13.5 5 12.6046 5 11.5V5C5 6.10457 8.35786 7 12.5 7Z" fill="currentColor" opacity="0.5"></path><path d="M5.23628 12C5.08204 12.1598 5 12.8273 5 13C5 14.1046 8.35786 15 12.5 15C16.6421 15 20 14.1046 20 13C20 12.8273 19.918 12.1598 19.7637 12C18.9311 12.8626 15.9947 13.5 12.5 13.5C9.0053 13.5 6.06886 12.8626 5.23628 12Z" fill="currentColor"></path></svg> Datasets</a></li><li><a class="group flex items-center px-2 py-0.5 dark:hover:text-gray-400 hover:text-blue-700" href="/spaces"><svg class="mr-1.5 text-gray-400 group-hover:text-blue-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" viewBox="0 0 25 25"><path opacity=".5" d="M6.016 14.674v4.31h4.31v-4.31h-4.31ZM14.674 14.674v4.31h4.31v-4.31h-4.31ZM6.016 6.016v4.31h4.31v-4.31h-4.31Z" fill="currentColor"></path><path opacity=".75" fill-rule="evenodd" clip-rule="evenodd" d="M3 4.914C3 3.857 3.857 3 4.914 3h6.514c.884 0 1.628.6 1.848 1.414a5.171 5.171 0 0 1 7.31 7.31c.815.22 1.414.964 1.414 1.848v6.514A1.914 1.914 0 0 1 20.086 22H4.914A1.914 1.914 0 0 1 3 20.086V4.914Zm3.016 1.102v4.31h4.31v-4.31h-4.31Zm0 12.968v-4.31h4.31v4.31h-4.31Zm8.658 0v-4.31h4.31v4.31h-4.31Zm0-10.813a2.155 2.155 0 1 1 4.31 0 2.155 2.155 0 0 1-4.31 0Z" fill="currentColor"></path><path opacity=".25" d="M16.829 6.016a2.155 2.155 0 1 0 0 4.31 2.155 2.155 0 0 0 0-4.31Z" fill="currentColor"></path></svg> Spaces</a></li><li><a class="group flex items-center px-2 py-0.5 dark:hover:text-gray-400 hover:text-yellow-700" href="/docs"><svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" class="mr-1.5 text-gray-400 group-hover:text-yellow-500" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path opacity="0.5" d="M20.9022 5.10334L10.8012 10.8791L7.76318 9.11193C8.07741 8.56791 8.5256 8.11332 9.06512 7.7914L15.9336 3.73907C17.0868 3.08811 18.5002 3.26422 19.6534 3.91519L19.3859 3.73911C19.9253 4.06087 20.5879 4.56025 20.9022 5.10334Z" fill="currentColor"></path><path d="M10.7999 10.8792V28.5483C10.2136 28.5475 9.63494 28.4139 9.10745 28.1578C8.5429 27.8312 8.074 27.3621 7.74761 26.7975C7.42122 26.2327 7.24878 25.5923 7.24756 24.9402V10.9908C7.25062 10.3319 7.42358 9.68487 7.74973 9.1123L10.7999 10.8792Z" fill="currentColor" fill-opacity="0.75"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M21.3368 10.8499V6.918C21.3331 6.25959 21.16 5.61234 20.8346 5.03949L10.7971 10.8727L10.8046 10.874L21.3368 10.8499Z" fill="currentColor"></path><path opacity="0.5" d="M21.7937 10.8488L10.7825 10.8741V28.5486L21.7937 28.5234C23.3344 28.5234 24.5835 27.2743 24.5835 25.7335V13.6387C24.5835 12.0979 23.4365 11.1233 21.7937 10.8488Z" fill="currentColor"></path></svg> Docs</a></li> <li><div class="relative "><button class="px-2 py-0.5 group hover:text-green-700 dark:hover:text-gray-400 flex items-center " type="button"><svg class="mr-1.5 text-gray-400 group-hover:text-green-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-tertiary" d="M19 6H5a3 3 0 0 0-3 3v2.72L8.837 14h6.326L22 11.72V9a3 3 0 0 0-3-3z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M10 6V5h4v1h2V5a2.002 2.002 0 0 0-2-2h-4a2.002 2.002 0 0 0-2 2v1h2zm-1.163 
8L2 11.72V18a3.003 3.003 0 0 0 3 3h14a3.003 3.003 0 0 0 3-3v-6.28L15.163 14H8.837z" fill="currentColor"></path></svg> Solutions </button> </div></li> <li><a class="group flex items-center px-2 py-0.5 hover:text-gray-500 dark:hover:text-gray-400" href="/pricing">Pricing</a></li> <li><div class="relative group"><button class="px-2 py-0.5 hover:text-gray-500 dark:hover:text-gray-600 flex items-center " type="button"><svg class="mr-1.5 text-gray-500 w-5 group-hover:text-gray-400 dark:text-gray-300 dark:group-hover:text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" viewBox="0 0 32 18" preserveAspectRatio="xMidYMid meet"><path fill-rule="evenodd" clip-rule="evenodd" d="M14.4504 3.30221C14.4504 2.836 14.8284 2.45807 15.2946 2.45807H28.4933C28.9595 2.45807 29.3374 2.836 29.3374 3.30221C29.3374 3.76842 28.9595 4.14635 28.4933 4.14635H15.2946C14.8284 4.14635 14.4504 3.76842 14.4504 3.30221Z" fill="currentColor"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M14.4504 9.00002C14.4504 8.53382 14.8284 8.15588 15.2946 8.15588H28.4933C28.9595 8.15588 29.3374 8.53382 29.3374 9.00002C29.3374 9.46623 28.9595 9.84417 28.4933 9.84417H15.2946C14.8284 9.84417 14.4504 9.46623 14.4504 9.00002Z" fill="currentColor"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M14.4504 14.6978C14.4504 14.2316 14.8284 13.8537 15.2946 13.8537H28.4933C28.9595 13.8537 29.3374 14.2316 29.3374 14.6978C29.3374 15.164 28.9595 15.542 28.4933 15.542H15.2946C14.8284 15.542 14.4504 15.164 14.4504 14.6978Z" fill="currentColor"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M1.94549 6.87377C2.27514 6.54411 2.80962 6.54411 3.13928 6.87377L6.23458 9.96907L9.32988 6.87377C9.65954 6.54411 10.194 6.54411 10.5237 6.87377C10.8533 7.20343 10.8533 7.73791 10.5237 8.06756L6.23458 12.3567L1.94549 8.06756C1.61583 7.73791 1.61583 7.20343 1.94549 6.87377Z" fill="currentColor"></path></svg> </button> </div></li> <li><hr class="h-5 w-0.5 border-none bg-gray-100 dark:bg-gray-800"></li> <li><a class="block cursor-pointer px-2 py-0.5 hover:text-gray-500 dark:hover:text-gray-400" href="/login">Log In</a></li> <li><a class="rounded-full border border-transparent bg-gray-900 px-3 py-1 leading-none text-white hover:border-black hover:bg-white hover:text-black" href="/join">Sign Up</a></li></ul></nav></div></header></div> <div class="SVELTE_HYDRATER contents" data-props="{}" data-target="GoogleAnalyticsTracker"></div> <div class="SVELTE_HYDRATER contents" data-props="{}" data-target="SSOBanner"></div> <main class="flex flex-1 flex-col"><div class="relative lg:flex"><div class="sticky top-0 z-20 self-start"><div class="SVELTE_HYDRATER contents" data-props="{&quot;chapters&quot;:[{&quot;title&quot;:&quot;Get started&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;🤗 Transformers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;index&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/index&quot;},{&quot;title&quot;:&quot;Quick tour&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;quicktour&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/quicktour&quot;},{&quot;title&quot;:&quot;Installation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;installation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/installation&quot;}]},{&quot;title&quot;:&quot;Tutorials&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Run inference with 
pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pipeline_tutorial&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pipeline_tutorial&quot;},{&quot;title&quot;:&quot;Write portable code with AutoClass&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;autoclass_tutorial&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/autoclass_tutorial&quot;},{&quot;title&quot;:&quot;Preprocess data&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;preprocessing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/preprocessing&quot;},{&quot;title&quot;:&quot;Fine-tune a pretrained model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;training&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/training&quot;},{&quot;title&quot;:&quot;Train with a script&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;run_scripts&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/run_scripts&quot;},{&quot;title&quot;:&quot;Set up distributed training with 🤗 Accelerate&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;accelerate&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/accelerate&quot;},{&quot;title&quot;:&quot;Load and train adapters with 🤗 PEFT&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;peft&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/peft&quot;},{&quot;title&quot;:&quot;Share your model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_sharing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_sharing&quot;},{&quot;title&quot;:&quot;Agents&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;transformers_agents&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/transformers_agents&quot;},{&quot;title&quot;:&quot;Generation with LLMs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;llm_tutorial&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/llm_tutorial&quot;}]},{&quot;title&quot;:&quot;Task Guides&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Natural Language Processing&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Text classification&quot;,&quot;id&quot;:&quot;tasks/sequence_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/sequence_classification&quot;},{&quot;title&quot;:&quot;Token classification&quot;,&quot;id&quot;:&quot;tasks/token_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/token_classification&quot;},{&quot;title&quot;:&quot;Question answering&quot;,&quot;id&quot;:&quot;tasks/question_answering&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/question_answering&quot;},{&quot;title&quot;:&quot;Causal language modeling&quot;,&quot;id&quot;:&quot;tasks/language_modeling&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/language_modeling&quot;},{&quot;title&quot;:&quot;Masked language modeling&quot;,&quot;id&quot;:&quot;tasks/masked_language_modeling&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/masked_language_modeling&quot;},{&quot;title&quot;:&quot;Translation&quot;,&quot;id&quot;:&quot;tasks/translation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/translation&quot;},{&quot;title&quot;:&quot;Summarization&quot;,&quot;id&quot;:&quot;tasks/summarization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/summarization&quot;},{&quot;title&quot;:&quot;Multiple 
choice&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tasks/multiple_choice&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/multiple_choice&quot;}]},{&quot;title&quot;:&quot;Audio&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio classification&quot;,&quot;id&quot;:&quot;tasks/audio_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/audio_classification&quot;},{&quot;title&quot;:&quot;Automatic speech recognition&quot;,&quot;id&quot;:&quot;tasks/asr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/asr&quot;}]},{&quot;title&quot;:&quot;Computer Vision&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Image classification&quot;,&quot;id&quot;:&quot;tasks/image_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/image_classification&quot;},{&quot;title&quot;:&quot;Semantic segmentation&quot;,&quot;id&quot;:&quot;tasks/semantic_segmentation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/semantic_segmentation&quot;},{&quot;title&quot;:&quot;Video classification&quot;,&quot;id&quot;:&quot;tasks/video_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/video_classification&quot;},{&quot;title&quot;:&quot;Object detection&quot;,&quot;id&quot;:&quot;tasks/object_detection&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/object_detection&quot;},{&quot;title&quot;:&quot;Zero-shot object detection&quot;,&quot;id&quot;:&quot;tasks/zero_shot_object_detection&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/zero_shot_object_detection&quot;},{&quot;title&quot;:&quot;Zero-shot image classification&quot;,&quot;id&quot;:&quot;tasks/zero_shot_image_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/zero_shot_image_classification&quot;},{&quot;title&quot;:&quot;Depth estimation&quot;,&quot;id&quot;:&quot;tasks/monocular_depth_estimation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/monocular_depth_estimation&quot;}]},{&quot;title&quot;:&quot;Multimodal&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Image captioning&quot;,&quot;id&quot;:&quot;tasks/image_captioning&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/image_captioning&quot;},{&quot;title&quot;:&quot;Document Question Answering&quot;,&quot;id&quot;:&quot;tasks/document_question_answering&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/document_question_answering&quot;},{&quot;title&quot;:&quot;Visual Question Answering&quot;,&quot;id&quot;:&quot;tasks/visual_question_answering&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/visual_question_answering&quot;},{&quot;title&quot;:&quot;Text to speech&quot;,&quot;id&quot;:&quot;tasks/text-to-speech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/text-to-speech&quot;}]},{&quot;title&quot;:&quot;Generation&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Customize the generation strategy&quot;,&quot;id&quot;:&quot;generation_strategies&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/generation_strategies&quot;}]},{&quot;title&quot;:&quot;Prompting&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Image tasks with 
IDEFICS&quot;,&quot;id&quot;:&quot;tasks/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/idefics&quot;}]}]},{&quot;title&quot;:&quot;Developer guides&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Use fast tokenizers from 🤗 Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;fast_tokenizers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/fast_tokenizers&quot;},{&quot;title&quot;:&quot;Run inference with multilingual models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;multilingual&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/multilingual&quot;},{&quot;title&quot;:&quot;Use model-specific APIs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;create_a_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/create_a_model&quot;},{&quot;title&quot;:&quot;Share a custom model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;custom_models&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/custom_models&quot;},{&quot;title&quot;:&quot;Templates for chat models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;chat_templating&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/chat_templating&quot;},{&quot;title&quot;:&quot;Run training on Amazon SageMaker&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;sagemaker&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/sagemaker&quot;},{&quot;title&quot;:&quot;Export to ONNX&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;serialization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/serialization&quot;},{&quot;title&quot;:&quot;Export to TFLite&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tflite&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tflite&quot;},{&quot;title&quot;:&quot;Export to TorchScript&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;torchscript&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/torchscript&quot;},{&quot;title&quot;:&quot;Benchmarks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;benchmarks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/benchmarks&quot;},{&quot;title&quot;:&quot;Notebooks with examples&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;notebooks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/notebooks&quot;},{&quot;title&quot;:&quot;Community resources&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;community&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/community&quot;},{&quot;title&quot;:&quot;Custom Tools and Prompts&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;custom_tools&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/custom_tools&quot;},{&quot;title&quot;:&quot;Troubleshoot&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;troubleshooting&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/troubleshooting&quot;}]},{&quot;title&quot;:&quot;Performance and scalability&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Overview&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;performance&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/performance&quot;},{&quot;title&quot;:&quot;Efficient training techniques&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Methods and tools for efficient training on a single 
GPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_gpu_one&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_gpu_one&quot;},{&quot;title&quot;:&quot;Multiple GPUs and parallelism&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_gpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_gpu_many&quot;},{&quot;title&quot;:&quot;Efficient training on CPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_cpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_cpu&quot;},{&quot;title&quot;:&quot;Distributed CPU training&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_cpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_cpu_many&quot;},{&quot;title&quot;:&quot;Training on TPUs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_tpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_tpu&quot;},{&quot;title&quot;:&quot;Training on TPU with TensorFlow&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_tpu_tf&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_tpu_tf&quot;},{&quot;title&quot;:&quot;Training on Specialized Hardware&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_special&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_special&quot;},{&quot;title&quot;:&quot;Custom hardware for training&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_hardware&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_hardware&quot;},{&quot;title&quot;:&quot;Hyperparameter Search using Trainer API&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;hpo_train&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/hpo_train&quot;}]},{&quot;title&quot;:&quot;Optimizing inference&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Inference on CPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_cpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_cpu&quot;},{&quot;title&quot;:&quot;Inference on one GPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_gpu_one&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_gpu_one&quot;},{&quot;title&quot;:&quot;Inference on many GPUs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_gpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_gpu_many&quot;},{&quot;title&quot;:&quot;Inference on Specialized Hardware&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_special&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_special&quot;}]},{&quot;title&quot;:&quot;Instantiating a big model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;big_models&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/big_models&quot;},{&quot;title&quot;:&quot;Troubleshooting&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;debugging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/debugging&quot;},{&quot;title&quot;:&quot;XLA Integration for TensorFlow Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tf_xla&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tf_xla&quot;},{&quot;title&quot;:&quot;Optimize inference using 
`torch.compile()`&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_torch_compile&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_torch_compile&quot;}]},{&quot;title&quot;:&quot;Contribute&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;How to contribute to transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;contributing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/contributing&quot;},{&quot;title&quot;:&quot;How to add a model to 🤗 Transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_new_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_new_model&quot;},{&quot;title&quot;:&quot;How to convert a 🤗 Transformers model to TensorFlow?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_tensorflow_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_tensorflow_model&quot;},{&quot;title&quot;:&quot;How to add a pipeline to 🤗 Transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_new_pipeline&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_new_pipeline&quot;},{&quot;title&quot;:&quot;Testing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;testing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/testing&quot;},{&quot;title&quot;:&quot;Checks on a Pull Request&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pr_checks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pr_checks&quot;}]},{&quot;title&quot;:&quot;Conceptual guides&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Philosophy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;philosophy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/philosophy&quot;},{&quot;title&quot;:&quot;Glossary&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;glossary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/glossary&quot;},{&quot;title&quot;:&quot;What 🤗 Transformers can do&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;task_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/task_summary&quot;},{&quot;title&quot;:&quot;How 🤗 Transformers solve tasks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tasks_explained&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks_explained&quot;},{&quot;title&quot;:&quot;The Transformer model family&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_summary&quot;},{&quot;title&quot;:&quot;Summary of the tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tokenizer_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tokenizer_summary&quot;},{&quot;title&quot;:&quot;Attention mechanisms&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;attention&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/attention&quot;},{&quot;title&quot;:&quot;Padding and truncation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pad_truncation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pad_truncation&quot;},{&quot;title&quot;:&quot;BERTology&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;bertology&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/bertology&quot;},{&quot;title&quot;:&quot;Perplexity of fixed-length 
models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perplexity&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perplexity&quot;},{&quot;title&quot;:&quot;Pipelines for webserver inference&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pipeline_webserver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pipeline_webserver&quot;},{&quot;title&quot;:&quot;Model training anatomy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_memory_anatomy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_memory_anatomy&quot;}]},{&quot;title&quot;:&quot;API&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Main Classes&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Agents and Tools&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/agent&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/agent&quot;},{&quot;title&quot;:&quot;Auto Classes&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/auto&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/auto&quot;},{&quot;title&quot;:&quot;Callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/callback&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/callback&quot;},{&quot;title&quot;:&quot;Configuration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/configuration&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/configuration&quot;},{&quot;title&quot;:&quot;Data Collator&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/data_collator&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/data_collator&quot;},{&quot;title&quot;:&quot;Keras callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/keras_callbacks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/keras_callbacks&quot;},{&quot;title&quot;:&quot;Logging&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/logging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/logging&quot;},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/model&quot;},{&quot;title&quot;:&quot;Text Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/text_generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/text_generation&quot;},{&quot;title&quot;:&quot;ONNX&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/onnx&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/onnx&quot;},{&quot;title&quot;:&quot;Optimization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/optimizer_schedules&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/optimizer_schedules&quot;},{&quot;title&quot;:&quot;Model 
outputs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/output&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/output&quot;},{&quot;title&quot;:&quot;Pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/pipelines&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/pipelines&quot;},{&quot;title&quot;:&quot;Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/processors&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/processors&quot;},{&quot;title&quot;:&quot;Quantization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/quantization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/quantization&quot;},{&quot;title&quot;:&quot;Tokenizer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/tokenizer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/tokenizer&quot;},{&quot;title&quot;:&quot;Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/trainer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/trainer&quot;},{&quot;title&quot;:&quot;DeepSpeed Integration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/deepspeed&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/deepspeed&quot;},{&quot;title&quot;:&quot;Feature Extractor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/feature_extractor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/feature_extractor&quot;},{&quot;title&quot;:&quot;Image Processor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/image_processor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/image_processor&quot;}]},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Text 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALBERT&quot;,&quot;id&quot;:&quot;model_doc/albert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/albert&quot;},{&quot;title&quot;:&quot;BART&quot;,&quot;id&quot;:&quot;model_doc/bart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bart&quot;},{&quot;title&quot;:&quot;BARThez&quot;,&quot;id&quot;:&quot;model_doc/barthez&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/barthez&quot;},{&quot;title&quot;:&quot;BARTpho&quot;,&quot;id&quot;:&quot;model_doc/bartpho&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bartpho&quot;},{&quot;title&quot;:&quot;BERT&quot;,&quot;id&quot;:&quot;model_doc/bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert&quot;},{&quot;title&quot;:&quot;BertGeneration&quot;,&quot;id&quot;:&quot;model_doc/bert-generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-generation&quot;},{&quot;title&quot;:&quot;BertJapanese&quot;,&quot;id&quot;:&quot;model_doc/bert-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-japanese&quot;},{&quot;title&quot;:&quot;Bertweet&quot;,&quot;id&quot;:&quot;model_doc/bertweet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bertweet&quot;},{&quot;title&quot;:&quot;BigBird&quot;,&quot;id&quot;:&quot;model_doc/big_bird&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/big_bird&quot;},{&quot;title&quot;:&quot;BigBirdPegasus&quot;,&quot;id&quot;:&quot;model_doc/bigbird_pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bigbird_pegasus&quot;},{&quot;title&quot;:&quot;BioGpt&quot;,&quot;id&quot;:&quot;model_doc/biogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/biogpt&quot;},{&quot;title&quot;:&quot;Blenderbot&quot;,&quot;id&quot;:&quot;model_doc/blenderbot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot&quot;},{&quot;title&quot;:&quot;Blenderbot 
Small&quot;,&quot;id&quot;:&quot;model_doc/blenderbot-small&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot-small&quot;},{&quot;title&quot;:&quot;BLOOM&quot;,&quot;id&quot;:&quot;model_doc/bloom&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bloom&quot;},{&quot;title&quot;:&quot;BORT&quot;,&quot;id&quot;:&quot;model_doc/bort&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bort&quot;},{&quot;title&quot;:&quot;ByT5&quot;,&quot;id&quot;:&quot;model_doc/byt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/byt5&quot;},{&quot;title&quot;:&quot;CamemBERT&quot;,&quot;id&quot;:&quot;model_doc/camembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/camembert&quot;},{&quot;title&quot;:&quot;CANINE&quot;,&quot;id&quot;:&quot;model_doc/canine&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/canine&quot;},{&quot;title&quot;:&quot;CodeGen&quot;,&quot;id&quot;:&quot;model_doc/codegen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/codegen&quot;},{&quot;title&quot;:&quot;CodeLlama&quot;,&quot;id&quot;:&quot;model_doc/code_llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/code_llama&quot;},{&quot;title&quot;:&quot;ConvBERT&quot;,&quot;id&quot;:&quot;model_doc/convbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convbert&quot;},{&quot;title&quot;:&quot;CPM&quot;,&quot;id&quot;:&quot;model_doc/cpm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpm&quot;},{&quot;title&quot;:&quot;CPMANT&quot;,&quot;id&quot;:&quot;model_doc/cpmant&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpmant&quot;},{&quot;title&quot;:&quot;CTRL&quot;,&quot;id&quot;:&quot;model_doc/ctrl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ctrl&quot;},{&quot;title&quot;:&quot;DeBERTa&quot;,&quot;id&quot;:&quot;model_doc/deberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta&quot;},{&quot;title&quot;:&quot;DeBERTa-v2&quot;,&quot;id&quot;:&quot;model_doc/deberta-v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta-v2&quot;},{&quot;title&quot;:&quot;DialoGPT&quot;,&quot;id&quot;:&quot;model_doc/dialogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dialogpt&quot;},{&quot;title&quot;:&quot;DistilBERT&quot;,&quot;id&quot;:&quot;model_doc/distilbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/distilbert&quot;},{&quot;title&quot;:&quot;DPR&quot;,&quot;id&quot;:&quot;model_doc/dpr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpr&quot;},{&quot;title&quot;:&quot;ELECTRA&quot;,&quot;id&quot;:&quot;model_doc/electra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/electra&quot;},{&quot;title&quot;:&quot;Encoder Decoder 
Models&quot;,&quot;id&quot;:&quot;model_doc/encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encoder-decoder&quot;},{&quot;title&quot;:&quot;ERNIE&quot;,&quot;id&quot;:&quot;model_doc/ernie&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie&quot;},{&quot;title&quot;:&quot;ErnieM&quot;,&quot;id&quot;:&quot;model_doc/ernie_m&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie_m&quot;},{&quot;title&quot;:&quot;ESM&quot;,&quot;id&quot;:&quot;model_doc/esm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/esm&quot;},{&quot;title&quot;:&quot;Falcon&quot;,&quot;id&quot;:&quot;model_doc/falcon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/falcon&quot;},{&quot;title&quot;:&quot;FLAN-T5&quot;,&quot;id&quot;:&quot;model_doc/flan-t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-t5&quot;},{&quot;title&quot;:&quot;FLAN-UL2&quot;,&quot;id&quot;:&quot;model_doc/flan-ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-ul2&quot;},{&quot;title&quot;:&quot;FlauBERT&quot;,&quot;id&quot;:&quot;model_doc/flaubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flaubert&quot;},{&quot;title&quot;:&quot;FNet&quot;,&quot;id&quot;:&quot;model_doc/fnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fnet&quot;},{&quot;title&quot;:&quot;FSMT&quot;,&quot;id&quot;:&quot;model_doc/fsmt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fsmt&quot;},{&quot;title&quot;:&quot;Funnel Transformer&quot;,&quot;id&quot;:&quot;model_doc/funnel&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/funnel&quot;},{&quot;title&quot;:&quot;GPT&quot;,&quot;id&quot;:&quot;model_doc/openai-gpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/openai-gpt&quot;},{&quot;title&quot;:&quot;GPT Neo&quot;,&quot;id&quot;:&quot;model_doc/gpt_neo&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neo&quot;},{&quot;title&quot;:&quot;GPT NeoX&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox&quot;},{&quot;title&quot;:&quot;GPT NeoX Japanese&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox_japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese&quot;},{&quot;title&quot;:&quot;GPT-J&quot;,&quot;id&quot;:&quot;model_doc/gptj&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptj&quot;},{&quot;title&quot;:&quot;GPT2&quot;,&quot;id&quot;:&quot;model_doc/gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt2&quot;},{&quot;title&quot;:&quot;GPTBigCode&quot;,&quot;id&quot;:&quot;model_doc/gpt_bigcode&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode&quot;},{&quot;title&quot;:&quot;GPTSAN 
Japanese&quot;,&quot;id&quot;:&quot;model_doc/gptsan-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese&quot;},{&quot;title&quot;:&quot;GPTSw3&quot;,&quot;id&quot;:&quot;model_doc/gpt-sw3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt-sw3&quot;},{&quot;title&quot;:&quot;HerBERT&quot;,&quot;id&quot;:&quot;model_doc/herbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/herbert&quot;},{&quot;title&quot;:&quot;I-BERT&quot;,&quot;id&quot;:&quot;model_doc/ibert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ibert&quot;},{&quot;title&quot;:&quot;Jukebox&quot;,&quot;id&quot;:&quot;model_doc/jukebox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/jukebox&quot;},{&quot;title&quot;:&quot;LED&quot;,&quot;id&quot;:&quot;model_doc/led&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/led&quot;},{&quot;title&quot;:&quot;LLaMA&quot;,&quot;id&quot;:&quot;model_doc/llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama&quot;},{&quot;title&quot;:&quot;Llama2&quot;,&quot;id&quot;:&quot;model_doc/llama2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama2&quot;},{&quot;title&quot;:&quot;Longformer&quot;,&quot;id&quot;:&quot;model_doc/longformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longformer&quot;},{&quot;title&quot;:&quot;LongT5&quot;,&quot;id&quot;:&quot;model_doc/longt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longt5&quot;},{&quot;title&quot;:&quot;LUKE&quot;,&quot;id&quot;:&quot;model_doc/luke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/luke&quot;},{&quot;title&quot;:&quot;M2M100&quot;,&quot;id&quot;:&quot;model_doc/m2m_100&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/m2m_100&quot;},{&quot;title&quot;:&quot;MarianMT&quot;,&quot;id&quot;:&quot;model_doc/marian&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/marian&quot;},{&quot;title&quot;:&quot;MarkupLM&quot;,&quot;id&quot;:&quot;model_doc/markuplm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/markuplm&quot;},{&quot;title&quot;:&quot;MBart and 
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot;PLBart&quot;,&quot;id&quot;
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/
preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1wubvu4">Multiple choice</span></h1> <div class="flex space-x-1 absolute z-10 right-0 top-0"> <div class="relative colab-dropdown "><button class=" " type="button"><img alt="Open In Colab" class="!m-0" src="https://colab.research.google.com/assets/colab-badge.svg"></button> </div> <div class="relative colab-dropdown "><button class=" " type="button"><img alt="Open In Studio Lab" class="!m-0" src="https://studiolab.sagemaker.aws/studiolab.svg"></button> </div></div> <p data-svelte-h="svelte-gcifhg">A multiple choice task is similar to question answering, except several candidate answers are provided along with a context and the model is trained to select the correct answer.</p> <p data-svelte-h="svelte-1aff4p7">This guide will show you how to:</p> <ol data-svelte-h="svelte-sta7hj"><li>Finetune <a href="https://huggingface.co/bert-base-uncased" rel="nofollow">BERT</a> on the <code>regular</code> configuration of the <a href="https://huggingface.co/datasets/swag" rel="nofollow">SWAG</a> dataset to select the best answer given multiple options and some context.</li> <li>Use your finetuned model for inference.</li></ol> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400">The task illustrated in this tutorial is supported by the following model architectures: <p data-svelte-h="svelte-nxrbbi"><a href="../model_doc/albert">ALBERT</a>, <a href="../model_doc/bert">BERT</a>, <a href="../model_doc/big_bird">BigBird</a>, <a href="../model_doc/camembert">CamemBERT</a>, <a href="../model_doc/canine">CANINE</a>, <a href="../model_doc/convbert">ConvBERT</a>, <a href="../model_doc/data2vec-text">Data2VecText</a>, <a href="../model_doc/deberta-v2">DeBERTa-v2</a>, <a href="../model_doc/distilbert">DistilBERT</a>, <a href="../model_doc/electra">ELECTRA</a>, <a href="../model_doc/ernie">ERNIE</a>, <a href="../model_doc/ernie_m">ErnieM</a>, <a href="../model_doc/flaubert">FlauBERT</a>, <a href="../model_doc/fnet">FNet</a>, <a href="../model_doc/funnel">Funnel Transformer</a>, <a href="../model_doc/ibert">I-BERT</a>, <a href="../model_doc/longformer">Longformer</a>, <a href="../model_doc/luke">LUKE</a>, <a href="../model_doc/mega">MEGA</a>, <a href="../model_doc/megatron-bert">Megatron-BERT</a>, <a href="../model_doc/mobilebert">MobileBERT</a>, <a href="../model_doc/mpnet">MPNet</a>, <a href="../model_doc/mra">MRA</a>, <a href="../model_doc/nezha">Nezha</a>, <a href="../model_doc/nystromformer">Nyströmformer</a>, <a href="../model_doc/qdqbert">QDQBert</a>, <a href="../model_doc/rembert">RemBERT</a>, <a href="../model_doc/roberta">RoBERTa</a>, <a href="../model_doc/roberta-prelayernorm">RoBERTa-PreLayerNorm</a>, <a href="../model_doc/roc_bert">RoCBert</a>, <a href="../model_doc/roformer">RoFormer</a>, <a 
href="../model_doc/squeezebert">SqueezeBERT</a>, <a href="../model_doc/xlm">XLM</a>, <a href="../model_doc/xlm-roberta">XLM-RoBERTa</a>, <a href="../model_doc/xlm-roberta-xl">XLM-RoBERTa-XL</a>, <a href="../model_doc/xlnet">XLNet</a>, <a href="../model_doc/xmod">X-MOD</a>, <a href="../model_doc/yoso">YOSO</a></p></div> <p data-svelte-h="svelte-1c9nexd">Before you begin, make sure you have all the necessary libraries installed:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class="">pip install transformers datasets evaluate</pre></div> <p data-svelte-h="svelte-k76o1m">We encourage you to login to your Hugging Face account so you can upload and share your model with the community. 
When prompted, enter your token to log in:

```py
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```

## Load SWAG dataset

Start by loading the `regular` configuration of the SWAG dataset from the 🤗 Datasets library:

```py
>>> from datasets import load_dataset

>>> swag = load_dataset("swag", "regular")
```

Then take a look at an example:

```py
>>> swag["train"][0]
{'ending0': 'passes by walking down the street playing their instruments.',
 'ending1': 'has heard approaching them.',
 'ending2': "arrives and they're outside dancing and asleep.",
 'ending3': 'turns the lead singer watches the performance.',
 'fold-ind': '3416',
 'gold-source': 'gold',
 'label': 0,
 'sent1': 'Members of the procession walk down the street holding small horn brass instruments.',
 'sent2': 'A drum line',
 'startphrase': 'Members of the procession walk down the street holding small horn brass instruments. A drum line',
 'video-id': 'anetv_jkn6uvmqwh4'}
```

While it looks like there are a lot of fields here, it is actually pretty straightforward:

- `sent1` and `sent2`: these fields show how a sentence starts, and if you put the two together, you get the `startphrase` field.
- `ending0` to `ending3`: each suggests a possible way the sentence could end, but only one of them is correct.
- `label`: identifies the correct sentence ending.

## Preprocess

The next step is to load a BERT tokenizer to process the sentence starts and the four possible endings:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
```

The preprocessing function you want to create needs to:

1. Make four copies of the `sent1` field and combine each of them with `sent2` to recreate how a sentence starts.
2. Combine `sent2` with each of the four possible sentence endings.
3. Flatten these two lists so you can tokenize them, and then unflatten them afterward so each example has a corresponding `input_ids`, `attention_mask`, and `labels` field.

```py
>>> ending_names = ["ending0", "ending1", "ending2", "ending3"]


>>> def preprocess_function(examples):
...     first_sentences = [[context] * 4 for context in examples["sent1"]]
...     question_headers = examples["sent2"]
...     second_sentences = [
...         [f"{header} {examples[end][i]}" for end in ending_names] for i, header in enumerate(question_headers)
...     ]
...
...     first_sentences = sum(first_sentences, [])
...     second_sentences = sum(second_sentences, [])
...
...     tokenized_examples = tokenizer(first_sentences, second_sentences, truncation=True)
...     return {k: [v[i : i + 4] for i in range(0, len(v), 4)] for k, v in tokenized_examples.items()}
```

To apply the preprocessing function over the entire dataset, use the 🤗 Datasets [map](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.map) method. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once:

```py
tokenized_swag = swag.map(preprocess_function, batched=True)
```
🤗 Transformers doesn’t have a data collator for multiple choice, so you’ll need to adapt the [DataCollatorWithPadding](/docs/transformers/v4.34.0/en/main_classes/data_collator#transformers.DataCollatorWithPadding) to create a batch of examples. It’s more efficient to *dynamically pad* the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length.

`DataCollatorForMultipleChoice` flattens all the model inputs, applies padding, and then unflattens the results:

**Pytorch**

```py
>>> from dataclasses import dataclass
>>> from transformers.tokenization_utils_base import PreTrainedTokenizerBase, PaddingStrategy
>>> from typing import Optional, Union
>>> import torch


>>> @dataclass
... class DataCollatorForMultipleChoice:
...     """
...     Data collator that dynamically pads the inputs for multiple choice.
...     """
...
...     tokenizer: PreTrainedTokenizerBase
...     padding: Union[bool, str, PaddingStrategy] = True
...     max_length: Optional[int] = None
...     pad_to_multiple_of: Optional[int] = None
...
...     def __call__(self, features):
...         label_name = "label" if "label" in features[0].keys() else "labels"
...         labels = [feature.pop(label_name) for feature in features]
...         batch_size = len(features)
...         num_choices = len(features[0]["input_ids"])
...         flattened_features = [
...             [{k: v[i] for k, v in feature.items()} for i in range(num_choices)] for feature in features
...         ]
...         flattened_features = sum(flattened_features, [])
...
...         batch = self.tokenizer.pad(
...             flattened_features,
...             padding=self.padding,
...             max_length=self.max_length,
...             pad_to_multiple_of=self.pad_to_multiple_of,
...             return_tensors="pt",
...         )
...
...         batch = {k: v.view(batch_size, num_choices, -1) for k, v in batch.items()}
...         batch["labels"] = torch.tensor(labels, dtype=torch.int64)
...         return batch
```
**TensorFlow**

```py
>>> from dataclasses import dataclass
>>> from transformers.tokenization_utils_base import PreTrainedTokenizerBase, PaddingStrategy
>>> from typing import Optional, Union
>>> import tensorflow as tf


>>> @dataclass
... class DataCollatorForMultipleChoice:
...     """
...     Data collator that dynamically pads the inputs for multiple choice.
...     """
...
...     tokenizer: PreTrainedTokenizerBase
...     padding: Union[bool, str, PaddingStrategy] = True
...     max_length: Optional[int] = None
...     pad_to_multiple_of: Optional[int] = None
...
...     def __call__(self, features):
...         label_name = "label" if "label" in features[0].keys() else "labels"
...         labels = [feature.pop(label_name) for feature in features]
...         batch_size = len(features)
...         num_choices = len(features[0]["input_ids"])
...         flattened_features = [
...             [{k: v[i] for k, v in feature.items()} for i in range(num_choices)] for feature in features
...         ]
...         flattened_features = sum(flattened_features, [])
...
...         batch = self.tokenizer.pad(
...             flattened_features,
...             padding=self.padding,
...             max_length=self.max_length,
...             pad_to_multiple_of=self.pad_to_multiple_of,
...             return_tensors="tf",
...         )
...
...         batch = {k: tf.reshape(v, (batch_size, num_choices, -1)) for k, v in batch.items()}
...         batch["labels"] = tf.convert_to_tensor(labels, dtype=tf.int64)
...         return batch
```

## Evaluate

Including a metric during training is often helpful for evaluating your model’s performance. You can quickly load an evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library.
For this task, load the [accuracy](https://huggingface.co/spaces/evaluate-metric/accuracy) metric (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric):

```py
>>> import evaluate

>>> accuracy = evaluate.load("accuracy")
```

Then create a function that passes your predictions and labels to [compute](https://huggingface.co/docs/evaluate/v0.4.0/en/package_reference/main_classes#evaluate.EvaluationModule.compute) to calculate the accuracy:

```py
>>> import numpy as np


>>> def compute_metrics(eval_pred):
...     predictions, labels = eval_pred
...     predictions = np.argmax(predictions, axis=1)
...     return accuracy.compute(predictions=predictions, references=labels)
```

Your `compute_metrics` function is ready to go now, and you’ll return to it when you set up your training.
## Train

**Pytorch**

If you aren’t familiar with finetuning a model with the [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer), take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!

You’re ready to start training your model now! Load BERT with [AutoModelForMultipleChoice](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoModelForMultipleChoice):

```py
>>> from transformers import AutoModelForMultipleChoice, TrainingArguments, Trainer

>>> model = AutoModelForMultipleChoice.from_pretrained("bert-base-uncased")
```

At this point, only three steps remain:

1. Define your training hyperparameters in [TrainingArguments](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.TrainingArguments). The only required parameter is `output_dir`, which specifies where to save your model. You’ll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) will evaluate the accuracy and save the training checkpoint.
2. Pass the training arguments to [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) along with the model, dataset, tokenizer, data collator, and `compute_metrics` function.
3. Call [train()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.train) to finetune your model.

```py
>>> training_args = TrainingArguments(
...     output_dir="my_awesome_swag_model",
...     evaluation_strategy="epoch",
...     save_strategy="epoch",
...     load_best_model_at_end=True,
...     learning_rate=5e-5,
...     per_device_train_batch_size=16,
...     per_device_eval_batch_size=16,
...     num_train_epochs=3,
...     weight_decay=0.01,
...     push_to_hub=True,
... )

>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=tokenized_swag["train"],
...     eval_dataset=tokenized_swag["validation"],
...     tokenizer=tokenizer,
...     data_collator=DataCollatorForMultipleChoice(tokenizer=tokenizer),
...     compute_metrics=compute_metrics,
... )

>>> trainer.train()
```

Once training is completed, share your model to the Hub with the [push_to_hub()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.push_to_hub) method so everyone can use your model:

```py
>>> trainer.push_to_hub()
```
4.70978C2.48225 5.18777 2.6941 5.63442 3.0338 5.97411C3.37349 6.31381 3.82015 6.52567 4.29813 6.57382C4.77611 6.62197 5.25605 6.50345 5.65667 6.23833ZM2.83042 1.06666C3.35 0.862497 3.91625 0.749997 4.50792 0.749997C6.75458 0.749997 8.62375 2.36666 9.01583 4.5C8.88816 5.19404 8.60119 5.84899 8.1775 6.41333L6.56917 4.805C6.61694 4.48317 6.58868 4.15463 6.48664 3.84569C6.3846 3.53675 6.21162 3.256 5.98156 3.02594C5.7515 2.79588 5.47075 2.6229 5.16181 2.52086C4.85287 2.41882 4.52433 2.39056 4.2025 2.43833L2.83042 1.06708V1.06666Z" fill="currentColor"></path></svg> <span>Hide TensorFlow content</span></div></div> <div class="framework-content"><div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-1rd4nl8">If you aren’t familiar with finetuning a model with Keras, take a look at the basic tutorial <a href="../training#train-a-tensorflow-model-with-keras">here</a>!</p></div> To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters: <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> create_optimizer <span class="hljs-meta">&gt;&gt;&gt; </span>batch_size = <span class="hljs-number">16</span> <span class="hljs-meta">&gt;&gt;&gt; </span>num_train_epochs = <span class="hljs-number">2</span> <span class="hljs-meta">&gt;&gt;&gt; </span>total_train_steps = (<span class="hljs-built_in">len</span>(tokenized_swag[<span class="hljs-string">"train"</span>]) // batch_size) * num_train_epochs <span class="hljs-meta">&gt;&gt;&gt; </span>optimizer, schedule = create_optimizer(init_lr=<span class="hljs-number">5e-5</span>, num_warmup_steps=<span class="hljs-number">0</span>, num_train_steps=total_train_steps)</pre></div> <p data-svelte-h="svelte-lbqn4t">Then you can load BERT with <a href="/docs/transformers/v4.34.0/en/model_doc/auto#transformers.TFAutoModelForMultipleChoice">TFAutoModelForMultipleChoice</a>:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none 
```
>>> from transformers import TFAutoModelForMultipleChoice

>>> model = TFAutoModelForMultipleChoice.from_pretrained("bert-base-uncased")
```

Convert your datasets to the `tf.data.Dataset` format with [prepare_tf_dataset()](/docs/transformers/v4.34.0/en/main_classes/model#transformers.TFPreTrainedModel.prepare_tf_dataset):

```
>>> data_collator = DataCollatorForMultipleChoice(tokenizer=tokenizer)
>>> tf_train_set = model.prepare_tf_dataset(
...     tokenized_swag["train"],
...     shuffle=True,
...     batch_size=batch_size,
...     collate_fn=data_collator,
... )

>>> tf_validation_set = model.prepare_tf_dataset(
...     tokenized_swag["validation"],
...     shuffle=False,
...     batch_size=batch_size,
...     collate_fn=data_collator,
... )
```

Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method). Note that Transformers models all have a default task-relevant loss function, so you don’t need to specify one unless you want to:

```
>>> model.compile(optimizer=optimizer)  # No loss argument!
```

The last two things to set up before you start training are to compute the accuracy from the predictions, and to provide a way to push your model to the Hub.
Both are done by using [Keras callbacks](../main_classes/keras_callbacks).

Pass your `compute_metrics` function to [KerasMetricCallback](/docs/transformers/v4.34.0/en/main_classes/keras_callbacks#transformers.KerasMetricCallback):

```
>>> from transformers.keras_callbacks import KerasMetricCallback

>>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set)
```

Specify where to push your model and tokenizer in the [PushToHubCallback](/docs/transformers/v4.34.0/en/main_classes/keras_callbacks#transformers.PushToHubCallback):

```
>>> from transformers.keras_callbacks import PushToHubCallback

>>> push_to_hub_callback = PushToHubCallback(
...     output_dir="my_awesome_model",
...     tokenizer=tokenizer,
... )
```

Then bundle your callbacks together:

```
>>> callbacks = [metric_callback, push_to_hub_callback]
```

Finally, you’re ready to start training your model! Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training and validation datasets, the number of epochs, and your callbacks to finetune the model:

```
>>> model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=2, callbacks=callbacks)
```

Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!
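As an optional sanity check that is not part of the original guide, you can reload the checkpoint you just pushed straight from the Hub; the repository id `your-username/my_awesome_model` below is a hypothetical placeholder for your own namespace:

```
>>> from transformers import TFAutoModelForMultipleChoice

>>> # Hypothetical Hub repo id -- replace with your own username/namespace
>>> reloaded_model = TFAutoModelForMultipleChoice.from_pretrained("your-username/my_awesome_model")
```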
For a more in-depth example of how to finetune a model for multiple choice, take a look at the corresponding [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb) or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb).

## Inference

Great, now that you’ve finetuned a model, you can use it for inference!

Come up with some text and two candidate answers:

```
>>> prompt = "France has a bread law, Le Décret Pain, with strict rules on what is allowed in a traditional baguette."
>>> candidate1 = "The law does not apply to croissants and brioche."
>>> candidate2 = "The law applies to baguettes."
```
Tokenize each prompt and candidate answer pair and return PyTorch tensors. You should also create some `labels`:

```
>>> import torch
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_swag_model")
>>> inputs = tokenizer([[prompt, candidate1], [prompt, candidate2]], return_tensors="pt", padding=True)
>>> labels = torch.tensor(0).unsqueeze(0)
```

Pass your inputs and labels to the model and return the `logits`:

```
>>> from transformers import AutoModelForMultipleChoice

>>> model = AutoModelForMultipleChoice.from_pretrained("my_awesome_swag_model")
>>> outputs = model(**{k: v.unsqueeze(0) for k, v in inputs.items()}, labels=labels)
>>> logits = outputs.logits
```

Get the class with the highest probability:

```
>>> predicted_class = logits.argmax().item()
>>> predicted_class
0
```
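The predicted index follows the order in which the candidates were tokenized, so as a small extra step that is not in the original guide you can map it back to the answer text:

```
>>> # Optional: recover the predicted answer text from the index
>>> candidates = [candidate1, candidate2]
>>> candidates[predicted_class]
'The law does not apply to croissants and brioche.'
```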
Tokenize each prompt and candidate answer pair and return TensorFlow tensors:

```
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_swag_model")
>>> inputs = tokenizer([[prompt, candidate1], [prompt, candidate2]], return_tensors="tf", padding=True)
```

Pass your inputs to the model and return the `logits`:

```
>>> import tensorflow as tf
>>> from transformers import TFAutoModelForMultipleChoice

>>> model = TFAutoModelForMultipleChoice.from_pretrained("my_awesome_swag_model")
>>> inputs = {k: tf.expand_dims(v, 0) for k, v in inputs.items()}
>>> outputs = model(inputs)
>>> logits = outputs.logits
```

Get the class with the highest probability:

```
>>> predicted_class = int(tf.math.argmax(logits, axis=-1)[0])
>>> predicted_class
0
```
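If you also want a probability per candidate rather than just the arg max, one option that is not shown in the original guide is to apply a softmax over the logits:

```
>>> # Optional: turn the two logits into probabilities over the candidate answers
>>> probs = tf.nn.softmax(logits, axis=-1)[0]
>>> [float(p) for p in probs]
```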
2023-10-05T13:33:53.751Z
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/blip2
The documentation page MODEL\_DOC/BLIP2 doesn’t exist in v4.34.0, but exists on the main version. Click [here](/docs/transformers/main/en/model_doc/blip2) to redirect to the main version of the documentation.
2023-10-05T13:33:53.784Z
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/(http://www.robots.ox.ac.uk/~vgg/data/text/)
The documentation page MODEL\_DOC/(HTTP://WWW.ROBOTS.OX.AC.UK/~VGG/DATA/TEXT/) doesn’t exist in v4.34.0, but exists on the main version. Click [here](/docs/transformers/main/en/model_doc/(http://www.robots.ox.ac.uk/~vgg/data/text/)) to redirect to the main version of the documentation.
2023-10-05T13:33:53.919Z
Monocular depth estimation
https://huggingface.co/docs/transformers/v4.34.0/en/tasks/monocular_depth_estimation
# Monocular depth estimation

Monocular depth estimation is a computer vision task that involves predicting the depth information of a scene from a single image. In other words, it is the process of estimating the distance of objects in a scene from a single camera viewpoint.

Monocular depth estimation has various applications, including 3D reconstruction, augmented reality, autonomous driving, and robotics. It is a challenging task as it requires the model to understand the complex relationships between objects in the scene and the corresponding depth information, which can be affected by factors such as lighting conditions, occlusion, and texture.

The task illustrated in this tutorial is supported by the following model architectures: [DPT](../model_doc/dpt), [GLPN](../model_doc/glpn)

In this guide you’ll learn how to:

- create a depth estimation pipeline
- run depth estimation inference by hand

Before you begin, make sure you have all the necessary libraries installed:

```
pip install -q transformers
```

## Depth estimation pipeline

The simplest way to try out inference with a model supporting depth estimation is to use the corresponding [pipeline()](/docs/transformers/v4.34.0/en/main_classes/pipelines#transformers.pipeline). Instantiate a pipeline from a [checkpoint on the Hugging Face Hub](https://huggingface.co/models?pipeline_tag=depth-estimation&sort=downloads):

```
>>> from transformers import pipeline

>>> checkpoint = "vinvino02/glpn-nyu"
>>> depth_estimator = pipeline("depth-estimation", model=checkpoint)
```

Next, choose an image to analyze:

```
>>> from PIL import Image
>>> import requests

>>> url = "https://unsplash.com/photos/HwBAsSbPBDU/download?ixid=MnwxMjA3fDB8MXxzZWFyY2h8MzR8fGNhciUyMGluJTIwdGhlJTIwc3RyZWV0fGVufDB8MHx8fDE2Nzg5MDEwODg&force=true&w=640"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> image
```

![Photo of a busy street](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/depth-estimation-example.jpg)

Pass the image to the pipeline:

```
>>> predictions = depth_estimator(image)
```

The pipeline returns a dictionary with two entries. The first one, called `predicted_depth`, is a tensor with the values being the depth expressed in meters for each pixel. The second one, `depth`, is a PIL image that visualizes the depth estimation result.

Let’s take a look at the visualized result:

![Depth estimation visualization](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/depth-visualization.png)

## Depth estimation inference by hand

Now that you’ve seen how to use the depth estimation pipeline, let’s see how we can replicate the same result by hand.

Start by loading the model and associated processor from a [checkpoint on the Hugging Face Hub](https://huggingface.co/models?pipeline_tag=depth-estimation&sort=downloads). Here we’ll use the same checkpoint as before:

```
>>> from transformers import AutoImageProcessor, AutoModelForDepthEstimation

>>> checkpoint = "vinvino02/glpn-nyu"
>>> image_processor = AutoImageProcessor.from_pretrained(checkpoint)
>>> model = AutoModelForDepthEstimation.from_pretrained(checkpoint)
```

Prepare the image input for the model using the `image_processor` that will take care of the necessary image transformations such as resizing and normalization:

```
>>> pixel_values = image_processor(image, return_tensors="pt").pixel_values
```

Pass the prepared inputs through the model:

```
>>> import torch

>>> with torch.no_grad():
...     outputs = model(pixel_values)
...     predicted_depth = outputs.predicted_depth
```

Visualize the results:

```
>>> import numpy as np

>>> prediction = torch.nn.functional.interpolate(
...     predicted_depth.unsqueeze(1),
...     size=image.size[::-1],
...     mode="bicubic",
...     align_corners=False,
... ).squeeze()
>>> output = prediction.numpy()

>>> formatted = (output * 255 / np.max(output)).astype("uint8")
>>> depth = Image.fromarray(formatted)
>>> depth
```

![Depth estimation visualization](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/depth-visualization.png)
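Because `depth` is a regular `PIL.Image`, you can keep the visualized result around for later comparison; the filename below is just an illustrative choice, not something the guide prescribes:

```
>>> # Optional: save the visualized depth map to disk (filename is arbitrary)
>>> depth.save("busy_street_depth.png")
```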
choice&quot;,&quot;id&quot;:&quot;tasks/multiple_choice&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/multiple_choice&quot;}]},{&quot;title&quot;:&quot;Audio&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio classification&quot;,&quot;id&quot;:&quot;tasks/audio_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/audio_classification&quot;},{&quot;title&quot;:&quot;Automatic speech recognition&quot;,&quot;id&quot;:&quot;tasks/asr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/asr&quot;}]},{&quot;title&quot;:&quot;Computer Vision&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Image classification&quot;,&quot;id&quot;:&quot;tasks/image_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/image_classification&quot;},{&quot;title&quot;:&quot;Semantic segmentation&quot;,&quot;id&quot;:&quot;tasks/semantic_segmentation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/semantic_segmentation&quot;},{&quot;title&quot;:&quot;Video classification&quot;,&quot;id&quot;:&quot;tasks/video_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/video_classification&quot;},{&quot;title&quot;:&quot;Object detection&quot;,&quot;id&quot;:&quot;tasks/object_detection&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/object_detection&quot;},{&quot;title&quot;:&quot;Zero-shot object detection&quot;,&quot;id&quot;:&quot;tasks/zero_shot_object_detection&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/zero_shot_object_detection&quot;},{&quot;title&quot;:&quot;Zero-shot image classification&quot;,&quot;id&quot;:&quot;tasks/zero_shot_image_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/zero_shot_image_classification&quot;},{&quot;title&quot;:&quot;Depth estimation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tasks/monocular_depth_estimation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/monocular_depth_estimation&quot;}]},{&quot;title&quot;:&quot;Multimodal&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Image captioning&quot;,&quot;id&quot;:&quot;tasks/image_captioning&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/image_captioning&quot;},{&quot;title&quot;:&quot;Document Question Answering&quot;,&quot;id&quot;:&quot;tasks/document_question_answering&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/document_question_answering&quot;},{&quot;title&quot;:&quot;Visual Question Answering&quot;,&quot;id&quot;:&quot;tasks/visual_question_answering&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/visual_question_answering&quot;},{&quot;title&quot;:&quot;Text to speech&quot;,&quot;id&quot;:&quot;tasks/text-to-speech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/text-to-speech&quot;}]},{&quot;title&quot;:&quot;Generation&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Customize the generation strategy&quot;,&quot;id&quot;:&quot;generation_strategies&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/generation_strategies&quot;}]},{&quot;title&quot;:&quot;Prompting&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Image tasks with 
IDEFICS&quot;,&quot;id&quot;:&quot;tasks/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/idefics&quot;}]}]},{&quot;title&quot;:&quot;Developer guides&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Use fast tokenizers from 🤗 Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;fast_tokenizers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/fast_tokenizers&quot;},{&quot;title&quot;:&quot;Run inference with multilingual models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;multilingual&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/multilingual&quot;},{&quot;title&quot;:&quot;Use model-specific APIs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;create_a_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/create_a_model&quot;},{&quot;title&quot;:&quot;Share a custom model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;custom_models&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/custom_models&quot;},{&quot;title&quot;:&quot;Templates for chat models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;chat_templating&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/chat_templating&quot;},{&quot;title&quot;:&quot;Run training on Amazon SageMaker&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;sagemaker&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/sagemaker&quot;},{&quot;title&quot;:&quot;Export to ONNX&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;serialization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/serialization&quot;},{&quot;title&quot;:&quot;Export to TFLite&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tflite&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tflite&quot;},{&quot;title&quot;:&quot;Export to TorchScript&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;torchscript&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/torchscript&quot;},{&quot;title&quot;:&quot;Benchmarks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;benchmarks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/benchmarks&quot;},{&quot;title&quot;:&quot;Notebooks with examples&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;notebooks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/notebooks&quot;},{&quot;title&quot;:&quot;Community resources&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;community&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/community&quot;},{&quot;title&quot;:&quot;Custom Tools and Prompts&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;custom_tools&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/custom_tools&quot;},{&quot;title&quot;:&quot;Troubleshoot&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;troubleshooting&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/troubleshooting&quot;}]},{&quot;title&quot;:&quot;Performance and scalability&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Overview&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;performance&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/performance&quot;},{&quot;title&quot;:&quot;Efficient training techniques&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Methods and tools for efficient training on a single 
GPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_gpu_one&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_gpu_one&quot;},{&quot;title&quot;:&quot;Multiple GPUs and parallelism&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_gpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_gpu_many&quot;},{&quot;title&quot;:&quot;Efficient training on CPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_cpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_cpu&quot;},{&quot;title&quot;:&quot;Distributed CPU training&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_cpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_cpu_many&quot;},{&quot;title&quot;:&quot;Training on TPUs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_tpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_tpu&quot;},{&quot;title&quot;:&quot;Training on TPU with TensorFlow&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_tpu_tf&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_tpu_tf&quot;},{&quot;title&quot;:&quot;Training on Specialized Hardware&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_special&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_special&quot;},{&quot;title&quot;:&quot;Custom hardware for training&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_hardware&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_hardware&quot;},{&quot;title&quot;:&quot;Hyperparameter Search using Trainer API&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;hpo_train&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/hpo_train&quot;}]},{&quot;title&quot;:&quot;Optimizing inference&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Inference on CPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_cpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_cpu&quot;},{&quot;title&quot;:&quot;Inference on one GPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_gpu_one&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_gpu_one&quot;},{&quot;title&quot;:&quot;Inference on many GPUs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_gpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_gpu_many&quot;},{&quot;title&quot;:&quot;Inference on Specialized Hardware&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_special&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_special&quot;}]},{&quot;title&quot;:&quot;Instantiating a big model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;big_models&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/big_models&quot;},{&quot;title&quot;:&quot;Troubleshooting&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;debugging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/debugging&quot;},{&quot;title&quot;:&quot;XLA Integration for TensorFlow Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tf_xla&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tf_xla&quot;},{&quot;title&quot;:&quot;Optimize inference using 
`torch.compile()`&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_torch_compile&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_torch_compile&quot;}]},{&quot;title&quot;:&quot;Contribute&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;How to contribute to transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;contributing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/contributing&quot;},{&quot;title&quot;:&quot;How to add a model to 🤗 Transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_new_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_new_model&quot;},{&quot;title&quot;:&quot;How to convert a 🤗 Transformers model to TensorFlow?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_tensorflow_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_tensorflow_model&quot;},{&quot;title&quot;:&quot;How to add a pipeline to 🤗 Transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_new_pipeline&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_new_pipeline&quot;},{&quot;title&quot;:&quot;Testing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;testing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/testing&quot;},{&quot;title&quot;:&quot;Checks on a Pull Request&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pr_checks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pr_checks&quot;}]},{&quot;title&quot;:&quot;Conceptual guides&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Philosophy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;philosophy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/philosophy&quot;},{&quot;title&quot;:&quot;Glossary&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;glossary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/glossary&quot;},{&quot;title&quot;:&quot;What 🤗 Transformers can do&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;task_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/task_summary&quot;},{&quot;title&quot;:&quot;How 🤗 Transformers solve tasks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tasks_explained&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks_explained&quot;},{&quot;title&quot;:&quot;The Transformer model family&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_summary&quot;},{&quot;title&quot;:&quot;Summary of the tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tokenizer_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tokenizer_summary&quot;},{&quot;title&quot;:&quot;Attention mechanisms&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;attention&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/attention&quot;},{&quot;title&quot;:&quot;Padding and truncation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pad_truncation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pad_truncation&quot;},{&quot;title&quot;:&quot;BERTology&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;bertology&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/bertology&quot;},{&quot;title&quot;:&quot;Perplexity of fixed-length 
models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perplexity&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perplexity&quot;},{&quot;title&quot;:&quot;Pipelines for webserver inference&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pipeline_webserver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pipeline_webserver&quot;},{&quot;title&quot;:&quot;Model training anatomy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_memory_anatomy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_memory_anatomy&quot;}]},{&quot;title&quot;:&quot;API&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Main Classes&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Agents and Tools&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/agent&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/agent&quot;},{&quot;title&quot;:&quot;Auto Classes&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/auto&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/auto&quot;},{&quot;title&quot;:&quot;Callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/callback&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/callback&quot;},{&quot;title&quot;:&quot;Configuration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/configuration&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/configuration&quot;},{&quot;title&quot;:&quot;Data Collator&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/data_collator&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/data_collator&quot;},{&quot;title&quot;:&quot;Keras callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/keras_callbacks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/keras_callbacks&quot;},{&quot;title&quot;:&quot;Logging&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/logging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/logging&quot;},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/model&quot;},{&quot;title&quot;:&quot;Text Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/text_generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/text_generation&quot;},{&quot;title&quot;:&quot;ONNX&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/onnx&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/onnx&quot;},{&quot;title&quot;:&quot;Optimization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/optimizer_schedules&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/optimizer_schedules&quot;},{&quot;title&quot;:&quot;Model 
outputs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/output&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/output&quot;},{&quot;title&quot;:&quot;Pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/pipelines&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/pipelines&quot;},{&quot;title&quot;:&quot;Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/processors&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/processors&quot;},{&quot;title&quot;:&quot;Quantization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/quantization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/quantization&quot;},{&quot;title&quot;:&quot;Tokenizer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/tokenizer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/tokenizer&quot;},{&quot;title&quot;:&quot;Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/trainer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/trainer&quot;},{&quot;title&quot;:&quot;DeepSpeed Integration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/deepspeed&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/deepspeed&quot;},{&quot;title&quot;:&quot;Feature Extractor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/feature_extractor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/feature_extractor&quot;},{&quot;title&quot;:&quot;Image Processor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/image_processor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/image_processor&quot;}]},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Text 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALBERT&quot;,&quot;id&quot;:&quot;model_doc/albert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/albert&quot;},{&quot;title&quot;:&quot;BART&quot;,&quot;id&quot;:&quot;model_doc/bart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bart&quot;},{&quot;title&quot;:&quot;BARThez&quot;,&quot;id&quot;:&quot;model_doc/barthez&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/barthez&quot;},{&quot;title&quot;:&quot;BARTpho&quot;,&quot;id&quot;:&quot;model_doc/bartpho&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bartpho&quot;},{&quot;title&quot;:&quot;BERT&quot;,&quot;id&quot;:&quot;model_doc/bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert&quot;},{&quot;title&quot;:&quot;BertGeneration&quot;,&quot;id&quot;:&quot;model_doc/bert-generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-generation&quot;},{&quot;title&quot;:&quot;BertJapanese&quot;,&quot;id&quot;:&quot;model_doc/bert-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-japanese&quot;},{&quot;title&quot;:&quot;Bertweet&quot;,&quot;id&quot;:&quot;model_doc/bertweet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bertweet&quot;},{&quot;title&quot;:&quot;BigBird&quot;,&quot;id&quot;:&quot;model_doc/big_bird&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/big_bird&quot;},{&quot;title&quot;:&quot;BigBirdPegasus&quot;,&quot;id&quot;:&quot;model_doc/bigbird_pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bigbird_pegasus&quot;},{&quot;title&quot;:&quot;BioGpt&quot;,&quot;id&quot;:&quot;model_doc/biogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/biogpt&quot;},{&quot;title&quot;:&quot;Blenderbot&quot;,&quot;id&quot;:&quot;model_doc/blenderbot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot&quot;},{&quot;title&quot;:&quot;Blenderbot 
Small&quot;,&quot;id&quot;:&quot;model_doc/blenderbot-small&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot-small&quot;},{&quot;title&quot;:&quot;BLOOM&quot;,&quot;id&quot;:&quot;model_doc/bloom&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bloom&quot;},{&quot;title&quot;:&quot;BORT&quot;,&quot;id&quot;:&quot;model_doc/bort&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bort&quot;},{&quot;title&quot;:&quot;ByT5&quot;,&quot;id&quot;:&quot;model_doc/byt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/byt5&quot;},{&quot;title&quot;:&quot;CamemBERT&quot;,&quot;id&quot;:&quot;model_doc/camembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/camembert&quot;},{&quot;title&quot;:&quot;CANINE&quot;,&quot;id&quot;:&quot;model_doc/canine&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/canine&quot;},{&quot;title&quot;:&quot;CodeGen&quot;,&quot;id&quot;:&quot;model_doc/codegen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/codegen&quot;},{&quot;title&quot;:&quot;CodeLlama&quot;,&quot;id&quot;:&quot;model_doc/code_llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/code_llama&quot;},{&quot;title&quot;:&quot;ConvBERT&quot;,&quot;id&quot;:&quot;model_doc/convbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convbert&quot;},{&quot;title&quot;:&quot;CPM&quot;,&quot;id&quot;:&quot;model_doc/cpm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpm&quot;},{&quot;title&quot;:&quot;CPMANT&quot;,&quot;id&quot;:&quot;model_doc/cpmant&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpmant&quot;},{&quot;title&quot;:&quot;CTRL&quot;,&quot;id&quot;:&quot;model_doc/ctrl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ctrl&quot;},{&quot;title&quot;:&quot;DeBERTa&quot;,&quot;id&quot;:&quot;model_doc/deberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta&quot;},{&quot;title&quot;:&quot;DeBERTa-v2&quot;,&quot;id&quot;:&quot;model_doc/deberta-v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta-v2&quot;},{&quot;title&quot;:&quot;DialoGPT&quot;,&quot;id&quot;:&quot;model_doc/dialogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dialogpt&quot;},{&quot;title&quot;:&quot;DistilBERT&quot;,&quot;id&quot;:&quot;model_doc/distilbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/distilbert&quot;},{&quot;title&quot;:&quot;DPR&quot;,&quot;id&quot;:&quot;model_doc/dpr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpr&quot;},{&quot;title&quot;:&quot;ELECTRA&quot;,&quot;id&quot;:&quot;model_doc/electra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/electra&quot;},{&quot;title&quot;:&quot;Encoder Decoder 
Models&quot;,&quot;id&quot;:&quot;model_doc/encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encoder-decoder&quot;},{&quot;title&quot;:&quot;ERNIE&quot;,&quot;id&quot;:&quot;model_doc/ernie&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie&quot;},{&quot;title&quot;:&quot;ErnieM&quot;,&quot;id&quot;:&quot;model_doc/ernie_m&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie_m&quot;},{&quot;title&quot;:&quot;ESM&quot;,&quot;id&quot;:&quot;model_doc/esm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/esm&quot;},{&quot;title&quot;:&quot;Falcon&quot;,&quot;id&quot;:&quot;model_doc/falcon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/falcon&quot;},{&quot;title&quot;:&quot;FLAN-T5&quot;,&quot;id&quot;:&quot;model_doc/flan-t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-t5&quot;},{&quot;title&quot;:&quot;FLAN-UL2&quot;,&quot;id&quot;:&quot;model_doc/flan-ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-ul2&quot;},{&quot;title&quot;:&quot;FlauBERT&quot;,&quot;id&quot;:&quot;model_doc/flaubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flaubert&quot;},{&quot;title&quot;:&quot;FNet&quot;,&quot;id&quot;:&quot;model_doc/fnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fnet&quot;},{&quot;title&quot;:&quot;FSMT&quot;,&quot;id&quot;:&quot;model_doc/fsmt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fsmt&quot;},{&quot;title&quot;:&quot;Funnel Transformer&quot;,&quot;id&quot;:&quot;model_doc/funnel&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/funnel&quot;},{&quot;title&quot;:&quot;GPT&quot;,&quot;id&quot;:&quot;model_doc/openai-gpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/openai-gpt&quot;},{&quot;title&quot;:&quot;GPT Neo&quot;,&quot;id&quot;:&quot;model_doc/gpt_neo&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neo&quot;},{&quot;title&quot;:&quot;GPT NeoX&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox&quot;},{&quot;title&quot;:&quot;GPT NeoX Japanese&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox_japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese&quot;},{&quot;title&quot;:&quot;GPT-J&quot;,&quot;id&quot;:&quot;model_doc/gptj&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptj&quot;},{&quot;title&quot;:&quot;GPT2&quot;,&quot;id&quot;:&quot;model_doc/gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt2&quot;},{&quot;title&quot;:&quot;GPTBigCode&quot;,&quot;id&quot;:&quot;model_doc/gpt_bigcode&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode&quot;},{&quot;title&quot;:&quot;GPTSAN 
Japanese&quot;,&quot;id&quot;:&quot;model_doc/gptsan-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese&quot;},{&quot;title&quot;:&quot;GPTSw3&quot;,&quot;id&quot;:&quot;model_doc/gpt-sw3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt-sw3&quot;},{&quot;title&quot;:&quot;HerBERT&quot;,&quot;id&quot;:&quot;model_doc/herbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/herbert&quot;},{&quot;title&quot;:&quot;I-BERT&quot;,&quot;id&quot;:&quot;model_doc/ibert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ibert&quot;},{&quot;title&quot;:&quot;Jukebox&quot;,&quot;id&quot;:&quot;model_doc/jukebox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/jukebox&quot;},{&quot;title&quot;:&quot;LED&quot;,&quot;id&quot;:&quot;model_doc/led&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/led&quot;},{&quot;title&quot;:&quot;LLaMA&quot;,&quot;id&quot;:&quot;model_doc/llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama&quot;},{&quot;title&quot;:&quot;Llama2&quot;,&quot;id&quot;:&quot;model_doc/llama2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama2&quot;},{&quot;title&quot;:&quot;Longformer&quot;,&quot;id&quot;:&quot;model_doc/longformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longformer&quot;},{&quot;title&quot;:&quot;LongT5&quot;,&quot;id&quot;:&quot;model_doc/longt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longt5&quot;},{&quot;title&quot;:&quot;LUKE&quot;,&quot;id&quot;:&quot;model_doc/luke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/luke&quot;},{&quot;title&quot;:&quot;M2M100&quot;,&quot;id&quot;:&quot;model_doc/m2m_100&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/m2m_100&quot;},{&quot;title&quot;:&quot;MarianMT&quot;,&quot;id&quot;:&quot;model_doc/marian&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/marian&quot;},{&quot;title&quot;:&quot;MarkupLM&quot;,&quot;id&quot;:&quot;model_doc/markuplm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/markuplm&quot;},{&quot;title&quot;:&quot;MBart and 
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot;PLBart&quot;,&quot;id&quot;
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/
2023-10-05T13:33:54.169Z
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/Pop2Piano#training
The documentation page MODEL\_DOC/POP2PIANO doesn’t exist in v4.34.0, but exists on the main version. Click [here](/docs/transformers/main/en/model_doc/Pop2Piano) to redirect to the main version of the documentation.
2023-10-05T13:33:54.209Z
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/generation_strategies
The documentation page MODEL\_DOC/GENERATION\_STRATEGIES doesn’t exist in v4.34.0, but exists on the main version. Click [here](/docs/transformers/main/en/model_doc/generation_strategies) to redirect to the main version of the documentation.
2023-10-05T13:33:54.215Z
https://huggingface.co/docs/transformers/v4.34.0/en/tasks/question-answering
The documentation page TASKS/QUESTION-ANSWERING doesn’t exist in v4.34.0, but exists on the main version. Click [here](/docs/transformers/main/en/tasks/question-answering) to redirect to the main version of the documentation.
2023-10-05T13:33:54.309Z
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/examples/research_projects/quantization-qdqbert
The documentation page MODEL\_DOC/EXAMPLES/RESEARCH\_PROJECTS/QUANTIZATION-QDQBERT doesn’t exist in v4.34.0, but exists on the main version. Click [here](/docs/transformers/main/en/model_doc/examples/research_projects/quantization-qdqbert) to redirect to the main version of the documentation.
2023-10-05T13:33:54.599Z
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/[https://arxiv.org/abs/2102.03334](https://arxiv.org/abs/2209.14156)
The documentation page MODEL\_DOC/\[HTTPS://ARXIV.ORG/ABS/2102.03334\](HTTPS://ARXIV.ORG/ABS/2209.14156) doesn’t exist in v4.34.0, but exists on the main version. Click [here](/docs/transformers/main/en/model_doc/%5Bhttps://arxiv.org/abs/2102.03334%5D(https://arxiv.org/abs/2209.14156)) to redirect to the main version of the documentation.
2023-10-05T13:33:54.649Z
https://huggingface.co/docs/transformers/v4.34.0/en/main_classes/text_generation.md
The documentation page MAIN\_CLASSES/TEXT\_GENERATION.MD doesn’t exist in v4.34.0, but exists on the main version. Click [here](/docs/transformers/main/en/main_classes/text_generation.md) to redirect to the main version of the documentation.
2023-10-05T13:33:54.679Z
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/data2vec-text
The documentation page MODEL\_DOC/DATA2VEC-TEXT doesn’t exist in v4.34.0, but exists on the main version. Click [here](/docs/transformers/main/en/model_doc/data2vec-text) to redirect to the main version of the documentation.
2023-10-05T13:33:54.838Z
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/data2vec-audio
The documentation page MODEL\_DOC/DATA2VEC-AUDIO doesn’t exist in v4.34.0, but exists on the main version. Click [here](/docs/transformers/main/en/model_doc/data2vec-audio) to redirect to the main version of the documentation.
2023-10-05T13:33:54.875Z
Document Question Answering
https://huggingface.co/docs/transformers/v4.34.0/en/tasks/document_question_answering
# Document Question Answering Document Question Answering, also referred to as Document Visual Question Answering, is a task that involves providing answers to questions posed about document images. The input to models supporting this task is typically a combination of an image and a question, and the output is an answer expressed in natural language. These models utilize multiple modalities, including text, the positions of words (bounding boxes), and the image itself. This guide illustrates how to: - Fine-tune [LayoutLMv2](../model_doc/layoutlmv2) on the [DocVQA dataset](https://huggingface.co/datasets/nielsr/docvqa_1200_examples_donut). - Use your fine-tuned model for inference. The task illustrated in this tutorial is supported by the following model architectures: [LayoutLM](../model_doc/layoutlm), [LayoutLMv2](../model_doc/layoutlmv2), [LayoutLMv3](../model_doc/layoutlmv3) LayoutLMv2 solves the document question-answering task by adding a question-answering head on top of the final hidden states of the tokens to predict the positions of the start and end tokens of the answer. In other words, the problem is treated as extractive question answering: given the context, extract which piece of information answers the question. The context comes from the output of an OCR engine; in this guide, it is Google’s Tesseract. Before you begin, make sure you have all the necessary libraries installed. LayoutLMv2 depends on detectron2, torchvision and tesseract. ``` pip install -q transformers datasets ``` ``` pip install 'git+https://github.com/facebookresearch/detectron2.git' pip install torchvision ``` ``` sudo apt install tesseract-ocr pip install -q pytesseract ``` Once you have installed all of the dependencies, restart your runtime. We encourage you to share your model with the community. Log in to your Hugging Face account to upload it to the 🤗 Hub. When prompted, enter your token to log in: ``` >>> from huggingface_hub import notebook_login >>> notebook_login() ``` Let’s define some global variables. ``` >>> model_checkpoint = "microsoft/layoutlmv2-base-uncased" >>> batch_size = 4 ``` ## Load the data In this guide we use a small sample of preprocessed DocVQA that you can find on the 🤗 Hub. If you’d like to use the full DocVQA dataset, you can register and download it on the [DocVQA homepage](https://rrc.cvc.uab.es/?ch=17). If you do so, check out [how to load files into a 🤗 dataset](https://huggingface.co/docs/datasets/loading#local-and-remote-files) to proceed with this guide. ``` >>> from datasets import load_dataset >>> dataset = load_dataset("nielsr/docvqa_1200_examples") >>> dataset DatasetDict({ train: Dataset({ features: ['id', 'image', 'query', 'answers', 'words', 'bounding_boxes', 'answer'], num_rows: 1000 }) test: Dataset({ features: ['id', 'image', 'query', 'answers', 'words', 'bounding_boxes', 'answer'], num_rows: 200 }) }) ``` As you can see, the dataset is already split into train and test sets. Take a look at a random example to familiarize yourself with the features. 
``` >>> dataset["train"].features ``` Here’s what the individual fields represent: - `id`: the example’s id - `image`: a PIL.Image.Image object containing the document image - `query`: the question string - the question asked in natural language, available in several languages - `answers`: a list of correct answers provided by human annotators - `words` and `bounding_boxes`: the results of OCR, which we will not use here - `answer`: an answer matched by a different model, which we will not use here Let’s leave only English questions, and drop the `answer` feature which appears to contain predictions by another model. We’ll also take the first of the answers from the set provided by the annotators. Alternatively, you can randomly sample it. ``` >>> updated_dataset = dataset.map(lambda example: {"question": example["query"]["en"]}, remove_columns=["query"]) >>> updated_dataset = updated_dataset.map( ... lambda example: {"answer": example["answers"][0]}, remove_columns=["answer", "answers"] ... ) ``` Note that the LayoutLMv2 checkpoint that we use in this guide has been trained with `max_position_embeddings = 512` (you can find this information in the [checkpoint’s `config.json` file](https://huggingface.co/microsoft/layoutlmv2-base-uncased/blob/main/config.json#L18)). We could truncate the examples, but to avoid the situation where the answer might be at the end of a large document and end up truncated, here we’ll instead remove the few examples where the encoded sequence is likely to end up longer than 512 tokens. If most of the documents in your dataset are long, you can implement a sliding window strategy - check out [this notebook](https://github.com/huggingface/notebooks/blob/main/examples/question_answering.ipynb) for details. ``` >>> updated_dataset = updated_dataset.filter(lambda x: len(x["words"]) + len(x["question"].split()) < 512) ``` At this point let’s also remove the OCR features from this dataset. These are the results of OCR performed for fine-tuning a different model. They would still require some processing if we wanted to use them, as they do not match the input requirements of the model we use in this guide. Instead, we can use the [LayoutLMv2Processor](/docs/transformers/v4.34.0/en/model_doc/layoutlmv2#transformers.LayoutLMv2Processor) on the original data for both OCR and tokenization. This way we’ll get inputs that match the model’s expectations. If you want to process images manually, check out the [`LayoutLMv2` model documentation](../model_doc/layoutlmv2) to learn what input format the model expects. ``` >>> updated_dataset = updated_dataset.remove_columns("words") >>> updated_dataset = updated_dataset.remove_columns("bounding_boxes") ``` Finally, the data exploration won’t be complete if we don’t peek at an image example. ``` >>> updated_dataset["train"][11]["image"] ``` ![DocVQA Image Example](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/docvqa_example.jpg) ## Preprocess the data The Document Question Answering task is a multimodal task, and you need to make sure that the inputs from each modality are preprocessed according to the model’s expectations. Let’s start by loading the [LayoutLMv2Processor](/docs/transformers/v4.34.0/en/model_doc/layoutlmv2#transformers.LayoutLMv2Processor), which internally combines an image processor that can handle image data and a tokenizer that can encode text data. 
``` >>> from transformers import AutoProcessor >>> processor = AutoProcessor.from_pretrained(model_checkpoint) ``` ### Preprocessing document images First, let’s prepare the document images for the model with the help of the `image_processor` from the processor. By default, the image processor resizes the images to 224x224, makes sure they have the correct order of color channels, and applies OCR with tesseract to get words and normalized bounding boxes. In this tutorial, all of these defaults are exactly what we need. Write a function that applies the default image processing to a batch of images and returns the results of OCR. ``` >>> image_processor = processor.image_processor >>> def get_ocr_words_and_boxes(examples): ... images = [image.convert("RGB") for image in examples["image"]] ... encoded_inputs = image_processor(images) ... examples["image"] = encoded_inputs.pixel_values ... examples["words"] = encoded_inputs.words ... examples["boxes"] = encoded_inputs.boxes ... return examples ``` To apply this preprocessing to the entire dataset in a fast way, use [map](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.map). ``` >>> dataset_with_ocr = updated_dataset.map(get_ocr_words_and_boxes, batched=True, batch_size=2) ``` ### Preprocessing text data Once we have applied OCR to the images, we need to encode the text part of the dataset to prepare it for the model. This involves converting the words and boxes that we got in the previous step to token-level `input_ids`, `attention_mask`, `token_type_ids` and `bbox`. For preprocessing text, we’ll need the `tokenizer` from the processor. ``` >>> tokenizer = processor.tokenizer ``` On top of the preprocessing mentioned above, we also need to add the labels for the model. For `xxxForQuestionAnswering` models in 🤗 Transformers, the labels consist of the `start_positions` and `end_positions`, indicating which token is at the start and which token is at the end of the answer. Let’s start with that. Define a helper function that can find a sublist (the answer split into words) in a larger list (the words list). This function takes two lists as input, `words_list` and `answer_list`. It iterates over `words_list` and checks whether the current word in `words_list` (`words_list[i]`) is equal to the first word of `answer_list` (`answer_list[0]`), and whether the sublist of `words_list` starting from the current word and of the same length as `answer_list` is equal to `answer_list`. If this condition is true, it means that a match has been found, and the function will record the match, its starting index (`idx`), and its ending index (`idx + len(answer_list) - 1`). If more than one match was found, the function will return only the first one. If no match is found, the function returns (`None`, 0, 0). ``` >>> def subfinder(words_list, answer_list): ... matches = [] ... start_indices = [] ... end_indices = [] ... for idx, i in enumerate(range(len(words_list))): ... if words_list[i] == answer_list[0] and words_list[i : i + len(answer_list)] == answer_list: ... matches.append(answer_list) ... start_indices.append(idx) ... end_indices.append(idx + len(answer_list) - 1) ... if matches: ... return matches[0], start_indices[0], end_indices[0] ... else: ... 
return None, 0, 0 ``` To illustrate how this function finds the position of the answer, let’s use it on an example: ``` >>> example = dataset_with_ocr["train"][1] >>> words = [word.lower() for word in example["words"]] >>> match, word_idx_start, word_idx_end = subfinder(words, example["answer"].lower().split()) >>> print("Question: ", example["question"]) >>> print("Words:", words) >>> print("Answer: ", example["answer"]) >>> print("start_index", word_idx_start) >>> print("end_index", word_idx_end) Question: Who is in cc in this letter? Words: ['wie', 'baw', 'brown', '&', 'williamson', 'tobacco', 'corporation', 'research', '&', 'development', 'internal', 'correspondence', 'to:', 'r.', 'h.', 'honeycutt', 'ce:', 't.f.', 'riehl', 'from:', '.', 'c.j.', 'cook', 'date:', 'may', '8,', '1995', 'subject:', 'review', 'of', 'existing', 'brainstorming', 'ideas/483', 'the', 'major', 'function', 'of', 'the', 'product', 'innovation', 'graup', 'is', 'to', 'develop', 'marketable', 'nove!', 'products', 'that', 'would', 'be', 'profitable', 'to', 'manufacture', 'and', 'sell.', 'novel', 'is', 'defined', 'as:', 'of', 'a', 'new', 'kind,', 'or', 'different', 'from', 'anything', 'seen', 'or', 'known', 'before.', 'innovation', 'is', 'defined', 'as:', 'something', 'new', 'or', 'different', 'introduced;', 'act', 'of', 'innovating;', 'introduction', 'of', 'new', 'things', 'or', 'methods.', 'the', 'products', 'may', 'incorporate', 'the', 'latest', 'technologies,', 'materials', 'and', 'know-how', 'available', 'to', 'give', 'then', 'a', 'unique', 'taste', 'or', 'look.', 'the', 'first', 'task', 'of', 'the', 'product', 'innovation', 'group', 'was', 'to', 'assemble,', 'review', 'and', 'categorize', 'a', 'list', 'of', 'existing', 'brainstorming', 'ideas.', 'ideas', 'were', 'grouped', 'into', 'two', 'major', 'categories', 'labeled', 'appearance', 'and', 'taste/aroma.', 'these', 'categories', 'are', 'used', 'for', 'novel', 'products', 'that', 'may', 'differ', 'from', 'a', 'visual', 'and/or', 'taste/aroma', 'point', 'of', 'view', 'compared', 'to', 'canventional', 'cigarettes.', 'other', 'categories', 'include', 'a', 'combination', 'of', 'the', 'above,', 'filters,', 'packaging', 'and', 'brand', 'extensions.', 'appearance', 'this', 'category', 'is', 'used', 'for', 'novel', 'cigarette', 'constructions', 'that', 'yield', 'visually', 'different', 'products', 'with', 'minimal', 'changes', 'in', 'smoke', 'chemistry', 'two', 'cigarettes', 'in', 'cne.', 'emulti-plug', 'te', 'build', 'yaur', 'awn', 'cigarette.', 'eswitchable', 'menthol', 'or', 'non', 'menthol', 'cigarette.', '*cigarettes', 'with', 'interspaced', 'perforations', 'to', 'enable', 'smoker', 'to', 'separate', 'unburned', 'section', 'for', 'future', 'smoking.', '«short', 'cigarette,', 'tobacco', 'section', '30', 'mm.', '«extremely', 'fast', 'buming', 'cigarette.', '«novel', 'cigarette', 'constructions', 'that', 'permit', 'a', 'significant', 'reduction', 'iretobacco', 'weight', 'while', 'maintaining', 'smoking', 'mechanics', 'and', 'visual', 'characteristics.', 'higher', 'basis', 'weight', 'paper:', 'potential', 'reduction', 'in', 'tobacco', 'weight.', '«more', 'rigid', 'tobacco', 'column;', 'stiffing', 'agent', 'for', 'tobacco;', 'e.g.', 'starch', '*colored', 'tow', 'and', 'cigarette', 'papers;', 'seasonal', 'promotions,', 'e.g.', 'pastel', 'colored', 'cigarettes', 'for', 'easter', 'or', 'in', 'an', 'ebony', 'and', 'ivory', 'brand', 'containing', 'a', 'mixture', 'of', 'all', 'black', '(black', 'paper', 'and', 'tow)', 'and', 'ail', 'white', 'cigarettes.', '499150498'] Answer: 
T.F. Riehl start_index 17 end_index 18 ``` Once examples are encoded, however, they will look like this: ``` >>> encoding = tokenizer(example["question"], example["words"], example["boxes"]) >>> tokenizer.decode(encoding["input_ids"]) [CLS] who is in cc in this letter? [SEP] wie baw brown & williamson tobacco corporation research & development ... ``` We’ll need to find the position of the answer in the encoded input. - `token_type_ids` tells us which tokens are part of the question, and which ones are part of the document’s words. - `tokenizer.cls_token_id` will help find the special token at the beginning of the input. - `word_ids` will help match the answer found in the original `words` to the same answer in the full encoded input and determine the start/end position of the answer in the encoded input. With that in mind, let’s create a function to encode a batch of examples in the dataset: ``` >>> def encode_dataset(examples, max_length=512): ... questions = examples["question"] ... words = examples["words"] ... boxes = examples["boxes"] ... answers = examples["answer"] ... ... encoding = tokenizer(questions, words, boxes, max_length=max_length, padding="max_length", truncation=True) ... start_positions = [] ... end_positions = [] ... ... for i in range(len(questions)): ... cls_index = encoding["input_ids"][i].index(tokenizer.cls_token_id) ... ... words_example = [word.lower() for word in words[i]] ... answer = answers[i] ... match, word_idx_start, word_idx_end = subfinder(words_example, answer.lower().split()) ... if match: ... ... token_type_ids = encoding["token_type_ids"][i] ... token_start_index = 0 ... while token_type_ids[token_start_index] != 1: ... token_start_index += 1 ... token_end_index = len(encoding["input_ids"][i]) - 1 ... while token_type_ids[token_end_index] != 1: ... token_end_index -= 1 ... word_ids = encoding.word_ids(i)[token_start_index : token_end_index + 1] ... start_position = cls_index ... end_position = cls_index ... ... ... for id in word_ids: ... if id == word_idx_start: ... start_position = token_start_index ... else: ... token_start_index += 1 ... ... for id in word_ids[::-1]: ... if id == word_idx_end: ... end_position = token_end_index ... else: ... token_end_index -= 1 ... start_positions.append(start_position) ... end_positions.append(end_position) ... else: ... start_positions.append(cls_index) ... end_positions.append(cls_index) ... encoding["image"] = examples["image"] ... encoding["start_positions"] = start_positions ... encoding["end_positions"] = end_positions ... return encoding ``` Now that we have this preprocessing function, we can encode the entire dataset: ``` >>> encoded_train_dataset = dataset_with_ocr["train"].map( ... encode_dataset, batched=True, batch_size=2, remove_columns=dataset_with_ocr["train"].column_names ... ) >>> encoded_test_dataset = dataset_with_ocr["test"].map( ... encode_dataset, batched=True, batch_size=2, remove_columns=dataset_with_ocr["test"].column_names ... 
) ``` Let’s check what the features of the encoded dataset look like: ``` >>> encoded_train_dataset.features {'image': Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='uint8', id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None), 'input_ids': Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None), 'token_type_ids': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None), 'attention_mask': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None), 'bbox': Sequence(feature=Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), length=-1, id=None), 'start_positions': Value(dtype='int64', id=None), 'end_positions': Value(dtype='int64', id=None)} ``` ## Evaluation Evaluation for document question answering requires a significant amount of postprocessing. To avoid taking up too much of your time, this guide skips the evaluation step. The [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) still calculates the evaluation loss during training so you’re not completely in the dark about your model’s performance. Extractive question answering is typically evaluated using F1/exact match. If you’d like to implement it yourself, check out the [Question Answering chapter](https://huggingface.co/course/chapter7/7?fw=pt#postprocessing) of the Hugging Face course for inspiration. ## Train Congratulations! You’ve successfully navigated the toughest part of this guide and now you are ready to train your own model. Training involves the following steps: - Load the model with [AutoModelForDocumentQuestionAnswering](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoModelForDocumentQuestionAnswering) using the same checkpoint as in the preprocessing. - Define your training hyperparameters in [TrainingArguments](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.TrainingArguments). - Define a function to batch examples together; here the [DefaultDataCollator](/docs/transformers/v4.34.0/en/main_classes/data_collator#transformers.DefaultDataCollator) will do just fine. - Pass the training arguments to [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) along with the model, dataset, and data collator. - Call [train()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.train) to finetune your model. ``` >>> from transformers import AutoModelForDocumentQuestionAnswering >>> model = AutoModelForDocumentQuestionAnswering.from_pretrained(model_checkpoint) ``` In the [TrainingArguments](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.TrainingArguments), use `output_dir` to specify where to save your model, and configure hyperparameters as you see fit. If you wish to share your model with the community, set `push_to_hub` to `True` (you must be signed in to Hugging Face to upload your model). In this case the `output_dir` will also be the name of the repo where your model checkpoint will be pushed. ``` >>> from transformers import TrainingArguments >>> >>> repo_id = "MariaK/layoutlmv2-base-uncased_finetuned_docvqa" >>> training_args = TrainingArguments( ... output_dir=repo_id, ... per_device_train_batch_size=4, ... num_train_epochs=20, ... save_steps=200, ... logging_steps=50, ... evaluation_strategy="steps", ... learning_rate=5e-5, ... save_total_limit=2, ... remove_unused_columns=False, ... push_to_hub=True, ... ) ``` Define a simple data collator to batch examples together. 
``` >>> from transformers import DefaultDataCollator >>> data_collator = DefaultDataCollator() ``` Finally, bring everything together, and call [train()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.train): ``` >>> from transformers import Trainer >>> trainer = Trainer( ... model=model, ... args=training_args, ... data_collator=data_collator, ... train_dataset=encoded_train_dataset, ... eval_dataset=encoded_test_dataset, ... tokenizer=processor, ... ) >>> trainer.train() ``` To add the final model to the 🤗 Hub, create a model card and call `push_to_hub`: ``` >>> trainer.create_model_card() >>> trainer.push_to_hub() ``` ## Inference Now that you have finetuned a LayoutLMv2 model and uploaded it to the 🤗 Hub, you can use it for inference. The simplest way to try out your finetuned model for inference is to use it in a [Pipeline](/docs/transformers/v4.34.0/en/main_classes/pipelines#transformers.Pipeline). Let’s take an example: ``` >>> example = dataset["test"][2] >>> question = example["query"]["en"] >>> image = example["image"] >>> print(question) >>> print(example["answers"]) 'Who is ‘presiding’ TRRF GENERAL SESSION (PART 1)?' ['TRRF Vice President', 'lee a. waller'] ``` Next, instantiate a pipeline for document question answering with your model, and pass the image + question combination to it. ``` >>> from transformers import pipeline >>> qa_pipeline = pipeline("document-question-answering", model="MariaK/layoutlmv2-base-uncased_finetuned_docvqa") >>> qa_pipeline(image, question) [{'score': 0.9949808120727539, 'answer': 'Lee A. Waller', 'start': 55, 'end': 57}] ``` You can also manually replicate the results of the pipeline if you’d like: 1. Take an image and a question, prepare them for the model using the processor from your model. 2. Forward the result of preprocessing through the model. 3. The model returns `start_logits` and `end_logits`, which indicate which token is at the start of the answer and which token is at the end of the answer. Both have shape `(batch_size, sequence_length)`. 4. Take an argmax on the last dimension of both the `start_logits` and `end_logits` to get the predicted `start_idx` and `end_idx`. 5. Decode the answer with the tokenizer. ``` >>> import torch >>> from transformers import AutoProcessor >>> from transformers import AutoModelForDocumentQuestionAnswering >>> processor = AutoProcessor.from_pretrained("MariaK/layoutlmv2-base-uncased_finetuned_docvqa") >>> model = AutoModelForDocumentQuestionAnswering.from_pretrained("MariaK/layoutlmv2-base-uncased_finetuned_docvqa") >>> with torch.no_grad(): ... encoding = processor(image.convert("RGB"), question, return_tensors="pt") ... outputs = model(**encoding) ... start_logits = outputs.start_logits ... end_logits = outputs.end_logits ... predicted_start_idx = start_logits.argmax(-1).item() ... predicted_end_idx = end_logits.argmax(-1).item() >>> processor.tokenizer.decode(encoding.input_ids.squeeze()[predicted_start_idx : predicted_end_idx + 1]) 'lee a. waller' ```
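As noted in the Evaluation section above, this guide skips metric computation, and extractive question answering is typically scored with exact match and F1. Below is a minimal, illustrative sketch of those two metrics computed over predicted and reference answer strings. The helper names (`normalize_answer`, `exact_match`, `token_f1`) are assumptions made for this example and are not part of 🤗 Transformers; treat this as one reasonable way to score predictions, not the official evaluation code.

```
>>> import collections
>>> import re
>>> import string

>>> def normalize_answer(text):
...     # Lowercase, drop punctuation and articles, collapse whitespace (SQuAD-style normalization).
...     text = text.lower()
...     text = "".join(ch for ch in text if ch not in set(string.punctuation))
...     text = re.sub(r"\b(a|an|the)\b", " ", text)
...     return " ".join(text.split())

>>> def exact_match(prediction, reference):
...     # 1 if the normalized strings are identical, 0 otherwise.
...     return int(normalize_answer(prediction) == normalize_answer(reference))

>>> def token_f1(prediction, reference):
...     # Token-level F1 between the normalized prediction and reference.
...     pred_tokens = normalize_answer(prediction).split()
...     ref_tokens = normalize_answer(reference).split()
...     common = collections.Counter(pred_tokens) & collections.Counter(ref_tokens)
...     num_same = sum(common.values())
...     if num_same == 0:
...         return 0.0
...     precision = num_same / len(pred_tokens)
...     recall = num_same / len(ref_tokens)
...     return 2 * precision * recall / (precision + recall)

>>> exact_match("Lee A. Waller", "lee a. waller")
1
>>> round(token_f1("TRRF Vice President", "Vice President"), 2)
0.8
```

When an example provides several acceptable answers, as the `answers` field in this dataset does, a common convention is to take the maximum score over all references for each prediction and then average the scores over the dataset.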
model_doc/wavlm&quot;},{&quot;title&quot;:&quot;Whisper&quot;,&quot;id&quot;:&quot;model_doc/whisper&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/whisper&quot;},{&quot;title&quot;:&quot;XLS-R&quot;,&quot;id&quot;:&quot;model_doc/xls_r&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xls_r&quot;},{&quot;title&quot;:&quot;XLSR-Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/xlsr_wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2&quot;}]},{&quot;title&quot;:&quot;Multimodal models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALIGN&quot;,&quot;id&quot;:&quot;model_doc/align&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/align&quot;},{&quot;title&quot;:&quot;AltCLIP&quot;,&quot;id&quot;:&quot;model_doc/altclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/altclip&quot;},{&quot;title&quot;:&quot;BLIP&quot;,&quot;id&quot;:&quot;model_doc/blip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip&quot;},{&quot;title&quot;:&quot;BLIP-2&quot;,&quot;id&quot;:&quot;model_doc/blip-2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip-2&quot;},{&quot;title&quot;:&quot;BridgeTower&quot;,&quot;id&quot;:&quot;model_doc/bridgetower&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bridgetower&quot;},{&quot;title&quot;:&quot;BROS&quot;,&quot;id&quot;:&quot;model_doc/bros&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bros&quot;},{&quot;title&quot;:&quot;Chinese-CLIP&quot;,&quot;id&quot;:&quot;model_doc/chinese_clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/chinese_clip&quot;},{&quot;title&quot;:&quot;CLIP&quot;,&quot;id&quot;:&quot;model_doc/clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clip&quot;},{&quot;title&quot;:&quot;CLIPSeg&quot;,&quot;id&quot;:&quot;model_doc/clipseg&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clipseg&quot;},{&quot;title&quot;:&quot;Data2Vec&quot;,&quot;id&quot;:&quot;model_doc/data2vec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/data2vec&quot;},{&quot;title&quot;:&quot;DePlot&quot;,&quot;id&quot;:&quot;model_doc/deplot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deplot&quot;},{&quot;title&quot;:&quot;Donut&quot;,&quot;id&quot;:&quot;model_doc/donut&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/donut&quot;},{&quot;title&quot;:&quot;FLAVA&quot;,&quot;id&quot;:&quot;model_doc/flava&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flava&quot;},{&quot;title&quot;:&quot;GIT&quot;,&quot;id&quot;:&quot;model_doc/git&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/git&quot;},{&quot;title&quot;:&quot;GroupViT&quot;,&quot;id&quot;:&quot;model_doc/groupvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/groupvit&quot;},{&quot;title&quot;:&quot;IDEFICS&quot;,&quot;id&quot;:&quot;model_doc/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/idefics&quot;},{&quot;title&quot;:&quot;InstructBLIP&quot;,&quot;id&quot;:&quot;model_doc/instructblip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/instructblip&quot;},{&quot;title&quot;:&quot;LayoutLM&quot;,&quot;id&quot;:&quot;model_doc/layoutlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlm&quot;},{&quot;title&quot;:&quot;LayoutLMV2&quot;,&quot;i
d&quot;:&quot;model_doc/layoutlmv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv2&quot;},{&quot;title&quot;:&quot;LayoutLMV3&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv3&quot;},{&quot;title&quot;:&quot;LayoutXLM&quot;,&quot;id&quot;:&quot;model_doc/layoutxlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutxlm&quot;},{&quot;title&quot;:&quot;LiLT&quot;,&quot;id&quot;:&quot;model_doc/lilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lilt&quot;},{&quot;title&quot;:&quot;LXMERT&quot;,&quot;id&quot;:&quot;model_doc/lxmert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lxmert&quot;},{&quot;title&quot;:&quot;MatCha&quot;,&quot;id&quot;:&quot;model_doc/matcha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/matcha&quot;},{&quot;title&quot;:&quot;MGP-STR&quot;,&quot;id&quot;:&quot;model_doc/mgp-str&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mgp-str&quot;},{&quot;title&quot;:&quot;Nougat&quot;,&quot;id&quot;:&quot;model_doc/nougat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nougat&quot;},{&quot;title&quot;:&quot;OneFormer&quot;,&quot;id&quot;:&quot;model_doc/oneformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/oneformer&quot;},{&quot;title&quot;:&quot;OWL-ViT&quot;,&quot;id&quot;:&quot;model_doc/owlvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/owlvit&quot;},{&quot;title&quot;:&quot;Perceiver&quot;,&quot;id&quot;:&quot;model_doc/perceiver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/perceiver&quot;},{&quot;title&quot;:&quot;Pix2Struct&quot;,&quot;id&quot;:&quot;model_doc/pix2struct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pix2struct&quot;},{&quot;title&quot;:&quot;Segment Anything&quot;,&quot;id&quot;:&quot;model_doc/sam&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sam&quot;},{&quot;title&quot;:&quot;Speech Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/speech-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder&quot;},{&quot;title&quot;:&quot;TAPAS&quot;,&quot;id&quot;:&quot;model_doc/tapas&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapas&quot;},{&quot;title&quot;:&quot;TrOCR&quot;,&quot;id&quot;:&quot;model_doc/trocr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trocr&quot;},{&quot;title&quot;:&quot;TVLT&quot;,&quot;id&quot;:&quot;model_doc/tvlt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tvlt&quot;},{&quot;title&quot;:&quot;ViLT&quot;,&quot;id&quot;:&quot;model_doc/vilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vilt&quot;},{&quot;title&quot;:&quot;Vision Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/vision-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder&quot;},{&quot;title&quot;:&quot;Vision Text Dual 
Encoder&quot;,&quot;id&quot;:&quot;model_doc/vision-text-dual-encoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder&quot;},{&quot;title&quot;:&quot;VisualBERT&quot;,&quot;id&quot;:&quot;model_doc/visual_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/visual_bert&quot;},{&quot;title&quot;:&quot;X-CLIP&quot;,&quot;id&quot;:&quot;model_doc/xclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xclip&quot;}]},{&quot;title&quot;:&quot;Reinforcement learning models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Decision Transformer&quot;,&quot;id&quot;:&quot;model_doc/decision_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/decision_transformer&quot;},{&quot;title&quot;:&quot;Trajectory Transformer&quot;,&quot;id&quot;:&quot;model_doc/trajectory_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer&quot;}]},{&quot;title&quot;:&quot;Time series models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Autoformer&quot;,&quot;id&quot;:&quot;model_doc/autoformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/autoformer&quot;},{&quot;title&quot;:&quot;Informer&quot;,&quot;id&quot;:&quot;model_doc/informer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/informer&quot;},{&quot;title&quot;:&quot;Time Series Transformer&quot;,&quot;id&quot;:&quot;model_doc/time_series_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/time_series_transformer&quot;}]},{&quot;title&quot;:&quot;Graph models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Graphormer&quot;,&quot;id&quot;:&quot;model_doc/graphormer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/graphormer&quot;}]}]},{&quot;title&quot;:&quot;Internal Helpers&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Custom Layers and Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/modeling_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/modeling_utils&quot;},{&quot;title&quot;:&quot;Utilities for pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/pipelines_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/pipelines_utils&quot;},{&quot;title&quot;:&quot;Utilities for Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/tokenization_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/tokenization_utils&quot;},{&quot;title&quot;:&quot;Utilities for Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/trainer_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/trainer_utils&quot;},{&quot;title&quot;:&quot;Utilities for Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/generation_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/generation_utils&quot;},{&quot;title&quot;:&quot;Utilities for Image Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/image_processing_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/image_processing_utils&quot;},{&quot;title&quot;:&quot;Utilities for Audio 
processing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/audio_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/audio_utils&quot;},{&quot;title&quot;:&quot;General Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/file_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/file_utils&quot;},{&quot;title&quot;:&quot;Utilities for Time Series&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/time_series_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/time_series_utils&quot;}]}]}],&quot;chapterId&quot;:&quot;tasks/document_question_answering&quot;,&quot;docType&quot;:&quot;docs&quot;,&quot;isLoggedIn&quot;:false,&quot;lang&quot;:&quot;en&quot;,&quot;langs&quot;:[&quot;de&quot;,&quot;en&quot;,&quot;es&quot;,&quot;fr&quot;,&quot;it&quot;,&quot;ko&quot;,&quot;pt&quot;,&quot;zh&quot;],&quot;library&quot;:&quot;transformers&quot;,&quot;theme&quot;:&quot;light&quot;,&quot;version&quot;:&quot;v4.34.0&quot;,&quot;versions&quot;:[{&quot;version&quot;:&quot;main&quot;},{&quot;version&quot;:&quot;v4.34.0&quot;},{&quot;version&quot;:&quot;v4.33.3&quot;},{&quot;version&quot;:&quot;v4.33.2&quot;},{&quot;version&quot;:&quot;v4.33.0&quot;},{&quot;version&quot;:&quot;v4.32.1&quot;},{&quot;version&quot;:&quot;v4.32.0&quot;},{&quot;version&quot;:&quot;v4.31.0&quot;},{&quot;version&quot;:&quot;v4.30.0&quot;},{&quot;version&quot;:&quot;v4.29.1&quot;},{&quot;version&quot;:&quot;v4.29.0&quot;},{&quot;version&quot;:&quot;v4.28.1&quot;},{&quot;version&quot;:&quot;v4.28.0&quot;},{&quot;version&quot;:&quot;v4.27.2&quot;},{&quot;version&quot;:&quot;v4.27.1&quot;},{&quot;version&quot;:&quot;v4.27.0&quot;},{&quot;version&quot;:&quot;v4.26.1&quot;},{&quot;version&quot;:&quot;v4.26.0&quot;},{&quot;version&quot;:&quot;v4.25.1&quot;},{&quot;version&quot;:&quot;v4.24.0&quot;},{&quot;version&quot;:&quot;v4.23.1&quot;},{&quot;version&quot;:&quot;v4.23.0&quot;},{&quot;version&quot;:&quot;v4.22.2&quot;},{&quot;version&quot;:&quot;v4.22.1&quot;},{&quot;version&quot;:&quot;v4.22.0&quot;},{&quot;version&quot;:&quot;v4.21.3&quot;},{&quot;version&quot;:&quot;v4.21.2&quot;},{&quot;version&quot;:&quot;v4.21.1&quot;},{&quot;version&quot;:&quot;v4.21.0&quot;},{&quot;version&quot;:&quot;v4.20.1&quot;},{&quot;version&quot;:&quot;v4.20.0&quot;},{&quot;version&quot;:&quot;v4.19.4&quot;},{&quot;version&quot;:&quot;v4.19.3&quot;},{&quot;version&quot;:&quot;v4.19.2&quot;},{&quot;version&quot;:&quot;v4.19.0&quot;},{&quot;version&quot;:&quot;v4.18.0&quot;},{&quot;version&quot;:&quot;v4.17.0&quot;},{&quot;version&quot;:&quot;v4.16.2&quot;},{&quot;version&quot;:&quot;v4.16.1&quot;},{&quot;version&quot;:&quot;v4.16.0&quot;},{&quot;version&quot;:&quot;v4.15.0&quot;},{&quot;version&quot;:&quot;v4.14.1&quot;},{&quot;version&quot;:&quot;v4.13.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.5&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.4&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.10.1&quot;},{&quot;sphinx&quot;:true,&quot;versio
n&quot;:&quot;v4.10.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.10.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.0.0&quot;},{&quot;version
&quot;:&quot;doc-builder-html&quot;}],&quot;title&quot;:&quot;Document Question Answering&quot;}" data-target="SideMenu"> <div class="z-2 w-full flex-none lg:block lg:h-screen lg:w-[270px] 2xl:w-[300px] false"><div class="shadow-alternate flex h-16 w-full items-center rounded-b-xl border-b bg-white text-lg leading-tight lg:hidden"><div class="flex flex-1 cursor-pointer flex-col justify-center self-stretch pl-6"><p class="text-sm text-gray-400 first-letter:capitalize">Transformers documentation</p> <div class="flex items-center"><p class="font-semibold">Document Question Answering</p> <svg class="text-xl false" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></div></div> <button class="hover:shadow-alternate group ml-auto mr-6 inline-flex flex-none cursor-pointer rounded-xl border p-2"><svg class="text-gray-500 group-hover:text-gray-700" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg></button></div> <div class="hidden h-32 flex-col justify-between border-r border-b bg-white bg-gradient-to-r p-4 lg:flex from-orange-50 to-white dark:from-gray-900 dark:to-gray-950"><div class="relative "><button class=" " type="button"><h1 class="flex items-center text-lg font-bold leading-tight first-letter:capitalize"><div class="mr-1.5 h-1.5 w-1.5 rounded-full bg-orange-500 flex-none"></div> Transformers <span><svg class="opacity-70 " xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></span></h1> </button> </div> <button class="shadow-alternate flex w-full items-center rounded-full border bg-white px-2 py-1 text-left text-sm text-gray-400 ring-indigo-200 hover:bg-indigo-50 hover:ring-2 dark:border-gray-700 dark:ring-yellow-600 dark:hover:bg-gray-900 dark:hover:text-yellow-500"><svg class="flex-none mr-1.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg> <div>Search documentation</div> <span class="ml-auto rounded border border-gray-200 bg-gray-100 px-0.5 text-xs dark:border-gray-800 dark:bg-gray-800"><kbd class="font-sans">⌘K</kbd></span></button> <div class="flex items-center"><select class="form-input mr-1 !mt-0 !w-20 rounded !border border-gray-200 p-1 text-xs uppercase dark:!text-gray-400"><option value="0">main</option><option value="1">v4.34.0</option><option value="2">v4.33.3</option><option value="3">v4.32.1</option><option value="4">v4.31.0</option><option value="5">v4.30.0</option><option value="6">v4.29.1</option><option value="7">v4.28.1</option><option value="8">v4.27.2</option><option value="9">v4.26.1</option><option 
value="10">v4.25.1</option><option value="11">v4.24.0</option><option value="12">v4.23.1</option><option value="13">v4.22.2</option><option value="14">v4.21.3</option><option value="15">v4.20.1</option><option value="16">v4.19.4</option><option value="17">v4.18.0</option><option value="18">v4.17.0</option><option value="19">v4.16.2</option><option value="20">v4.15.0</option><option value="21">v4.14.1</option><option value="22">v4.13.0</option><option value="23">v4.12.5</option><option value="24">v4.11.3</option><option value="25">v4.10.1</option><option value="26">v4.9.2</option><option value="27">v4.8.2</option><option value="28">v4.7.0</option><option value="29">v4.6.0</option><option value="30">v4.5.1</option><option value="31">v4.4.2</option><option value="32">v4.3.3</option><option value="33">v4.2.2</option><option value="34">v4.1.1</option><option value="35">v4.0.1</option><option value="36">v3.5.1</option><option value="37">v3.4.0</option><option value="38">v3.3.1</option><option value="39">v3.2.0</option><option value="40">v3.1.0</option><option value="41">v3.0.2</option><option value="42">v2.11.0</option><option value="43">v2.10.0</option><option value="44">v2.9.1</option><option value="45">v2.8.0</option><option value="46">v2.7.0</option><option value="47">v2.6.0</option><option value="48">v2.5.1</option><option value="49">v2.4.1</option><option value="50">v2.3.0</option><option value="51">v2.2.2</option><option value="52">v2.1.1</option><option value="53">v2.0.0</option><option value="54">v1.2.0</option><option value="55">v1.1.0</option><option value="56">v1.0.0</option><option value="57">doc-builder-html</option></select> <select class="form-input mr-1 rounded border-gray-200 p-1 text-xs dark:!text-gray-400 !w-12 !mt-0 !border"><option value="de">DE</option><option value="en">EN</option><option value="es">ES</option><option value="fr">FR</option><option value="it">IT</option><option value="ko">KO</option><option value="pt">PT</option><option value="zh">ZH</option></select> <div class="relative inline-block"><button class="rounded-full border border-gray-100 py-1 pl-2 pr-0.5 flex items-center text-sm text-gray-500 bg-white hover:bg-yellow-50 hover:border-yellow-200 dark:hover:bg-gray-800 dark:hover:border-gray-950 " type="button"><svg class="mr-1.5 text-yellow-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24" fill="currentColor"><path d="M6.05 4.14l-.39-.39a.993.993 0 0 0-1.4 0l-.01.01a.984.984 0 0 0 0 1.4l.39.39c.39.39 1.01.39 1.4 0l.01-.01a.984.984 0 0 0 0-1.4zM3.01 10.5H1.99c-.55 0-.99.44-.99.99v.01c0 .55.44.99.99.99H3c.56.01 1-.43 1-.98v-.01c0-.56-.44-1-.99-1zm9-9.95H12c-.56 0-1 .44-1 .99v.96c0 .55.44.99.99.99H12c.56.01 1-.43 1-.98v-.97c0-.55-.44-.99-.99-.99zm7.74 3.21c-.39-.39-1.02-.39-1.41-.01l-.39.39a.984.984 0 0 0 0 1.4l.01.01c.39.39 1.02.39 1.4 0l.39-.39a.984.984 0 0 0 0-1.4zm-1.81 15.1l.39.39a.996.996 0 1 0 1.41-1.41l-.39-.39a.993.993 0 0 0-1.4 0c-.4.4-.4 1.02-.01 1.41zM20 11.49v.01c0 .55.44.99.99.99H22c.55 0 .99-.44.99-.99v-.01c0-.55-.44-.99-.99-.99h-1.01c-.55 0-.99.44-.99.99zM12 5.5c-3.31 0-6 2.69-6 6s2.69 6 6 6s6-2.69 6-6s-2.69-6-6-6zm-.01 16.95H12c.55 0 .99-.44.99-.99v-.96c0-.55-.44-.99-.99-.99h-.01c-.55 0-.99.44-.99.99v.96c0 .55.44.99.99.99zm-7.74-3.21c.39.39 1.02.39 1.41 0l.39-.39a.993.993 0 0 0 0-1.4l-.01-.01a.996.996 0 0 0-1.41 0l-.39.39c-.38.4-.38 1.02.01 1.41z"></path></svg> </button> </div> <a 
href="https://github.com/huggingface/transformers" class="group ml-auto text-xs text-gray-500 hover:text-gray-700 hover:underline dark:hover:text-gray-300"><svg class="inline-block text-gray-500 group-hover:text-gray-700 dark:group-hover:text-gray-300 mr-1.5 -mt-1 w-4 h-4" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1.03em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 250"><path d="M128.001 0C57.317 0 0 57.307 0 128.001c0 56.554 36.676 104.535 87.535 121.46c6.397 1.185 8.746-2.777 8.746-6.158c0-3.052-.12-13.135-.174-23.83c-35.61 7.742-43.124-15.103-43.124-15.103c-5.823-14.795-14.213-18.73-14.213-18.73c-11.613-7.944.876-7.78.876-7.78c12.853.902 19.621 13.19 19.621 13.19c11.417 19.568 29.945 13.911 37.249 10.64c1.149-8.272 4.466-13.92 8.127-17.116c-28.431-3.236-58.318-14.212-58.318-63.258c0-13.975 5-25.394 13.188-34.358c-1.329-3.224-5.71-16.242 1.24-33.874c0 0 10.749-3.44 35.21 13.121c10.21-2.836 21.16-4.258 32.038-4.307c10.878.049 21.837 1.47 32.066 4.307c24.431-16.56 35.165-13.12 35.165-13.12c6.967 17.63 2.584 30.65 1.255 33.873c8.207 8.964 13.173 20.383 13.173 34.358c0 49.163-29.944 59.988-58.447 63.157c4.591 3.972 8.682 11.762 8.682 23.704c0 17.126-.148 30.91-.148 35.126c0 3.407 2.304 7.398 8.792 6.14C219.37 232.5 256 184.537 256 128.002C256 57.307 198.691 0 128.001 0zm-80.06 182.34c-.282.636-1.283.827-2.194.39c-.929-.417-1.45-1.284-1.15-1.922c.276-.655 1.279-.838 2.205-.399c.93.418 1.46 1.293 1.139 1.931zm6.296 5.618c-.61.566-1.804.303-2.614-.591c-.837-.892-.994-2.086-.375-2.66c.63-.566 1.787-.301 2.626.591c.838.903 1 2.088.363 2.66zm4.32 7.188c-.785.545-2.067.034-2.86-1.104c-.784-1.138-.784-2.503.017-3.05c.795-.547 2.058-.055 2.861 1.075c.782 1.157.782 2.522-.019 3.08zm7.304 8.325c-.701.774-2.196.566-3.29-.49c-1.119-1.032-1.43-2.496-.726-3.27c.71-.776 2.213-.558 3.315.49c1.11 1.03 1.45 2.505.701 3.27zm9.442 2.81c-.31 1.003-1.75 1.459-3.199 1.033c-1.448-.439-2.395-1.613-2.103-2.626c.301-1.01 1.747-1.484 3.207-1.028c1.446.436 2.396 1.602 2.095 2.622zm10.744 1.193c.036 1.055-1.193 1.93-2.715 1.95c-1.53.034-2.769-.82-2.786-1.86c0-1.065 1.202-1.932 2.733-1.958c1.522-.03 2.768.818 2.768 1.868zm10.555-.405c.182 1.03-.875 2.088-2.387 2.37c-1.485.271-2.861-.365-3.05-1.386c-.184-1.056.893-2.114 2.376-2.387c1.514-.263 2.868.356 3.061 1.403z" fill="currentColor"></path></svg> 112,792</a></div></div> <nav class="top-32 hidden lg:flex absolute bottom-0 left-0 w-full flex-col overflow-y-auto border-r px-4 pt-3 pb-16 text-[0.95rem] lg:w-[270px] 2xl:w-[300px]"> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Get started</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/index">🤗 Transformers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/quicktour">Quick tour </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 
first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/installation">Installation </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Tutorials</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pipeline_tutorial">Run inference with pipelines </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/autoclass_tutorial">Write portable code with AutoClass </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/preprocessing">Preprocess data </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/training">Fine-tune a pretrained model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/run_scripts">Train with a script </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/accelerate">Set up distributed training with 🤗 Accelerate </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/peft">Load and train adapters with 🤗 PEFT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/model_sharing">Share your model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/transformers_agents">Agents </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/llm_tutorial">Generation with LLMs </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Task Guides</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 
hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Natural Language Processing</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Computer Vision</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/tasks/image_captioning">Image captioning </a><a data-sveltekit-reload="" class="rounded-xl bg-gradient-to-br from-black to-gray-900 py-1 pr-2 pl-2 text-white first:mt-1 last:mb-4 dark:from-gray-800 dark:to-gray-900 ml-4" href="/docs/transformers/v4.34.0/en/tasks/document_question_answering">Document Question Answering </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/tasks/visual_question_answering">Visual Question Answering </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/tasks/text-to-speech">Text to speech </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Generation</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Prompting</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Developer 
guides</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/fast_tokenizers">Use fast tokenizers from 🤗 Tokenizers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/multilingual">Run inference with multilingual models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/create_a_model">Use model-specific APIs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/custom_models">Share a custom model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/chat_templating">Templates for chat models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/sagemaker">Run training on Amazon SageMaker </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/serialization">Export to ONNX </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tflite">Export to TFLite </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/torchscript">Export to TorchScript </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/benchmarks">Benchmarks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/notebooks">Notebooks with examples </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/community">Community resources </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/custom_tools">Custom Tools and Prompts </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/troubleshooting">Troubleshoot </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 
hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Performance and scalability</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/performance">Overview </a><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Efficient training techniques</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_gpu_one">Methods and tools for efficient training on a single GPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_gpu_many">Multiple GPUs and parallelism </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_cpu">Efficient training on CPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_cpu_many">Distributed CPU training </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_tpu">Training on TPUs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_tpu_tf">Training on TPU with TensorFlow </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_special">Training on Specialized Hardware </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_hardware">Custom hardware for training </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/hpo_train">Hyperparameter Search using Trainer API </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 
after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Optimizing inference</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_cpu">Inference on CPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_gpu_one">Inference on one GPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_gpu_many">Inference on many GPUs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_special">Inference on Specialized Hardware </a> </div><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/big_models">Instantiating a big model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/debugging">Troubleshooting </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tf_xla">XLA Integration for TensorFlow Models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/perf_torch_compile">Optimize inference using `torch.compile()` </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Contribute</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/contributing">How to contribute to transformers? </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/add_new_model">How to add a model to 🤗 Transformers? </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/add_tensorflow_model">How to convert a 🤗 Transformers model to TensorFlow? 
</a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/add_new_pipeline">How to add a pipeline to 🤗 Transformers? </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/testing">Testing </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pr_checks">Checks on a Pull Request </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Conceptual guides</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/philosophy">Philosophy </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/glossary">Glossary </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/task_summary">What 🤗 Transformers can do </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tasks_explained">How 🤗 Transformers solve tasks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/model_summary">The Transformer model family </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tokenizer_summary">Summary of the tokenizers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/attention">Attention mechanisms </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pad_truncation">Padding and truncation </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/bertology">BERTology </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/perplexity">Perplexity of fixed-length models 
</a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pipeline_webserver">Pipelines for webserver inference </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/model_memory_anatomy">Model training anatomy </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>API</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Main Classes</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/agent">Agents and Tools </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/model_doc/auto">Auto Classes </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/callback">Callbacks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/configuration">Configuration </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/data_collator">Data Collator </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/keras_callbacks">Keras callbacks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/logging">Logging </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/model">Models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/text_generation">Text Generation </a><a data-sveltekit-reload="" 
class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/onnx">ONNX </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/optimizer_schedules">Optimization </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/output">Model outputs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/pipelines">Pipelines </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/processors">Processors </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/quantization">Quantization </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/tokenizer">Tokenizer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/trainer">Trainer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/deepspeed">DeepSpeed Integration </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/feature_extractor">Feature Extractor </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/image_processor">Image Processor </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Models</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Text models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 
dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Vision models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Reinforcement learning models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Time series models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Graph models</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Internal Helpers</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/modeling_utils">Custom Layers and Utilities </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/pipelines_utils">Utilities for pipelines </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/tokenization_utils">Utilities for Tokenizers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/trainer_utils">Utilities for Trainer </a><a 
data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/generation_utils">Utilities for Generation </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/image_processing_utils">Utilities for Image Processors </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/audio_utils">Utilities for Audio processing </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/file_utils">General Utilities </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/time_series_utils">Utilities for Time Series </a> </div> </div></nav></div></div></div> <div class="z-1 min-w-0 flex-1"> <div class="px-6 pt-6 md:px-12 md:pt-16 md:pb-16"><div class="max-w-4xl mx-auto mb-10"><div class="relative overflow-hidden rounded-xl bg-gradient-to-br from-orange-300/10 py-5 px-4 ring-1 ring-orange-100/70 md:px-6 md:py-8"><img alt="Hugging Face's logo" class="absolute -right-6 -bottom-6 w-28 -rotate-45 md:hidden" src="/front/assets/huggingface_logo-noborder.svg"> <div class="mb-2 text-2xl font-bold dark:text-gray-200 md:mb-0">Join the Hugging Face community</div> <p class="mb-4 text-lg text-gray-400 dark:text-gray-300 md:mb-8">and get access to the augmented documentation experience </p> <div class="mb-8 hidden space-y-4 md:block xl:flex xl:space-y-0 xl:space-x-6"><div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-indigo-100 to-indigo-100/20 dark:to-indigo-100"><svg class="text-indigo-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Collaborate on models, datasets and Spaces </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-orange-100 to-orange-100/20 dark:to-orange-50"><svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" class="text-xl text-yellow-400" width="1em" height="1em" 
preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M11 15H6l7-14v8h5l-7 14v-8z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Faster examples with accelerated inference </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-gray-500/10 to-gray-500/5"><svg class="text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M14.9804 3C14.9217 3.0002 14.8631 3.00555 14.8054 3.016C11.622 3.58252 8.76073 5.30669 6.77248 7.85653C4.78422 10.4064 3.80955 13.6016 4.03612 16.8271C4.26268 20.0525 5.67447 23.0801 7.99967 25.327C10.3249 27.5738 13.3991 28.8811 16.6304 28.997C16.7944 29.003 16.9584 28.997 17.1204 28.997C19.2193 28.9984 21.2877 28.4943 23.1507 27.5274C25.0137 26.5605 26.6164 25.1592 27.8234 23.442C27.9212 23.294 27.9783 23.1229 27.9889 22.9458C27.9995 22.7687 27.9633 22.592 27.884 22.4333C27.8046 22.2747 27.6848 22.1397 27.5367 22.0421C27.3887 21.9444 27.2175 21.8875 27.0404 21.877C25.0426 21.7017 23.112 21.0693 21.3976 20.0288C19.6832 18.9884 18.231 17.5676 17.1533 15.8764C16.0756 14.1852 15.4011 12.2688 15.1822 10.2754C14.9632 8.28193 15.2055 6.26484 15.8904 4.38C15.9486 4.22913 15.97 4.06652 15.9527 3.90572C15.9354 3.74492 15.8799 3.59059 15.7909 3.45557C15.7019 3.32055 15.5819 3.20877 15.4409 3.12952C15.2999 3.05028 15.142 3.00587 14.9804 3Z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Switch between documentation themes </div></div></div> <div class="flex items-center space-x-2.5"><a href="/join"><button class="rounded-lg bg-white bg-gradient-to-br from-gray-100/20 to-gray-200/60 py-1.5 px-5 font-semibold text-gray-700 shadow-sm ring-1 ring-gray-300/60 hover:to-gray-100/70 hover:ring-gray-300/30 active:shadow-inner">Sign Up</button></a> <p class="text-gray-500 dark:text-gray-300">to get started</p></div></div></div> <div class="prose-doc prose relative mx-auto max-w-4xl break-words"> <p></p> <h1 class="relative group"><a id="document-question-answering" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#document-question-answering"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-ypkay6">Document Question Answering</span></h1> <div class="flex space-x-1 absolute z-10 right-0 top-0"> <div class="relative colab-dropdown "><button class=" " type="button"><img alt="Open In 
Colab" class="!m-0" src="https://colab.research.google.com/assets/colab-badge.svg"></button> </div> <div class="relative colab-dropdown "><button class=" " type="button"><img alt="Open In Studio Lab" class="!m-0" src="https://studiolab.sagemaker.aws/studiolab.svg"></button> </div></div> <p data-svelte-h="svelte-1c1m6de">Document Question Answering, also referred to as Document Visual Question Answering, is a task that involves providing answers to questions posed about document images. The input to models supporting this task is typically a combination of an image and a question, and the output is an answer expressed in natural language. These models utilize multiple modalities, including text, the positions of words (bounding boxes), and the image itself.</p> <p data-svelte-h="svelte-ku8orh">This guide illustrates how to:</p> <ul data-svelte-h="svelte-1g8eree"><li>Fine-tune <a href="../model_doc/layoutlmv2">LayoutLMv2</a> on the <a href="https://huggingface.co/datasets/nielsr/docvqa_1200_examples_donut" rel="nofollow">DocVQA dataset</a>.</li> <li>Use your fine-tuned model for inference.</li></ul> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-6l5sa6">The task illustrated in this tutorial is supported by the following model architectures:</p> <p data-svelte-h="svelte-4nltzx"><a href="../model_doc/layoutlm">LayoutLM</a>, <a href="../model_doc/layoutlmv2">LayoutLMv2</a>, <a href="../model_doc/layoutlmv3">LayoutLMv3</a></p></div> <p data-svelte-h="svelte-1svbrv5">LayoutLMv2 solves the document question-answering task by adding a question-answering head on top of the final hidden states of the tokens, to predict the positions of the start and end tokens of the answer. In other words, the problem is treated as extractive question answering: given the context, extract which piece of information answers the question. The context comes from the output of an OCR engine, here it is Google’s Tesseract.</p> <p data-svelte-h="svelte-17fjxql">Before you begin, make sure you have all the necessary libraries installed. 
Before you begin, make sure you have all the necessary libraries installed. LayoutLMv2 depends on detectron2, torchvision and tesseract.

```bash
pip install -q transformers datasets
```

```bash
pip install 'git+https://github.com/facebookresearch/detectron2.git'
pip install torchvision
```
translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class="">sudo apt install tesseract-ocr pip install -q pytesseract</pre></div> <p data-svelte-h="svelte-hsz112">Once you have installed all of the dependencies, restart your runtime.</p> <p data-svelte-h="svelte-1yqpblu">We encourage you to share your model with the community. Log in to your Hugging Face account to upload it to the 🤗 Hub. When prompted, enter your token to log in:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> huggingface_hub <span class="hljs-keyword">import</span> notebook_login <span class="hljs-meta">&gt;&gt;&gt; </span>notebook_login()</pre></div> <p data-svelte-h="svelte-1us2g34">Let’s define some global variables.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>model_checkpoint = <span class="hljs-string">"microsoft/layoutlmv2-base-uncased"</span> <span class="hljs-meta">&gt;&gt;&gt; </span>batch_size = <span class="hljs-number">4</span></pre></div> <h2 
class="relative group"><a id="load-the-data" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#load-the-data"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1sbjexa">Load the data</span></h2> <p data-svelte-h="svelte-xkaeyi">In this guide we use a small sample of preprocessed DocVQA that you can find on 🤗 Hub. If you’d like to use the full DocVQA dataset, you can register and download it on <a href="https://rrc.cvc.uab.es/?ch=17" rel="nofollow">DocVQA homepage</a>. If you do so, to proceed with this guide check out <a href="https://huggingface.co/docs/datasets/loading#local-and-remote-files" rel="nofollow">how to load files into a 🤗 dataset</a>.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> datasets <span class="hljs-keyword">import</span> load_dataset <span class="hljs-meta">&gt;&gt;&gt; </span>dataset = load_dataset(<span class="hljs-string">"nielsr/docvqa_1200_examples"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>dataset DatasetDict({ train: Dataset({ features: [<span class="hljs-string">'id'</span>, <span class="hljs-string">'image'</span>, <span class="hljs-string">'query'</span>, <span class="hljs-string">'answers'</span>, <span class="hljs-string">'words'</span>, <span class="hljs-string">'bounding_boxes'</span>, <span class="hljs-string">'answer'</span>], num_rows: <span class="hljs-number">1000</span> }) test: Dataset({ features: [<span class="hljs-string">'id'</span>, <span class="hljs-string">'image'</span>, 
<span class="hljs-string">'query'</span>, <span class="hljs-string">'answers'</span>, <span class="hljs-string">'words'</span>, <span class="hljs-string">'bounding_boxes'</span>, <span class="hljs-string">'answer'</span>], num_rows: <span class="hljs-number">200</span> }) })</pre></div> <p data-svelte-h="svelte-18ggx10">As you can see, the dataset is split into train and test sets already. Take a look at a random example to familiarize yourself with the features.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>dataset[<span class="hljs-string">"train"</span>].features</pre></div> <p data-svelte-h="svelte-1fi388d">Here’s what the individual fields represent:</p> <ul data-svelte-h="svelte-12b5dxa"><li><code>id</code>: the example’s id</li> <li><code>image</code>: a PIL.Image.Image object containing the document image</li> <li><code>query</code>: the question string - natural language asked question, in several languages</li> <li><code>answers</code>: a list of correct answers provided by human annotators</li> <li><code>words</code> and <code>bounding_boxes</code>: the results of OCR, which we will not use here</li> <li><code>answer</code>: an answer matched by a different model which we will not use here</li></ul> <p data-svelte-h="svelte-1h0f0qo">Let’s leave only English questions, and drop the <code>answer</code> feature which appears to contain predictions by another model. We’ll also take the first of the answers from the set provided by the annotators. 
```py
>>> updated_dataset = dataset.map(lambda example: {"question": example["query"]["en"]}, remove_columns=["query"])
>>> updated_dataset = updated_dataset.map(
...     lambda example: {"answer": example["answers"][0]}, remove_columns=["answer", "answers"]
... )
```
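If you prefer the random-sampling alternative mentioned above, a minimal sketch could look like the following. It is meant to replace the second `map` call above, not to run after it, since the `answers` column is removed there:

```py
>>> import random

>>> # Hypothetical variant of the second map call: sample one of the annotator
>>> # answers at random instead of always taking the first one.
>>> updated_dataset = updated_dataset.map(
...     lambda example: {"answer": random.choice(example["answers"])}, remove_columns=["answer", "answers"]
... )
```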
Note that the LayoutLMv2 checkpoint that we use in this guide has been trained with `max_position_embeddings = 512` (you can find this information in the [checkpoint's `config.json` file](https://huggingface.co/microsoft/layoutlmv2-base-uncased/blob/main/config.json#L18)). We can truncate the examples, but to avoid the situation where the answer might be at the end of a large document and end up truncated, here we'll remove the few examples where the embedding is likely to end up longer than 512. If most of the documents in your dataset are long, you can implement a sliding window strategy - check out [this notebook](https://github.com/huggingface/notebooks/blob/main/examples/question_answering.ipynb) for details.

```py
>>> updated_dataset = updated_dataset.filter(lambda x: len(x["words"]) + len(x["question"].split()) < 512)
```

At this point let's also remove the OCR features from this dataset. These are a result of OCR for fine-tuning a different model. They would still require some processing if we wanted to use them, as they do not match the input requirements of the model we use in this guide. Instead, we can use the [LayoutLMv2Processor](/docs/transformers/v4.34.0/en/model_doc/layoutlmv2#transformers.LayoutLMv2Processor) on the original data for both OCR and tokenization. This way we'll get the inputs that match the model's expected input.
If you want to process images manually, check out the [`LayoutLMv2` model documentation](../model_doc/layoutlmv2) to learn what input format the model expects.

```py
>>> updated_dataset = updated_dataset.remove_columns("words")
>>> updated_dataset = updated_dataset.remove_columns("bounding_boxes")
```

Finally, the data exploration won't be complete if we don't peek at an image example.

```py
>>> updated_dataset["train"][11]["image"]
```

![DocVQA Image Example](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/docvqa_example.jpg)
## Preprocess the data

The Document Question Answering task is a multimodal task, and you need to make sure that the inputs from each modality are preprocessed according to the model's expectations. Let's start by loading the [LayoutLMv2Processor](/docs/transformers/v4.34.0/en/model_doc/layoutlmv2#transformers.LayoutLMv2Processor), which internally combines an image processor that can handle image data and a tokenizer that can encode text data.

```py
>>> from transformers import AutoProcessor

>>> processor = AutoProcessor.from_pretrained(model_checkpoint)
```
### Preprocessing document images

First, let's prepare the document images for the model with the help of the `image_processor` from the processor. By default, the image processor resizes the images to 224x224, makes sure they have the correct order of color channels, and applies OCR with tesseract to get words and normalized bounding boxes. In this tutorial, all of these defaults are exactly what we need. Write a function that applies the default image processing to a batch of images and returns the results of OCR.

```py
>>> image_processor = processor.image_processor


>>> def get_ocr_words_and_boxes(examples):
...     images = [image.convert("RGB") for image in examples["image"]]
...     encoded_inputs = image_processor(images)
...     examples["image"] = encoded_inputs.pixel_values
...     examples["words"] = encoded_inputs.words
...     examples["boxes"] = encoded_inputs.boxes
...     return examples
```
</span> <span class="hljs-keyword">return</span> examples</pre></div> <p data-svelte-h="svelte-1rxmj99">To apply this preprocessing to the entire dataset in a fast way, use <a href="https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.map" rel="nofollow">map</a>.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>dataset_with_ocr = updated_dataset.<span class="hljs-built_in">map</span>(get_ocr_words_and_boxes, batched=<span class="hljs-literal">True</span>, batch_size=<span class="hljs-number">2</span>)</pre></div> <h3 class="relative group"><a id="preprocessing-text-data" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#preprocessing-text-data"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1epmnus">Preprocessing text data</span></h3> <p data-svelte-h="svelte-dfarfe">Once we have applied OCR to the images, we need to encode the text part of the dataset to prepare it for the model. This involves converting the words and boxes that we got in the previous step to token-level <code>input_ids</code>, <code>attention_mask</code>, <code>token_type_ids</code> and <code>bbox</code>. 
### Preprocessing text data

Once we have applied OCR to the images, we need to encode the text part of the dataset to prepare it for the model. This involves converting the words and boxes that we got in the previous step to token-level `input_ids`, `attention_mask`, `token_type_ids` and `bbox`. For preprocessing text, we'll need the `tokenizer` from the processor.

```py
>>> tokenizer = processor.tokenizer
```
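For a sense of what this encoding looks like, here is a small illustration with made-up words and boxes; the box coordinates below are arbitrary 0-1000 normalized values, not values taken from the dataset:

```py
>>> # Made-up question, words and word-level boxes, just to illustrate the token-level encoding.
>>> # The resulting encoding contains input_ids, attention_mask, token_type_ids and bbox.
>>> encoding = tokenizer(
...     "what is the date?",
...     ["may", "8,", "1995"],
...     boxes=[[10, 10, 40, 20], [45, 10, 60, 20], [65, 10, 95, 20]],
... )
>>> len(encoding["bbox"]) == len(encoding["input_ids"])  # one box per token
True
```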
On top of the preprocessing mentioned above, we also need to add the labels for the model. For `xxxForQuestionAnswering` models in 🤗 Transformers, the labels consist of the `start_positions` and `end_positions`, indicating which token is at the start and which token is at the end of the answer.

Let's start with that. Define a helper function that can find a sublist (the answer split into words) in a larger list (the words list).

This function takes two lists as input, `words_list` and `answer_list`. It iterates over `words_list` and checks whether the current word (`words_list[idx]`) is equal to the first word of `answer_list` (`answer_list[0]`), and whether the sublist of `words_list` that starts at the current word and has the same length as `answer_list` is equal to `answer_list`. If this condition is true, it means that a match has been found, and the function records the match, its starting index (`idx`), and its ending index (`idx + len(answer_list) - 1`). If more than one match was found, the function returns only the first one. If no match is found, the function returns (`None`, 0, 0).

```py
>>> def subfinder(words_list, answer_list):
...     matches = []
...     start_indices = []
...     end_indices = []
...     for idx in range(len(words_list)):
...         if words_list[idx] == answer_list[0] and words_list[idx : idx + len(answer_list)] == answer_list:
...             matches.append(answer_list)
...             start_indices.append(idx)
...             end_indices.append(idx + len(answer_list) - 1)
...     if matches:
...         return matches[0], start_indices[0], end_indices[0]
...     else:
...         return None, 0, 0
```
</span> <span class="hljs-keyword">return</span> <span class="hljs-literal">None</span>, <span class="hljs-number">0</span>, <span class="hljs-number">0</span></pre></div> <p data-svelte-h="svelte-19pibjd">To illustrate how this function finds the position of the answer, let’s use it on an example:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>example = dataset_with_ocr[<span class="hljs-string">"train"</span>][<span class="hljs-number">1</span>] <span class="hljs-meta">&gt;&gt;&gt; </span>words = [word.lower() <span class="hljs-keyword">for</span> word <span class="hljs-keyword">in</span> example[<span class="hljs-string">"words"</span>]] <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">match</span>, word_idx_start, word_idx_end = subfinder(words, example[<span class="hljs-string">"answer"</span>].lower().split()) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-built_in">print</span>(<span class="hljs-string">"Question: "</span>, example[<span class="hljs-string">"question"</span>]) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-built_in">print</span>(<span class="hljs-string">"Words:"</span>, words) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-built_in">print</span>(<span class="hljs-string">"Answer: "</span>, example[<span class="hljs-string">"answer"</span>]) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-built_in">print</span>(<span class="hljs-string">"start_index"</span>, word_idx_start) <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-built_in">print</span>(<span class="hljs-string">"end_index"</span>, word_idx_end) Question: Who <span class="hljs-keyword">is</span> <span class="hljs-keyword">in</span> cc <span class="hljs-keyword">in</span> this letter? 
Words: [<span class="hljs-string">'wie'</span>, <span class="hljs-string">'baw'</span>, <span class="hljs-string">'brown'</span>, <span class="hljs-string">'&amp;'</span>, <span class="hljs-string">'williamson'</span>, <span class="hljs-string">'tobacco'</span>, <span class="hljs-string">'corporation'</span>, <span class="hljs-string">'research'</span>, <span class="hljs-string">'&amp;'</span>, <span class="hljs-string">'development'</span>, <span class="hljs-string">'internal'</span>, <span class="hljs-string">'correspondence'</span>, <span class="hljs-string">'to:'</span>, <span class="hljs-string">'r.'</span>, <span class="hljs-string">'h.'</span>, <span class="hljs-string">'honeycutt'</span>, <span class="hljs-string">'ce:'</span>, <span class="hljs-string">'t.f.'</span>, <span class="hljs-string">'riehl'</span>, <span class="hljs-string">'from:'</span>, <span class="hljs-string">'.'</span>, <span class="hljs-string">'c.j.'</span>, <span class="hljs-string">'cook'</span>, <span class="hljs-string">'date:'</span>, <span class="hljs-string">'may'</span>, <span class="hljs-string">'8,'</span>, <span class="hljs-string">'1995'</span>, <span class="hljs-string">'subject:'</span>, <span class="hljs-string">'review'</span>, <span class="hljs-string">'of'</span>, <span class="hljs-string">'existing'</span>, <span class="hljs-string">'brainstorming'</span>, <span class="hljs-string">'ideas/483'</span>, <span class="hljs-string">'the'</span>, <span class="hljs-string">'major'</span>, <span class="hljs-string">'function'</span>, <span class="hljs-string">'of'</span>, <span class="hljs-string">'the'</span>, <span class="hljs-string">'product'</span>, <span class="hljs-string">'innovation'</span>, <span class="hljs-string">'graup'</span>, <span class="hljs-string">'is'</span>, <span class="hljs-string">'to'</span>, <span class="hljs-string">'develop'</span>, <span class="hljs-string">'marketable'</span>, <span class="hljs-string">'nove!'</span>, <span class="hljs-string">'products'</span>, <span class="hljs-string">'that'</span>, <span class="hljs-string">'would'</span>, <span class="hljs-string">'be'</span>, <span class="hljs-string">'profitable'</span>, <span class="hljs-string">'to'</span>, <span class="hljs-string">'manufacture'</span>, <span class="hljs-string">'and'</span>, <span class="hljs-string">'sell.'</span>, <span class="hljs-string">'novel'</span>, <span class="hljs-string">'is'</span>, <span class="hljs-string">'defined'</span>, <span class="hljs-string">'as:'</span>, <span class="hljs-string">'of'</span>, <span class="hljs-string">'a'</span>, <span class="hljs-string">'new'</span>, <span class="hljs-string">'kind,'</span>, <span class="hljs-string">'or'</span>, <span class="hljs-string">'different'</span>, <span class="hljs-string">'from'</span>, <span class="hljs-string">'anything'</span>, <span class="hljs-string">'seen'</span>, <span class="hljs-string">'or'</span>, <span class="hljs-string">'known'</span>, <span class="hljs-string">'before.'</span>, <span class="hljs-string">'innovation'</span>, <span class="hljs-string">'is'</span>, <span class="hljs-string">'defined'</span>, <span class="hljs-string">'as:'</span>, <span class="hljs-string">'something'</span>, <span class="hljs-string">'new'</span>, <span class="hljs-string">'or'</span>, <span class="hljs-string">'different'</span>, <span class="hljs-string">'introduced;'</span>, <span class="hljs-string">'act'</span>, <span class="hljs-string">'of'</span>, <span class="hljs-string">'innovating;'</span>, <span 
class="hljs-string">'introduction'</span>, <span class="hljs-string">'of'</span>, <span class="hljs-string">'new'</span>, <span class="hljs-string">'things'</span>, <span class="hljs-string">'or'</span>, <span class="hljs-string">'methods.'</span>, <span class="hljs-string">'the'</span>, <span class="hljs-string">'products'</span>, <span class="hljs-string">'may'</span>, <span class="hljs-string">'incorporate'</span>, <span class="hljs-string">'the'</span>, <span class="hljs-string">'latest'</span>, <span class="hljs-string">'technologies,'</span>, <span class="hljs-string">'materials'</span>, <span class="hljs-string">'and'</span>, <span class="hljs-string">'know-how'</span>, <span class="hljs-string">'available'</span>, <span class="hljs-string">'to'</span>, <span class="hljs-string">'give'</span>, <span class="hljs-string">'then'</span>, <span class="hljs-string">'a'</span>, <span class="hljs-string">'unique'</span>, <span class="hljs-string">'taste'</span>, <span class="hljs-string">'or'</span>, <span class="hljs-string">'look.'</span>, <span class="hljs-string">'the'</span>, <span class="hljs-string">'first'</span>, <span class="hljs-string">'task'</span>, <span class="hljs-string">'of'</span>, <span class="hljs-string">'the'</span>, <span class="hljs-string">'product'</span>, <span class="hljs-string">'innovation'</span>, <span class="hljs-string">'group'</span>, <span class="hljs-string">'was'</span>, <span class="hljs-string">'to'</span>, <span class="hljs-string">'assemble,'</span>, <span class="hljs-string">'review'</span>, <span class="hljs-string">'and'</span>, <span class="hljs-string">'categorize'</span>, <span class="hljs-string">'a'</span>, <span class="hljs-string">'list'</span>, <span class="hljs-string">'of'</span>, <span class="hljs-string">'existing'</span>, <span class="hljs-string">'brainstorming'</span>, <span class="hljs-string">'ideas.'</span>, <span class="hljs-string">'ideas'</span>, <span class="hljs-string">'were'</span>, <span class="hljs-string">'grouped'</span>, <span class="hljs-string">'into'</span>, <span class="hljs-string">'two'</span>, <span class="hljs-string">'major'</span>, <span class="hljs-string">'categories'</span>, <span class="hljs-string">'labeled'</span>, <span class="hljs-string">'appearance'</span>, <span class="hljs-string">'and'</span>, <span class="hljs-string">'taste/aroma.'</span>, <span class="hljs-string">'these'</span>, <span class="hljs-string">'categories'</span>, <span class="hljs-string">'are'</span>, <span class="hljs-string">'used'</span>, <span class="hljs-string">'for'</span>, <span class="hljs-string">'novel'</span>, <span class="hljs-string">'products'</span>, <span class="hljs-string">'that'</span>, <span class="hljs-string">'may'</span>, <span class="hljs-string">'differ'</span>, <span class="hljs-string">'from'</span>, <span class="hljs-string">'a'</span>, <span class="hljs-string">'visual'</span>, <span class="hljs-string">'and/or'</span>, <span class="hljs-string">'taste/aroma'</span>, <span class="hljs-string">'point'</span>, <span class="hljs-string">'of'</span>, <span class="hljs-string">'view'</span>, <span class="hljs-string">'compared'</span>, <span class="hljs-string">'to'</span>, <span class="hljs-string">'canventional'</span>, <span class="hljs-string">'cigarettes.'</span>, <span class="hljs-string">'other'</span>, <span class="hljs-string">'categories'</span>, <span class="hljs-string">'include'</span>, <span class="hljs-string">'a'</span>, <span class="hljs-string">'combination'</span>, <span 
class="hljs-string">'of'</span>, <span class="hljs-string">'the'</span>, <span class="hljs-string">'above,'</span>, <span class="hljs-string">'filters,'</span>, <span class="hljs-string">'packaging'</span>, <span class="hljs-string">'and'</span>, <span class="hljs-string">'brand'</span>, <span class="hljs-string">'extensions.'</span>, <span class="hljs-string">'appearance'</span>, <span class="hljs-string">'this'</span>, <span class="hljs-string">'category'</span>, <span class="hljs-string">'is'</span>, <span class="hljs-string">'used'</span>, <span class="hljs-string">'for'</span>, <span class="hljs-string">'novel'</span>, <span class="hljs-string">'cigarette'</span>, <span class="hljs-string">'constructions'</span>, <span class="hljs-string">'that'</span>, <span class="hljs-string">'yield'</span>, <span class="hljs-string">'visually'</span>, <span class="hljs-string">'different'</span>, <span class="hljs-string">'products'</span>, <span class="hljs-string">'with'</span>, <span class="hljs-string">'minimal'</span>, <span class="hljs-string">'changes'</span>, <span class="hljs-string">'in'</span>, <span class="hljs-string">'smoke'</span>, <span class="hljs-string">'chemistry'</span>, <span class="hljs-string">'two'</span>, <span class="hljs-string">'cigarettes'</span>, <span class="hljs-string">'in'</span>, <span class="hljs-string">'cne.'</span>, <span class="hljs-string">'emulti-plug'</span>, <span class="hljs-string">'te'</span>, <span class="hljs-string">'build'</span>, <span class="hljs-string">'yaur'</span>, <span class="hljs-string">'awn'</span>, <span class="hljs-string">'cigarette.'</span>, <span class="hljs-string">'eswitchable'</span>, <span class="hljs-string">'menthol'</span>, <span class="hljs-string">'or'</span>, <span class="hljs-string">'non'</span>, <span class="hljs-string">'menthol'</span>, <span class="hljs-string">'cigarette.'</span>, <span class="hljs-string">'*cigarettes'</span>, <span class="hljs-string">'with'</span>, <span class="hljs-string">'interspaced'</span>, <span class="hljs-string">'perforations'</span>, <span class="hljs-string">'to'</span>, <span class="hljs-string">'enable'</span>, <span class="hljs-string">'smoker'</span>, <span class="hljs-string">'to'</span>, <span class="hljs-string">'separate'</span>, <span class="hljs-string">'unburned'</span>, <span class="hljs-string">'section'</span>, <span class="hljs-string">'for'</span>, <span class="hljs-string">'future'</span>, <span class="hljs-string">'smoking.'</span>, <span class="hljs-string">'«short'</span>, <span class="hljs-string">'cigarette,'</span>, <span class="hljs-string">'tobacco'</span>, <span class="hljs-string">'section'</span>, <span class="hljs-string">'30'</span>, <span class="hljs-string">'mm.'</span>, <span class="hljs-string">'«extremely'</span>, <span class="hljs-string">'fast'</span>, <span class="hljs-string">'buming'</span>, <span class="hljs-string">'cigarette.'</span>, <span class="hljs-string">'«novel'</span>, <span class="hljs-string">'cigarette'</span>, <span class="hljs-string">'constructions'</span>, <span class="hljs-string">'that'</span>, <span class="hljs-string">'permit'</span>, <span class="hljs-string">'a'</span>, <span class="hljs-string">'significant'</span>, <span class="hljs-string">'reduction'</span>, <span class="hljs-string">'iretobacco'</span>, <span class="hljs-string">'weight'</span>, <span class="hljs-string">'while'</span>, <span class="hljs-string">'maintaining'</span>, <span class="hljs-string">'smoking'</span>, <span 
class="hljs-string">'mechanics'</span>, <span class="hljs-string">'and'</span>, <span class="hljs-string">'visual'</span>, <span class="hljs-string">'characteristics.'</span>, <span class="hljs-string">'higher'</span>, <span class="hljs-string">'basis'</span>, <span class="hljs-string">'weight'</span>, <span class="hljs-string">'paper:'</span>, <span class="hljs-string">'potential'</span>, <span class="hljs-string">'reduction'</span>, <span class="hljs-string">'in'</span>, <span class="hljs-string">'tobacco'</span>, <span class="hljs-string">'weight.'</span>, <span class="hljs-string">'«more'</span>, <span class="hljs-string">'rigid'</span>, <span class="hljs-string">'tobacco'</span>, <span class="hljs-string">'column;'</span>, <span class="hljs-string">'stiffing'</span>, <span class="hljs-string">'agent'</span>, <span class="hljs-string">'for'</span>, <span class="hljs-string">'tobacco;'</span>, <span class="hljs-string">'e.g.'</span>, <span class="hljs-string">'starch'</span>, <span class="hljs-string">'*colored'</span>, <span class="hljs-string">'tow'</span>, <span class="hljs-string">'and'</span>, <span class="hljs-string">'cigarette'</span>, <span class="hljs-string">'papers;'</span>, <span class="hljs-string">'seasonal'</span>, <span class="hljs-string">'promotions,'</span>, <span class="hljs-string">'e.g.'</span>, <span class="hljs-string">'pastel'</span>, <span class="hljs-string">'colored'</span>, <span class="hljs-string">'cigarettes'</span>, <span class="hljs-string">'for'</span>, <span class="hljs-string">'easter'</span>, <span class="hljs-string">'or'</span>, <span class="hljs-string">'in'</span>, <span class="hljs-string">'an'</span>, <span class="hljs-string">'ebony'</span>, <span class="hljs-string">'and'</span>, <span class="hljs-string">'ivory'</span>, <span class="hljs-string">'brand'</span>, <span class="hljs-string">'containing'</span>, <span class="hljs-string">'a'</span>, <span class="hljs-string">'mixture'</span>, <span class="hljs-string">'of'</span>, <span class="hljs-string">'all'</span>, <span class="hljs-string">'black'</span>, <span class="hljs-string">'(black'</span>, <span class="hljs-string">'paper'</span>, <span class="hljs-string">'and'</span>, <span class="hljs-string">'tow)'</span>, <span class="hljs-string">'and'</span>, <span class="hljs-string">'ail'</span>, <span class="hljs-string">'white'</span>, <span class="hljs-string">'cigarettes.'</span>, <span class="hljs-string">'499150498'</span>] Answer: T.F. 
Riehl start_index <span class="hljs-number">17</span> end_index <span class="hljs-number">18</span></pre></div> <p data-svelte-h="svelte-19lp6r8">Once examples are encoded, however, they will look like this:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>encoding = tokenizer(example[<span class="hljs-string">"question"</span>], example[<span class="hljs-string">"words"</span>], example[<span class="hljs-string">"boxes"</span>]) <span class="hljs-meta">&gt;&gt;&gt; </span>tokenizer.decode(encoding[<span class="hljs-string">"input_ids"</span>]) [CLS] who <span class="hljs-keyword">is</span> <span class="hljs-keyword">in</span> cc <span class="hljs-keyword">in</span> this letter? 
[SEP] wie baw brown &amp; williamson tobacco corporation research &amp; development ...</pre></div> <p data-svelte-h="svelte-1tk94l">We’ll need to find the position of the answer in the encoded input.</p> <ul data-svelte-h="svelte-zfehno"><li><code>token_type_ids</code> tells us which tokens are part of the question, and which ones are part of the document’s words.</li> <li><code>tokenizer.cls_token_id</code> will help find the special token at the beginning of the input.</li> <li><code>word_ids</code> will help match the answer found in the original <code>words</code> to the same answer in the full encoded input and determine the start/end position of the answer in the encoded input.</li></ul> <p data-svelte-h="svelte-701rvg">With that in mind, let’s create a function to encode a batch of examples in the dataset:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">def</span> <span class="hljs-title function_">encode_dataset</span>(<span class="hljs-params">examples, max_length=<span class="hljs-number">512</span></span>): <span class="hljs-meta">... </span> questions = examples[<span class="hljs-string">"question"</span>] <span class="hljs-meta">... </span> words = examples[<span class="hljs-string">"words"</span>] <span class="hljs-meta">... </span> boxes = examples[<span class="hljs-string">"boxes"</span>] <span class="hljs-meta">... </span> answers = examples[<span class="hljs-string">"answer"</span>] <span class="hljs-meta">... </span> <span class="hljs-comment"># encode the batch of examples and initialize the start_positions and end_positions</span> <span class="hljs-meta">... </span> encoding = tokenizer(questions, words, boxes, max_length=max_length, padding=<span class="hljs-string">"max_length"</span>, truncation=<span class="hljs-literal">True</span>) <span class="hljs-meta">... </span> start_positions = [] <span class="hljs-meta">... </span> end_positions = [] <span class="hljs-meta">... </span> <span class="hljs-comment"># loop through the examples in the batch</span> <span class="hljs-meta">... </span> <span class="hljs-keyword">for</span> i <span class="hljs-keyword">in</span> <span class="hljs-built_in">range</span>(<span class="hljs-built_in">len</span>(questions)): <span class="hljs-meta">... 
</span> cls_index = encoding[<span class="hljs-string">"input_ids"</span>][i].index(tokenizer.cls_token_id) <span class="hljs-meta">... </span> <span class="hljs-comment"># find the position of the answer in example's words</span> <span class="hljs-meta">... </span> words_example = [word.lower() <span class="hljs-keyword">for</span> word <span class="hljs-keyword">in</span> words[i]] <span class="hljs-meta">... </span> answer = answers[i] <span class="hljs-meta">... </span> <span class="hljs-keyword">match</span>, word_idx_start, word_idx_end = subfinder(words_example, answer.lower().split()) <span class="hljs-meta">... </span> <span class="hljs-keyword">if</span> <span class="hljs-keyword">match</span>: <span class="hljs-meta">... </span> <span class="hljs-comment"># if match is found, use `token_type_ids` to find where words start in the encoding</span> <span class="hljs-meta">... </span> token_type_ids = encoding[<span class="hljs-string">"token_type_ids"</span>][i] <span class="hljs-meta">... </span> token_start_index = <span class="hljs-number">0</span> <span class="hljs-meta">... </span> <span class="hljs-keyword">while</span> token_type_ids[token_start_index] != <span class="hljs-number">1</span>: <span class="hljs-meta">... </span> token_start_index += <span class="hljs-number">1</span> <span class="hljs-meta">... </span> token_end_index = <span class="hljs-built_in">len</span>(encoding[<span class="hljs-string">"input_ids"</span>][i]) - <span class="hljs-number">1</span> <span class="hljs-meta">... </span> <span class="hljs-keyword">while</span> token_type_ids[token_end_index] != <span class="hljs-number">1</span>: <span class="hljs-meta">... </span> token_end_index -= <span class="hljs-number">1</span> <span class="hljs-meta">... </span> word_ids = encoding.word_ids(i)[token_start_index : token_end_index + <span class="hljs-number">1</span>] <span class="hljs-meta">... </span> start_position = cls_index <span class="hljs-meta">... </span> end_position = cls_index <span class="hljs-meta">... </span> <span class="hljs-comment"># loop over word_ids and increase `token_start_index` until it matches the answer position in words</span> <span class="hljs-meta">... </span> <span class="hljs-comment"># once it matches, save the `token_start_index` as the `start_position` of the answer in the encoding</span> <span class="hljs-meta">... </span> <span class="hljs-keyword">for</span> <span class="hljs-built_in">id</span> <span class="hljs-keyword">in</span> word_ids: <span class="hljs-meta">... </span> <span class="hljs-keyword">if</span> <span class="hljs-built_in">id</span> == word_idx_start: <span class="hljs-meta">... </span> start_position = token_start_index <span class="hljs-meta">... </span> <span class="hljs-keyword">else</span>: <span class="hljs-meta">... </span> token_start_index += <span class="hljs-number">1</span> <span class="hljs-meta">... </span> <span class="hljs-comment"># similarly loop over `word_ids` starting from the end to find the `end_position` of the answer</span> <span class="hljs-meta">... </span> <span class="hljs-keyword">for</span> <span class="hljs-built_in">id</span> <span class="hljs-keyword">in</span> word_ids[::-<span class="hljs-number">1</span>]: <span class="hljs-meta">... </span> <span class="hljs-keyword">if</span> <span class="hljs-built_in">id</span> == word_idx_end: <span class="hljs-meta">... </span> end_position = token_end_index <span class="hljs-meta">... </span> <span class="hljs-keyword">else</span>: <span class="hljs-meta">... 
</span> token_end_index -= <span class="hljs-number">1</span> <span class="hljs-meta">... </span> start_positions.append(start_position) <span class="hljs-meta">... </span> end_positions.append(end_position) <span class="hljs-meta">... </span> <span class="hljs-keyword">else</span>: <span class="hljs-meta">... </span> start_positions.append(cls_index) <span class="hljs-meta">... </span> end_positions.append(cls_index) <span class="hljs-meta">... </span> encoding[<span class="hljs-string">"image"</span>] = examples[<span class="hljs-string">"image"</span>] <span class="hljs-meta">... </span> encoding[<span class="hljs-string">"start_positions"</span>] = start_positions <span class="hljs-meta">... </span> encoding[<span class="hljs-string">"end_positions"</span>] = end_positions <span class="hljs-meta">... </span> <span class="hljs-keyword">return</span> encoding</pre></div> <p data-svelte-h="svelte-1ori799">Now that we have this preprocessing function, we can encode the entire dataset:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>encoded_train_dataset = dataset_with_ocr[<span class="hljs-string">"train"</span>].<span class="hljs-built_in">map</span>( <span class="hljs-meta">... </span> encode_dataset, batched=<span class="hljs-literal">True</span>, batch_size=<span class="hljs-number">2</span>, remove_columns=dataset_with_ocr[<span class="hljs-string">"train"</span>].column_names <span class="hljs-meta">... </span>) <span class="hljs-meta">&gt;&gt;&gt; </span>encoded_test_dataset = dataset_with_ocr[<span class="hljs-string">"test"</span>].<span class="hljs-built_in">map</span>( <span class="hljs-meta">... </span> encode_dataset, batched=<span class="hljs-literal">True</span>, batch_size=<span class="hljs-number">2</span>, remove_columns=dataset_with_ocr[<span class="hljs-string">"test"</span>].column_names <span class="hljs-meta">... 
</span>)</pre></div> <p data-svelte-h="svelte-upxsp">Let’s check what the features of the encoded dataset look like:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>encoded_train_dataset.features {<span class="hljs-string">'image'</span>: <span class="hljs-type">Sequence</span>(feature=<span class="hljs-type">Sequence</span>(feature=<span class="hljs-type">Sequence</span>(feature=Value(dtype=<span class="hljs-string">'uint8'</span>, <span class="hljs-built_in">id</span>=<span class="hljs-literal">None</span>), length=-<span class="hljs-number">1</span>, <span class="hljs-built_in">id</span>=<span class="hljs-literal">None</span>), length=-<span class="hljs-number">1</span>, <span class="hljs-built_in">id</span>=<span class="hljs-literal">None</span>), length=-<span class="hljs-number">1</span>, <span class="hljs-built_in">id</span>=<span class="hljs-literal">None</span>), <span class="hljs-string">'input_ids'</span>: <span class="hljs-type">Sequence</span>(feature=Value(dtype=<span class="hljs-string">'int32'</span>, <span class="hljs-built_in">id</span>=<span class="hljs-literal">None</span>), length=-<span class="hljs-number">1</span>, <span class="hljs-built_in">id</span>=<span class="hljs-literal">None</span>), <span class="hljs-string">'token_type_ids'</span>: <span class="hljs-type">Sequence</span>(feature=Value(dtype=<span class="hljs-string">'int8'</span>, <span class="hljs-built_in">id</span>=<span class="hljs-literal">None</span>), length=-<span class="hljs-number">1</span>, <span class="hljs-built_in">id</span>=<span class="hljs-literal">None</span>), <span class="hljs-string">'attention_mask'</span>: <span class="hljs-type">Sequence</span>(feature=Value(dtype=<span class="hljs-string">'int8'</span>, <span class="hljs-built_in">id</span>=<span class="hljs-literal">None</span>), length=-<span class="hljs-number">1</span>, <span class="hljs-built_in">id</span>=<span class="hljs-literal">None</span>), <span class="hljs-string">'bbox'</span>: <span class="hljs-type">Sequence</span>(feature=<span class="hljs-type">Sequence</span>(feature=Value(dtype=<span class="hljs-string">'int64'</span>, <span class="hljs-built_in">id</span>=<span class="hljs-literal">None</span>), length=-<span class="hljs-number">1</span>, <span class="hljs-built_in">id</span>=<span class="hljs-literal">None</span>), length=-<span 
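Before moving on, it can help to confirm that the labels line up with the text. The snippet below is an optional sanity check that is not part of the original guide; it simply reuses the `encoded_train_dataset` and `tokenizer` objects defined above to decode the labeled span of the first training example (if no answer was matched, it decodes to the `[CLS]` token):

```
>>> # Optional sanity check (not from the original guide): decode the labeled answer span.
>>> sample = encoded_train_dataset[0]
>>> answer_span = sample["input_ids"][sample["start_positions"] : sample["end_positions"] + 1]
>>> print(tokenizer.decode(answer_span))
```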
class="hljs-number">1</span>, <span class="hljs-built_in">id</span>=<span class="hljs-literal">None</span>), <span class="hljs-string">'start_positions'</span>: Value(dtype=<span class="hljs-string">'int64'</span>, <span class="hljs-built_in">id</span>=<span class="hljs-literal">None</span>), <span class="hljs-string">'end_positions'</span>: Value(dtype=<span class="hljs-string">'int64'</span>, <span class="hljs-built_in">id</span>=<span class="hljs-literal">None</span>)}</pre></div> <h2 class="relative group"><a id="evaluation" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#evaluation"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-rkze07">Evaluation</span></h2> <p data-svelte-h="svelte-155fpzb">Evaluation for document question answering requires a significant amount of postprocessing. To avoid taking up too much of your time, this guide skips the evaluation step. The <a href="/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer">Trainer</a> still calculates the evaluation loss during training so you’re not completely in the dark about your model’s performance. Extractive question answering is typically evaluated using F1/exact match. If you’d like to implement it yourself, check out the <a href="https://huggingface.co/course/chapter7/7?fw=pt#postprocessing" rel="nofollow">Question Answering chapter</a> of the Hugging Face course for inspiration.</p> <h2 class="relative group"><a id="train" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#train"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-5arm0l">Train</span></h2> <p data-svelte-h="svelte-10f6ay">Congratulations! You’ve successfully navigated the toughest part of this guide and now you are ready to train your own model. 
## Train

Congratulations! You’ve successfully navigated the toughest part of this guide and now you are ready to train your own model. Training involves the following steps:

- Load the model with [AutoModelForDocumentQuestionAnswering](/docs/transformers/v4.34.0/en/model_doc/auto#transformers.AutoModelForDocumentQuestionAnswering) using the same checkpoint as in the preprocessing.
- Define your training hyperparameters in [TrainingArguments](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.TrainingArguments).
- Define a function to batch examples together; here the [DefaultDataCollator](/docs/transformers/v4.34.0/en/main_classes/data_collator#transformers.DefaultDataCollator) will do just fine.
- Pass the training arguments to [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) along with the model, dataset, and data collator.
- Call [train()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.train) to finetune your model.

```
>>> from transformers import AutoModelForDocumentQuestionAnswering

>>> model = AutoModelForDocumentQuestionAnswering.from_pretrained(model_checkpoint)
```

In the [TrainingArguments](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.TrainingArguments) use `output_dir` to specify where to save your model, and configure hyperparameters as you see fit. If you wish to share your model with the community, set `push_to_hub` to `True` (you must be signed in to Hugging Face to upload your model). In this case the `output_dir` will also be the name of the repo where your model checkpoint will be pushed.

```
>>> from transformers import TrainingArguments

>>> # REPLACE THIS WITH YOUR REPO ID
>>> repo_id = "MariaK/layoutlmv2-base-uncased_finetuned_docvqa"

>>> training_args = TrainingArguments(
...     output_dir=repo_id,
...     per_device_train_batch_size=4,
...     num_train_epochs=20,
...     save_steps=200,
...     logging_steps=50,
...     evaluation_strategy="steps",
...     learning_rate=5e-5,
...     save_total_limit=2,
...     remove_unused_columns=False,
...     push_to_hub=True,
... )
```

Define a simple data collator to batch examples together.

```
>>> from transformers import DefaultDataCollator

>>> data_collator = DefaultDataCollator()
```

Finally, bring everything together, and call [train()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.train):

```
>>> from transformers import Trainer

>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     data_collator=data_collator,
...     train_dataset=encoded_train_dataset,
...     eval_dataset=encoded_test_dataset,
...     tokenizer=processor,
... )

>>> trainer.train()
```

To add the final model to 🤗 Hub, create a model card and call `push_to_hub`:

```
>>> trainer.create_model_card()
>>> trainer.push_to_hub()
```

## Inference

Now that you have finetuned a LayoutLMv2 model, and uploaded it to the 🤗 Hub, you can use it for inference. The simplest way to try out your finetuned model for inference is to use it in a [Pipeline](/docs/transformers/v4.34.0/en/main_classes/pipelines#transformers.Pipeline).

Let’s take an example:

```
>>> example = dataset["test"][2]
>>> question = example["query"]["en"]
>>> image = example["image"]
>>> print(question)
>>> print(example["answers"])
'Who is ‘presiding’ TRRF GENERAL SESSION (PART 1)?'
['TRRF Vice President', 'lee a. waller']
```

Next, instantiate a pipeline for document question answering with your model, and pass the image + question combination to it.

```
>>> from transformers import pipeline

>>> qa_pipeline = pipeline("document-question-answering", model="MariaK/layoutlmv2-base-uncased_finetuned_docvqa")
>>> qa_pipeline(image, question)
[{'score': 0.9949808120727539, 'answer': 'Lee A. Waller', 'start': 55, 'end': 57}]
```

You can also manually replicate the results of the pipeline if you’d like:

1. Take an image and a question, prepare them for the model using the processor from your model.
2. Forward the result of preprocessing through the model.
3. The model returns `start_logits` and `end_logits`, which indicate which token is at the start of the answer and which token is at the end of the answer. Both have shape (batch_size, sequence_length).
4. Take an argmax on the last dimension of both the `start_logits` and `end_logits` to get the predicted `start_idx` and `end_idx`.
5. Decode the answer with the tokenizer.

```
>>> import torch
>>> from transformers import AutoProcessor
>>> from transformers import AutoModelForDocumentQuestionAnswering

>>> processor = AutoProcessor.from_pretrained("MariaK/layoutlmv2-base-uncased_finetuned_docvqa")
>>> model = AutoModelForDocumentQuestionAnswering.from_pretrained("MariaK/layoutlmv2-base-uncased_finetuned_docvqa")

>>> with torch.no_grad():
...     encoding = processor(image.convert("RGB"), question, return_tensors="pt")
...     outputs = model(**encoding)
...     start_logits = outputs.start_logits
...     end_logits = outputs.end_logits
...     predicted_start_idx = start_logits.argmax(-1).item()
...     predicted_end_idx = end_logits.argmax(-1).item()

>>> processor.tokenizer.decode(encoding.input_ids.squeeze()[predicted_start_idx : predicted_end_idx + 1])
'lee a. waller'
```
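The pipeline also reported a score. If you want something comparable from the manual approach, one rough sketch (not part of the original guide) is to softmax the logits computed above at the predicted indices:

```
>>> # Hedged sketch: a rough confidence estimate from the start/end logits above.
>>> start_prob = start_logits.softmax(-1)[0, predicted_start_idx].item()
>>> end_prob = end_logits.softmax(-1)[0, predicted_end_idx].item()
>>> print(start_prob * end_prob)
```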
class="pl-8 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-preprocessing-text-data"><wbr>Preprocessing text data</a> <a href="#evaluation" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-evaluation"><wbr>Evaluation</a> <a href="#train" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-train"><wbr>Train</a> <a href="#inference" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-inference"><wbr>Inference</a> </nav></div></div></div> <div id="doc-footer"></div></main> </div> <script> import("/front/build/kube-5e23f38/index.js"); window.moonSha = "kube-5e23f38/"; window.hubConfig = JSON.parse(`{"features":{"signupDisabled":false},"sshGitUrl":"git@hf.co","moonHttpUrl":"https://huggingface.co","captchaApiKey":"bd5f2066-93dc-4bdd-a64b-a24646ca3859","stripePublicKey":"pk_live_x2tdjFXBCvXo2FFmMybezpeM00J6gPCAAc","environment":"production","userAgent":"HuggingFace (production)"}`); </script> <!-- Stripe --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://js.stripe.com/v3/"; script.async = true; document.head.appendChild(script); } </script> <!-- Google analytics v4 --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL"; script.async = true; document.head.appendChild(script); window.dataLayer = window.dataLayer || []; function gtag() { if (window.dataLayer !== undefined) { window.dataLayer.push(arguments); } } gtag("js", new Date()); gtag("config", "G-8Q63TH4CSL", { page_path: "/docs/transformers/v4.34.0/en/tasks/document_question_answering" }); /// ^ See https://developers.google.com/analytics/devguides/collection/gtagjs/pages gtag("consent", "default", { ad_storage: "denied", analytics_storage: "denied" }); /// ^ See https://developers.google.com/tag-platform/gtagjs/reference#consent /// TODO: ask the user for their consent and update this with gtag('consent', 'update') } </script> <!-- Google Analytics v3 (deprecated) --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { (function (i, s, o, g, r, a, m) { i["GoogleAnalyticsObject"] = r; (i[r] = i[r] || function () { (i[r].q = i[r].q || []).push(arguments); }), (i[r].l = 1 * new Date()); (a = s.createElement(o)), (m = s.getElementsByTagName(o)[0]); a.async = 1; a.src = g; m.parentNode.insertBefore(a, m); })(window, document, "script", "https://www.google-analytics.com/analytics.js", "ganalytics"); ganalytics("create", "UA-83738774-2", "auto"); ganalytics("send", "pageview", "/docs/transformers/v4.34.0/en/tasks/document_question_answering"); } </script> <iframe name="__privateStripeMetricsController7770" frameborder="0" allowtransparency="true" scrolling="no" role="presentation" allow="payment *" src="https://js.stripe.com/v3/m-outer-27c67c0d52761104439bb051c7856ab1.html#url=https%3A%2F%2Fhuggingface.co%2Fdocs%2Ftransformers%2Fv4.34.0%2Fen%2Ftasks%2Fdocument_question_answering&amp;title=Document%20Question%20Answering&amp;referrer=&amp;muid=NA&amp;sid=NA&amp;version=6&amp;preview=false" aria-hidden="true" tabindex="-1" style="border: none !important; margin: 0px !important; padding: 0px !important; width: 1px !important; min-width: 100% 
!important; overflow: hidden !important; display: block !important; visibility: hidden !important; position: fixed !important; height: 1px !important; pointer-events: none !important; user-select: none !important;"></iframe></body></html>
2023-10-05T13:33:55.093Z
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/data2vec-vision
The documentation page MODEL\_DOC/DATA2VEC-VISION doesn’t exist in v4.34.0, but exists on the main version. Click [here](/docs/transformers/main/en/model_doc/data2vec-vision) to redirect to the main version of the documentation.
2023-10-05T13:33:55.485Z
Video classification
https://huggingface.co/docs/transformers/v4.34.0/en/tasks/video_classification
# Video classification

Video classification is the task of assigning a label or class to an entire video. Each video is expected to have only one class. Video classification models take a video as input and return a prediction about which class the video belongs to. These models can be used to categorize what a video is all about. A real-world application of video classification is action / activity recognition, which is useful for fitness applications. It is also helpful for vision-impaired individuals, especially when they are commuting.

This guide will show you how to:

1. Fine-tune [VideoMAE](https://huggingface.co/docs/transformers/main/en/model_doc/videomae) on a subset of the [UCF101](https://www.crcv.ucf.edu/data/UCF101.php) dataset.
2. Use your fine-tuned model for inference.

The task illustrated in this tutorial is supported by the following model architectures: [TimeSformer](../model_doc/timesformer), [VideoMAE](../model_doc/videomae), [ViViT](../model_doc/vivit)

Before you begin, make sure you have all the necessary libraries installed:

```
pip install -q pytorchvideo transformers evaluate
```

You will use [PyTorchVideo](https://pytorchvideo.org/) (dubbed `pytorchvideo`) to process and prepare the videos.

We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:

```
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```

## Load UCF101 dataset

Start by loading a subset of the [UCF-101 dataset](https://www.crcv.ucf.edu/data/UCF101.php). This will give you a chance to experiment and make sure everything works before spending more time training on the full dataset.

```
>>> from huggingface_hub import hf_hub_download

>>> hf_dataset_identifier = "sayakpaul/ucf101-subset"
>>> filename = "UCF101_subset.tar.gz"
>>> file_path = hf_hub_download(repo_id=hf_dataset_identifier, filename=filename, repo_type="dataset")
```

After the subset has been downloaded, you need to extract the compressed archive:

```
>>> import tarfile

>>> with tarfile.open(file_path) as t:
...     t.extractall(".")
```

At a high level, the dataset is organized like so:

```
UCF101_subset/
    train/
        BandMarching/
            video_1.mp4
            video_2.mp4
            ...
        Archery
            video_1.mp4
            video_2.mp4
            ...
        ...
    val/
        BandMarching/
            video_1.mp4
            video_2.mp4
            ...
        Archery
            video_1.mp4
            video_2.mp4
            ...
        ...
    test/
        BandMarching/
            video_1.mp4
            video_2.mp4
            ...
        Archery
            video_1.mp4
            video_2.mp4
            ...
        ...
```

The (`sorted`) video paths appear like so:

```
...
'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g07_c04.avi',
'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g07_c06.avi',
'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g08_c01.avi',
'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g09_c02.avi',
'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g09_c06.avi'
...
```

You will notice that there are video clips belonging to the same group / scene, where the group is denoted by `g` in the video file paths: `v_ApplyEyeMakeup_g07_c04.avi` and `v_ApplyEyeMakeup_g07_c06.avi`, for example. For the validation and evaluation splits, you wouldn’t want to have video clips from the same group / scene, to prevent [data leakage](https://www.kaggle.com/code/alexisbcook/data-leakage). The subset that you are using in this tutorial takes this information into account.
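If you ever assemble your own splits, one way to respect this constraint (a hypothetical sketch, not part of this guide) is to key the split on the group id embedded in each file name, so that all clips from one group land on the same side. The helper name below is made up, and `all_video_file_paths` is the list of video paths gathered earlier in this guide:

```
>>> import re

>>> # Hypothetical helper: pull the group id (e.g. "g07") out of a file name such as
>>> # "v_ApplyEyeMakeup_g07_c04.avi", so clips can be assigned to splits group-wise.
>>> def clip_group(path):
...     found = re.search(r"_(g\d+)_", str(path))
...     return found.group(1) if found else None

>>> path_to_group = {str(path): clip_group(path) for path in all_video_file_paths}
```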
Also, create two dictionaries that'll be helpful when initializing the model:

- `label2id`: maps the class names to integers.
- `id2label`: maps the integers to class names.

```
>>> class_labels = sorted({str(path).split("/")[2] for path in all_video_file_paths})
>>> label2id = {label: i for i, label in enumerate(class_labels)}
>>> id2label = {i: label for label, i in label2id.items()}

>>> print(f"Unique classes: {list(label2id.keys())}.")
```

There are 10 unique classes. For each class, there are 30 videos in the training set.

## Load a model to fine-tune

Instantiate a video classification model from a pretrained checkpoint and its associated image processor. The model's encoder comes with pre-trained parameters, and the classification head is randomly initialized. The image processor will come in handy when writing the preprocessing pipeline for our dataset.

```
>>> from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

>>> model_ckpt = "MCG-NJU/videomae-base"
>>> image_processor = VideoMAEImageProcessor.from_pretrained(model_ckpt)
>>> model = VideoMAEForVideoClassification.from_pretrained(
...     model_ckpt,
...     label2id=label2id,
...     id2label=id2label,
...     ignore_mismatched_sizes=True,
... )
```

While the model is loading, you might notice the following warning:

```
Some weights of the model checkpoint at MCG-NJU/videomae-base were not used when initializing VideoMAEForVideoClassification: [..., 'decoder.decoder_layers.1.attention.output.dense.bias', 'decoder.decoder_layers.2.attention.attention.key.weight']
- This IS expected if you are initializing VideoMAEForVideoClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing VideoMAEForVideoClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of VideoMAEForVideoClassification were not initialized from the model checkpoint at MCG-NJU/videomae-base and are newly initialized: ['classifier.bias', 'classifier.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```

The warning is telling us we are throwing away some weights (e.g. the weights and bias of the `classifier` layer) and randomly initializing some others (the weights and bias of a new `classifier` layer). This is expected in this case, because we are adding a new head for which we don't have pretrained weights, so the library warns us we should fine-tune this model before using it for inference, which is exactly what we are going to do.

**Note** that [this checkpoint](https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics) leads to better performance on this task, as it was obtained by fine-tuning on a similar downstream task with considerable domain overlap. You can check out [this checkpoint](https://huggingface.co/sayakpaul/videomae-base-finetuned-kinetics-finetuned-ucf101-subset), which was obtained by fine-tuning `MCG-NJU/videomae-base-finetuned-kinetics`.

## Prepare the datasets for training

For preprocessing the videos, you will leverage the [PyTorchVideo library](https://pytorchvideo.org/). Start by importing the dependencies we need.

```
>>> import os

>>> import pytorchvideo.data

>>> from pytorchvideo.transforms import (
...     ApplyTransformToKey,
...     Normalize,
...     RandomShortSideScale,
...     RemoveKey,
...     ShortSideScale,
...     UniformTemporalSubsample,
... )

>>> from torchvision.transforms import (
...     Compose,
...     Lambda,
...     RandomCrop,
...     RandomHorizontalFlip,
...     Resize,
... )
```

For the training dataset transformations, use a combination of uniform temporal subsampling, pixel normalization, random cropping, and random horizontal flipping. For the validation and evaluation dataset transformations, keep the same transformation chain except for random cropping and horizontal flipping. To learn more about the details of these transformations, check out the [official documentation of PyTorchVideo](https://pytorchvideo.org/).

Use the `image_processor` associated with the pre-trained model to obtain the following information:

- Image mean and standard deviation with which the video frame pixels will be normalized.
- Spatial resolution to which the video frames will be resized.

Start by defining some constants.

```
>>> mean = image_processor.image_mean
>>> std = image_processor.image_std
>>> if "shortest_edge" in image_processor.size:
...     height = width = image_processor.size["shortest_edge"]
>>> else:
...     height = image_processor.size["height"]
...     width = image_processor.size["width"]
>>> resize_to = (height, width)

>>> num_frames_to_sample = model.config.num_frames
>>> sample_rate = 4
>>> fps = 30
>>> clip_duration = num_frames_to_sample * sample_rate / fps
```

Now, define the dataset-specific transformations and the datasets respectively. Starting with the training set:

```
>>> train_transform = Compose(
...     [
...         ApplyTransformToKey(
...             key="video",
...             transform=Compose(
...                 [
...                     UniformTemporalSubsample(num_frames_to_sample),
...                     Lambda(lambda x: x / 255.0),
...                     Normalize(mean, std),
...                     RandomShortSideScale(min_size=256, max_size=320),
...                     RandomCrop(resize_to),
...                     RandomHorizontalFlip(p=0.5),
...                 ]
...             ),
...         ),
...     ]
... )

>>> train_dataset = pytorchvideo.data.Ucf101(
...     data_path=os.path.join(dataset_root_path, "train"),
...     clip_sampler=pytorchvideo.data.make_clip_sampler("random", clip_duration),
...     decode_audio=False,
...     transform=train_transform,
... )
```

The same workflow can be applied to the validation and evaluation sets:

```
>>> val_transform = Compose(
...     [
...         ApplyTransformToKey(
...             key="video",
...             transform=Compose(
...                 [
...                     UniformTemporalSubsample(num_frames_to_sample),
...                     Lambda(lambda x: x / 255.0),
...                     Normalize(mean, std),
...                     Resize(resize_to),
...                 ]
...             ),
...         ),
...     ]
... )

>>> val_dataset = pytorchvideo.data.Ucf101(
...     data_path=os.path.join(dataset_root_path, "val"),
...     clip_sampler=pytorchvideo.data.make_clip_sampler("uniform", clip_duration),
...     decode_audio=False,
...     transform=val_transform,
... )

>>> test_dataset = pytorchvideo.data.Ucf101(
...     data_path=os.path.join(dataset_root_path, "test"),
...     clip_sampler=pytorchvideo.data.make_clip_sampler("uniform", clip_duration),
...     decode_audio=False,
...     transform=val_transform,
... )
```

**Note**: The above dataset pipelines are taken from the [official PyTorchVideo example](https://pytorchvideo.org/docs/tutorial_classification#dataset). We're using the [`pytorchvideo.data.Ucf101()`](https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html#pytorchvideo.data.Ucf101) function because it's tailored for the UCF-101 dataset. Under the hood, it returns a [`pytorchvideo.data.labeled_video_dataset.LabeledVideoDataset`](https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html#pytorchvideo.data.LabeledVideoDataset) object.
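Before moving on, it can help to pull one clip from `train_dataset` and check its shape: PyTorchVideo returns videos in `(num_channels, num_frames, height, width)` layout, which the `collate_fn` defined later permutes into the `(num_frames, num_channels, height, width)` format the model expects. A quick, optional sanity check; the printed sizes assume the default VideoMAE configuration with 16 frames and a 224x224 crop:

```
# Optional sanity check (not part of the original tutorial): inspect one
# transformed clip. PyTorchVideo returns videos as (C, T, H, W) tensors.
sample = next(iter(train_dataset))
print(sample["video"].shape)  # e.g. torch.Size([3, 16, 224, 224]) for videomae-base
print(sample["label"])        # integer class id, decode with id2label
```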
The `LabeledVideoDataset` class is the base class for all things video in the PyTorchVideo dataset. So, if you want to use a custom dataset not supported off-the-shelf by PyTorchVideo, you can extend the `LabeledVideoDataset` class accordingly. Refer to the `data` API [documentation](https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html) to learn more. Also, if your dataset follows a similar structure (as shown above), then using `pytorchvideo.data.Ucf101()` should work just fine.

You can access the `num_videos` attribute to know the number of videos in the dataset.

```
>>> print(train_dataset.num_videos, val_dataset.num_videos, test_dataset.num_videos)
```

## Visualize the preprocessed video for better debugging

```
>>> import imageio
>>> import numpy as np
>>> from IPython.display import Image

>>> def unnormalize_img(img):
...     """Un-normalizes the image pixels."""
...     img = (img * std) + mean
...     img = (img * 255).astype("uint8")
...     return img.clip(0, 255)

>>> def create_gif(video_tensor, filename="sample.gif"):
...     """Prepares a GIF from a video tensor.
...
...     The video tensor is expected to have the following shape:
...     (num_frames, num_channels, height, width).
...     """
...     frames = []
...     for video_frame in video_tensor:
...         frame_unnormalized = unnormalize_img(video_frame.permute(1, 2, 0).numpy())
...         frames.append(frame_unnormalized)
...     kargs = {"duration": 0.25}
...     imageio.mimsave(filename, frames, "GIF", **kargs)
...     return filename

>>> def display_gif(video_tensor, gif_name="sample.gif"):
...     """Prepares and displays a GIF from a video tensor."""
...     video_tensor = video_tensor.permute(1, 0, 2, 3)
...     gif_filename = create_gif(video_tensor, gif_name)
...     return Image(filename=gif_filename)

>>> sample_video = next(iter(train_dataset))
>>> video_tensor = sample_video["video"]
>>> display_gif(video_tensor)
```

![Person playing basketball](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/sample_gif.gif)

## Train the model

Leverage [`Trainer`](https://huggingface.co/docs/transformers/main_classes/trainer) from 🤗 Transformers for training the model. To instantiate a `Trainer`, you need to define the training configuration and an evaluation metric. The most important one is the [`TrainingArguments`](https://huggingface.co/transformers/main_classes/trainer.html#transformers.TrainingArguments), which is a class that contains all the attributes to configure the training. It requires an output folder name, which will be used to save the checkpoints of the model. It also helps sync all the information in the model repository on 🤗 Hub.

Most of the training arguments are self-explanatory, but one that is quite important here is `remove_unused_columns=False`. This flag drops any feature columns not used by the model's call function. By default it's `True` because usually it's ideal to drop unused feature columns, making it easier to unpack inputs into the model's call function. But, in this case, you need the unused features ('video' in particular) in order to create `pixel_values` (which is a mandatory key our model expects in its inputs).

```
>>> from transformers import TrainingArguments, Trainer

>>> model_name = model_ckpt.split("/")[-1]
>>> new_model_name = f"{model_name}-finetuned-ucf101-subset"
>>> num_epochs = 4
>>> batch_size = 8  # batch size for training and evaluation; adjust to fit your hardware

>>> args = TrainingArguments(
...     new_model_name,
...     remove_unused_columns=False,
...     evaluation_strategy="epoch",
...     save_strategy="epoch",
...     learning_rate=5e-5,
...     per_device_train_batch_size=batch_size,
...     per_device_eval_batch_size=batch_size,
...     warmup_ratio=0.1,
...     logging_steps=10,
...     load_best_model_at_end=True,
...     metric_for_best_model="accuracy",
...     push_to_hub=True,
...     max_steps=(train_dataset.num_videos // batch_size) * num_epochs,
... )
```

The dataset returned by `pytorchvideo.data.Ucf101()` doesn't implement the `__len__` method. As such, we must define `max_steps` when instantiating `TrainingArguments`.

Next, you need to define a function to compute the metrics from the predictions, which will use the `metric` you'll load now. The only preprocessing you have to do is to take the argmax of the predicted logits:

```
import evaluate

metric = evaluate.load("accuracy")


def compute_metrics(eval_pred):
    # eval_pred contains the model logits (predictions) and ground-truth label_ids
    predictions = np.argmax(eval_pred.predictions, axis=1)
    return metric.compute(predictions=predictions, references=eval_pred.label_ids)
```

**A note on evaluation**: In the [VideoMAE paper](https://arxiv.org/abs/2203.12602), the authors use the following evaluation strategy. They evaluate the model on several clips from test videos and apply different crops to those clips and report the aggregate score. However, in the interest of simplicity and brevity, we don't consider that in this tutorial.

Also, define a `collate_fn`, which will be used to batch examples together. Each batch consists of two keys, namely `pixel_values` and `labels`.

```
>>> import torch

>>> def collate_fn(examples):
...     # permute each clip to (num_frames, num_channels, height, width)
...     pixel_values = torch.stack(
...         [example["video"].permute(1, 0, 2, 3) for example in examples]
...     )
...     labels = torch.tensor([example["label"] for example in examples])
...     return {"pixel_values": pixel_values, "labels": labels}
```

Then you just pass all of this along with the datasets to `Trainer`:

```
>>> trainer = Trainer(
...     model,
...     args,
...     train_dataset=train_dataset,
...     eval_dataset=val_dataset,
...     tokenizer=image_processor,
...     compute_metrics=compute_metrics,
...     data_collator=collate_fn,
... )
```

You might wonder why you passed along the `image_processor` as a tokenizer when you preprocessed the data already. This is only to make sure the image processor configuration file (stored as JSON) will also be uploaded to the repo on the Hub.

Now fine-tune the model by calling the `train` method:

```
>>> train_results = trainer.train()
```

Once training is completed, share your model on the Hub with the [push\_to\_hub()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.push_to_hub) method so everyone can use your model:

```
>>> trainer.push_to_hub()
```

## Inference

Great, now that you have fine-tuned a model, you can use it for inference!

Load a video for inference:

```
>>> sample_test_video = next(iter(test_dataset))
```

![Teams playing basketball](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/sample_gif_two.gif)

The simplest way to try out your fine-tuned model for inference is to use it in a [`pipeline`](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.VideoClassificationPipeline).
Instantiate a `pipeline` for video classification with your model, and pass your video to it:

```
>>> from transformers import pipeline

>>> video_cls = pipeline(model="my_awesome_video_cls_model")
>>> video_cls("https://huggingface.co/datasets/sayakpaul/ucf101-subset/resolve/main/v_BasketballDunk_g14_c06.avi")
[{'score': 0.9272987842559814, 'label': 'BasketballDunk'},
 {'score': 0.017777055501937866, 'label': 'BabyCrawling'},
 {'score': 0.01663011871278286, 'label': 'BalanceBeam'},
 {'score': 0.009560945443809032, 'label': 'BandMarching'},
 {'score': 0.0068979403004050255, 'label': 'BaseballPitch'}]
```

You can also manually replicate the results of the `pipeline` if you'd like.

```
>>> def run_inference(model, video):
...     # permute the clip to (num_frames, num_channels, height, width)
...     permuted_sample_test_video = video.permute(1, 0, 2, 3)
...     inputs = {
...         "pixel_values": permuted_sample_test_video.unsqueeze(0),
...         "labels": torch.tensor(
...             [sample_test_video["label"]]
...         ),  # passing labels is optional at inference time
...     }
...     device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
...     inputs = {k: v.to(device) for k, v in inputs.items()}
...     model = model.to(device)
...
...     # forward pass
...     with torch.no_grad():
...         outputs = model(**inputs)
...         logits = outputs.logits
...     return logits
```

Now, pass your input to the model and return the `logits`:

```
>>> logits = run_inference(trained_model, sample_test_video["video"])
```

Decoding the `logits`, we get:

```
>>> predicted_class_idx = logits.argmax(-1).item()
>>> print("Predicted class:", model.config.id2label[predicted_class_idx])
```
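In the manual inference snippet, `trained_model` refers to your fine-tuned model, which this excerpt never defines explicitly. A minimal sketch of two ways it could be obtained; the Hub repository id below is a placeholder, not a value from the original tutorial:

```
# Option 1: reuse the model the Trainer holds right after training finishes
# (with load_best_model_at_end=True this is the best checkpoint seen).
trained_model = trainer.model

# Option 2: reload the checkpoint that trainer.push_to_hub() uploaded.
# Replace "MY_USERNAME/videomae-base-finetuned-ucf101-subset" with your repo id.
from transformers import VideoMAEForVideoClassification

trained_model = VideoMAEForVideoClassification.from_pretrained(
    "MY_USERNAME/videomae-base-finetuned-ucf101-subset"
)
trained_model.eval()
```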
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/
model_doc/wavlm&quot;},{&quot;title&quot;:&quot;Whisper&quot;,&quot;id&quot;:&quot;model_doc/whisper&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/whisper&quot;},{&quot;title&quot;:&quot;XLS-R&quot;,&quot;id&quot;:&quot;model_doc/xls_r&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xls_r&quot;},{&quot;title&quot;:&quot;XLSR-Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/xlsr_wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2&quot;}]},{&quot;title&quot;:&quot;Multimodal models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALIGN&quot;,&quot;id&quot;:&quot;model_doc/align&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/align&quot;},{&quot;title&quot;:&quot;AltCLIP&quot;,&quot;id&quot;:&quot;model_doc/altclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/altclip&quot;},{&quot;title&quot;:&quot;BLIP&quot;,&quot;id&quot;:&quot;model_doc/blip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip&quot;},{&quot;title&quot;:&quot;BLIP-2&quot;,&quot;id&quot;:&quot;model_doc/blip-2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip-2&quot;},{&quot;title&quot;:&quot;BridgeTower&quot;,&quot;id&quot;:&quot;model_doc/bridgetower&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bridgetower&quot;},{&quot;title&quot;:&quot;BROS&quot;,&quot;id&quot;:&quot;model_doc/bros&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bros&quot;},{&quot;title&quot;:&quot;Chinese-CLIP&quot;,&quot;id&quot;:&quot;model_doc/chinese_clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/chinese_clip&quot;},{&quot;title&quot;:&quot;CLIP&quot;,&quot;id&quot;:&quot;model_doc/clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clip&quot;},{&quot;title&quot;:&quot;CLIPSeg&quot;,&quot;id&quot;:&quot;model_doc/clipseg&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clipseg&quot;},{&quot;title&quot;:&quot;Data2Vec&quot;,&quot;id&quot;:&quot;model_doc/data2vec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/data2vec&quot;},{&quot;title&quot;:&quot;DePlot&quot;,&quot;id&quot;:&quot;model_doc/deplot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deplot&quot;},{&quot;title&quot;:&quot;Donut&quot;,&quot;id&quot;:&quot;model_doc/donut&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/donut&quot;},{&quot;title&quot;:&quot;FLAVA&quot;,&quot;id&quot;:&quot;model_doc/flava&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flava&quot;},{&quot;title&quot;:&quot;GIT&quot;,&quot;id&quot;:&quot;model_doc/git&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/git&quot;},{&quot;title&quot;:&quot;GroupViT&quot;,&quot;id&quot;:&quot;model_doc/groupvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/groupvit&quot;},{&quot;title&quot;:&quot;IDEFICS&quot;,&quot;id&quot;:&quot;model_doc/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/idefics&quot;},{&quot;title&quot;:&quot;InstructBLIP&quot;,&quot;id&quot;:&quot;model_doc/instructblip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/instructblip&quot;},{&quot;title&quot;:&quot;LayoutLM&quot;,&quot;id&quot;:&quot;model_doc/layoutlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlm&quot;},{&quot;title&quot;:&quot;LayoutLMV2&quot;,&quot;i
d&quot;:&quot;model_doc/layoutlmv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv2&quot;},{&quot;title&quot;:&quot;LayoutLMV3&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv3&quot;},{&quot;title&quot;:&quot;LayoutXLM&quot;,&quot;id&quot;:&quot;model_doc/layoutxlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutxlm&quot;},{&quot;title&quot;:&quot;LiLT&quot;,&quot;id&quot;:&quot;model_doc/lilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lilt&quot;},{&quot;title&quot;:&quot;LXMERT&quot;,&quot;id&quot;:&quot;model_doc/lxmert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lxmert&quot;},{&quot;title&quot;:&quot;MatCha&quot;,&quot;id&quot;:&quot;model_doc/matcha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/matcha&quot;},{&quot;title&quot;:&quot;MGP-STR&quot;,&quot;id&quot;:&quot;model_doc/mgp-str&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mgp-str&quot;},{&quot;title&quot;:&quot;Nougat&quot;,&quot;id&quot;:&quot;model_doc/nougat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nougat&quot;},{&quot;title&quot;:&quot;OneFormer&quot;,&quot;id&quot;:&quot;model_doc/oneformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/oneformer&quot;},{&quot;title&quot;:&quot;OWL-ViT&quot;,&quot;id&quot;:&quot;model_doc/owlvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/owlvit&quot;},{&quot;title&quot;:&quot;Perceiver&quot;,&quot;id&quot;:&quot;model_doc/perceiver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/perceiver&quot;},{&quot;title&quot;:&quot;Pix2Struct&quot;,&quot;id&quot;:&quot;model_doc/pix2struct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pix2struct&quot;},{&quot;title&quot;:&quot;Segment Anything&quot;,&quot;id&quot;:&quot;model_doc/sam&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sam&quot;},{&quot;title&quot;:&quot;Speech Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/speech-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder&quot;},{&quot;title&quot;:&quot;TAPAS&quot;,&quot;id&quot;:&quot;model_doc/tapas&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapas&quot;},{&quot;title&quot;:&quot;TrOCR&quot;,&quot;id&quot;:&quot;model_doc/trocr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trocr&quot;},{&quot;title&quot;:&quot;TVLT&quot;,&quot;id&quot;:&quot;model_doc/tvlt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tvlt&quot;},{&quot;title&quot;:&quot;ViLT&quot;,&quot;id&quot;:&quot;model_doc/vilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vilt&quot;},{&quot;title&quot;:&quot;Vision Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/vision-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder&quot;},{&quot;title&quot;:&quot;Vision Text Dual 
Encoder&quot;,&quot;id&quot;:&quot;model_doc/vision-text-dual-encoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder&quot;},{&quot;title&quot;:&quot;VisualBERT&quot;,&quot;id&quot;:&quot;model_doc/visual_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/visual_bert&quot;},{&quot;title&quot;:&quot;X-CLIP&quot;,&quot;id&quot;:&quot;model_doc/xclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xclip&quot;}]},{&quot;title&quot;:&quot;Reinforcement learning models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Decision Transformer&quot;,&quot;id&quot;:&quot;model_doc/decision_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/decision_transformer&quot;},{&quot;title&quot;:&quot;Trajectory Transformer&quot;,&quot;id&quot;:&quot;model_doc/trajectory_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer&quot;}]},{&quot;title&quot;:&quot;Time series models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Autoformer&quot;,&quot;id&quot;:&quot;model_doc/autoformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/autoformer&quot;},{&quot;title&quot;:&quot;Informer&quot;,&quot;id&quot;:&quot;model_doc/informer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/informer&quot;},{&quot;title&quot;:&quot;Time Series Transformer&quot;,&quot;id&quot;:&quot;model_doc/time_series_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/time_series_transformer&quot;}]},{&quot;title&quot;:&quot;Graph models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Graphormer&quot;,&quot;id&quot;:&quot;model_doc/graphormer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/graphormer&quot;}]}]},{&quot;title&quot;:&quot;Internal Helpers&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Custom Layers and Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/modeling_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/modeling_utils&quot;},{&quot;title&quot;:&quot;Utilities for pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/pipelines_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/pipelines_utils&quot;},{&quot;title&quot;:&quot;Utilities for Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/tokenization_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/tokenization_utils&quot;},{&quot;title&quot;:&quot;Utilities for Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/trainer_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/trainer_utils&quot;},{&quot;title&quot;:&quot;Utilities for Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/generation_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/generation_utils&quot;},{&quot;title&quot;:&quot;Utilities for Image Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/image_processing_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/image_processing_utils&quot;},{&quot;title&quot;:&quot;Utilities for Audio 
processing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/audio_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/audio_utils&quot;},{&quot;title&quot;:&quot;General Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/file_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/file_utils&quot;},{&quot;title&quot;:&quot;Utilities for Time Series&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/time_series_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/time_series_utils&quot;}]}]}],&quot;chapterId&quot;:&quot;tasks/video_classification&quot;,&quot;docType&quot;:&quot;docs&quot;,&quot;isLoggedIn&quot;:false,&quot;lang&quot;:&quot;en&quot;,&quot;langs&quot;:[&quot;de&quot;,&quot;en&quot;,&quot;es&quot;,&quot;fr&quot;,&quot;it&quot;,&quot;ko&quot;,&quot;pt&quot;,&quot;zh&quot;],&quot;library&quot;:&quot;transformers&quot;,&quot;theme&quot;:&quot;light&quot;,&quot;version&quot;:&quot;v4.34.0&quot;,&quot;versions&quot;:[{&quot;version&quot;:&quot;main&quot;},{&quot;version&quot;:&quot;v4.34.0&quot;},{&quot;version&quot;:&quot;v4.33.3&quot;},{&quot;version&quot;:&quot;v4.33.2&quot;},{&quot;version&quot;:&quot;v4.33.0&quot;},{&quot;version&quot;:&quot;v4.32.1&quot;},{&quot;version&quot;:&quot;v4.32.0&quot;},{&quot;version&quot;:&quot;v4.31.0&quot;},{&quot;version&quot;:&quot;v4.30.0&quot;},{&quot;version&quot;:&quot;v4.29.1&quot;},{&quot;version&quot;:&quot;v4.29.0&quot;},{&quot;version&quot;:&quot;v4.28.1&quot;},{&quot;version&quot;:&quot;v4.28.0&quot;},{&quot;version&quot;:&quot;v4.27.2&quot;},{&quot;version&quot;:&quot;v4.27.1&quot;},{&quot;version&quot;:&quot;v4.27.0&quot;},{&quot;version&quot;:&quot;v4.26.1&quot;},{&quot;version&quot;:&quot;v4.26.0&quot;},{&quot;version&quot;:&quot;v4.25.1&quot;},{&quot;version&quot;:&quot;v4.24.0&quot;},{&quot;version&quot;:&quot;v4.23.1&quot;},{&quot;version&quot;:&quot;v4.23.0&quot;},{&quot;version&quot;:&quot;v4.22.2&quot;},{&quot;version&quot;:&quot;v4.22.1&quot;},{&quot;version&quot;:&quot;v4.22.0&quot;},{&quot;version&quot;:&quot;v4.21.3&quot;},{&quot;version&quot;:&quot;v4.21.2&quot;},{&quot;version&quot;:&quot;v4.21.1&quot;},{&quot;version&quot;:&quot;v4.21.0&quot;},{&quot;version&quot;:&quot;v4.20.1&quot;},{&quot;version&quot;:&quot;v4.20.0&quot;},{&quot;version&quot;:&quot;v4.19.4&quot;},{&quot;version&quot;:&quot;v4.19.3&quot;},{&quot;version&quot;:&quot;v4.19.2&quot;},{&quot;version&quot;:&quot;v4.19.0&quot;},{&quot;version&quot;:&quot;v4.18.0&quot;},{&quot;version&quot;:&quot;v4.17.0&quot;},{&quot;version&quot;:&quot;v4.16.2&quot;},{&quot;version&quot;:&quot;v4.16.1&quot;},{&quot;version&quot;:&quot;v4.16.0&quot;},{&quot;version&quot;:&quot;v4.15.0&quot;},{&quot;version&quot;:&quot;v4.14.1&quot;},{&quot;version&quot;:&quot;v4.13.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.5&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.4&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.10.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;
:&quot;v4.10.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.10.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.0.0&quot;},{&quot;version&quot;:
&quot;doc-builder-html&quot;}],&quot;title&quot;:&quot;Video classification&quot;}" data-target="SideMenu"> <div class="z-2 w-full flex-none lg:block lg:h-screen lg:w-[270px] 2xl:w-[300px] false"><div class="shadow-alternate flex h-16 w-full items-center rounded-b-xl border-b bg-white text-lg leading-tight lg:hidden"><div class="flex flex-1 cursor-pointer flex-col justify-center self-stretch pl-6"><p class="text-sm text-gray-400 first-letter:capitalize">Transformers documentation</p> <div class="flex items-center"><p class="font-semibold">Video classification</p> <svg class="text-xl false" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></div></div> <button class="hover:shadow-alternate group ml-auto mr-6 inline-flex flex-none cursor-pointer rounded-xl border p-2"><svg class="text-gray-500 group-hover:text-gray-700" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg></button></div> <div class="hidden h-32 flex-col justify-between border-r border-b bg-white bg-gradient-to-r p-4 lg:flex from-orange-50 to-white dark:from-gray-900 dark:to-gray-950"><div class="relative "><button class=" " type="button"><h1 class="flex items-center text-lg font-bold leading-tight first-letter:capitalize"><div class="mr-1.5 h-1.5 w-1.5 rounded-full bg-orange-500 flex-none"></div> Transformers <span><svg class="opacity-70 " xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></span></h1> </button> </div> <button class="shadow-alternate flex w-full items-center rounded-full border bg-white px-2 py-1 text-left text-sm text-gray-400 ring-indigo-200 hover:bg-indigo-50 hover:ring-2 dark:border-gray-700 dark:ring-yellow-600 dark:hover:bg-gray-900 dark:hover:text-yellow-500"><svg class="flex-none mr-1.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg> <div>Search documentation</div> <span class="ml-auto rounded border border-gray-200 bg-gray-100 px-0.5 text-xs dark:border-gray-800 dark:bg-gray-800"><kbd class="font-sans">⌘K</kbd></span></button> <div class="flex items-center"><select class="form-input mr-1 !mt-0 !w-20 rounded !border border-gray-200 p-1 text-xs uppercase dark:!text-gray-400"><option value="0">main</option><option value="1">v4.34.0</option><option value="2">v4.33.3</option><option value="3">v4.32.1</option><option value="4">v4.31.0</option><option value="5">v4.30.0</option><option value="6">v4.29.1</option><option value="7">v4.28.1</option><option value="8">v4.27.2</option><option value="9">v4.26.1</option><option 
value="10">v4.25.1</option><option value="11">v4.24.0</option><option value="12">v4.23.1</option><option value="13">v4.22.2</option><option value="14">v4.21.3</option><option value="15">v4.20.1</option><option value="16">v4.19.4</option><option value="17">v4.18.0</option><option value="18">v4.17.0</option><option value="19">v4.16.2</option><option value="20">v4.15.0</option><option value="21">v4.14.1</option><option value="22">v4.13.0</option><option value="23">v4.12.5</option><option value="24">v4.11.3</option><option value="25">v4.10.1</option><option value="26">v4.9.2</option><option value="27">v4.8.2</option><option value="28">v4.7.0</option><option value="29">v4.6.0</option><option value="30">v4.5.1</option><option value="31">v4.4.2</option><option value="32">v4.3.3</option><option value="33">v4.2.2</option><option value="34">v4.1.1</option><option value="35">v4.0.1</option><option value="36">v3.5.1</option><option value="37">v3.4.0</option><option value="38">v3.3.1</option><option value="39">v3.2.0</option><option value="40">v3.1.0</option><option value="41">v3.0.2</option><option value="42">v2.11.0</option><option value="43">v2.10.0</option><option value="44">v2.9.1</option><option value="45">v2.8.0</option><option value="46">v2.7.0</option><option value="47">v2.6.0</option><option value="48">v2.5.1</option><option value="49">v2.4.1</option><option value="50">v2.3.0</option><option value="51">v2.2.2</option><option value="52">v2.1.1</option><option value="53">v2.0.0</option><option value="54">v1.2.0</option><option value="55">v1.1.0</option><option value="56">v1.0.0</option><option value="57">doc-builder-html</option></select> <select class="form-input mr-1 rounded border-gray-200 p-1 text-xs dark:!text-gray-400 !w-12 !mt-0 !border"><option value="de">DE</option><option value="en">EN</option><option value="es">ES</option><option value="fr">FR</option><option value="it">IT</option><option value="ko">KO</option><option value="pt">PT</option><option value="zh">ZH</option></select> <div class="relative inline-block"><button class="rounded-full border border-gray-100 py-1 pl-2 pr-0.5 flex items-center text-sm text-gray-500 bg-white hover:bg-yellow-50 hover:border-yellow-200 dark:hover:bg-gray-800 dark:hover:border-gray-950 " type="button"><svg class="mr-1.5 text-yellow-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24" fill="currentColor"><path d="M6.05 4.14l-.39-.39a.993.993 0 0 0-1.4 0l-.01.01a.984.984 0 0 0 0 1.4l.39.39c.39.39 1.01.39 1.4 0l.01-.01a.984.984 0 0 0 0-1.4zM3.01 10.5H1.99c-.55 0-.99.44-.99.99v.01c0 .55.44.99.99.99H3c.56.01 1-.43 1-.98v-.01c0-.56-.44-1-.99-1zm9-9.95H12c-.56 0-1 .44-1 .99v.96c0 .55.44.99.99.99H12c.56.01 1-.43 1-.98v-.97c0-.55-.44-.99-.99-.99zm7.74 3.21c-.39-.39-1.02-.39-1.41-.01l-.39.39a.984.984 0 0 0 0 1.4l.01.01c.39.39 1.02.39 1.4 0l.39-.39a.984.984 0 0 0 0-1.4zm-1.81 15.1l.39.39a.996.996 0 1 0 1.41-1.41l-.39-.39a.993.993 0 0 0-1.4 0c-.4.4-.4 1.02-.01 1.41zM20 11.49v.01c0 .55.44.99.99.99H22c.55 0 .99-.44.99-.99v-.01c0-.55-.44-.99-.99-.99h-1.01c-.55 0-.99.44-.99.99zM12 5.5c-3.31 0-6 2.69-6 6s2.69 6 6 6s6-2.69 6-6s-2.69-6-6-6zm-.01 16.95H12c.55 0 .99-.44.99-.99v-.96c0-.55-.44-.99-.99-.99h-.01c-.55 0-.99.44-.99.99v.96c0 .55.44.99.99.99zm-7.74-3.21c.39.39 1.02.39 1.41 0l.39-.39a.993.993 0 0 0 0-1.4l-.01-.01a.996.996 0 0 0-1.41 0l-.39.39c-.38.4-.38 1.02.01 1.41z"></path></svg> </button> </div> <a 
href="https://github.com/huggingface/transformers" class="group ml-auto text-xs text-gray-500 hover:text-gray-700 hover:underline dark:hover:text-gray-300"><svg class="inline-block text-gray-500 group-hover:text-gray-700 dark:group-hover:text-gray-300 mr-1.5 -mt-1 w-4 h-4" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1.03em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 250"><path d="M128.001 0C57.317 0 0 57.307 0 128.001c0 56.554 36.676 104.535 87.535 121.46c6.397 1.185 8.746-2.777 8.746-6.158c0-3.052-.12-13.135-.174-23.83c-35.61 7.742-43.124-15.103-43.124-15.103c-5.823-14.795-14.213-18.73-14.213-18.73c-11.613-7.944.876-7.78.876-7.78c12.853.902 19.621 13.19 19.621 13.19c11.417 19.568 29.945 13.911 37.249 10.64c1.149-8.272 4.466-13.92 8.127-17.116c-28.431-3.236-58.318-14.212-58.318-63.258c0-13.975 5-25.394 13.188-34.358c-1.329-3.224-5.71-16.242 1.24-33.874c0 0 10.749-3.44 35.21 13.121c10.21-2.836 21.16-4.258 32.038-4.307c10.878.049 21.837 1.47 32.066 4.307c24.431-16.56 35.165-13.12 35.165-13.12c6.967 17.63 2.584 30.65 1.255 33.873c8.207 8.964 13.173 20.383 13.173 34.358c0 49.163-29.944 59.988-58.447 63.157c4.591 3.972 8.682 11.762 8.682 23.704c0 17.126-.148 30.91-.148 35.126c0 3.407 2.304 7.398 8.792 6.14C219.37 232.5 256 184.537 256 128.002C256 57.307 198.691 0 128.001 0zm-80.06 182.34c-.282.636-1.283.827-2.194.39c-.929-.417-1.45-1.284-1.15-1.922c.276-.655 1.279-.838 2.205-.399c.93.418 1.46 1.293 1.139 1.931zm6.296 5.618c-.61.566-1.804.303-2.614-.591c-.837-.892-.994-2.086-.375-2.66c.63-.566 1.787-.301 2.626.591c.838.903 1 2.088.363 2.66zm4.32 7.188c-.785.545-2.067.034-2.86-1.104c-.784-1.138-.784-2.503.017-3.05c.795-.547 2.058-.055 2.861 1.075c.782 1.157.782 2.522-.019 3.08zm7.304 8.325c-.701.774-2.196.566-3.29-.49c-1.119-1.032-1.43-2.496-.726-3.27c.71-.776 2.213-.558 3.315.49c1.11 1.03 1.45 2.505.701 3.27zm9.442 2.81c-.31 1.003-1.75 1.459-3.199 1.033c-1.448-.439-2.395-1.613-2.103-2.626c.301-1.01 1.747-1.484 3.207-1.028c1.446.436 2.396 1.602 2.095 2.622zm10.744 1.193c.036 1.055-1.193 1.93-2.715 1.95c-1.53.034-2.769-.82-2.786-1.86c0-1.065 1.202-1.932 2.733-1.958c1.522-.03 2.768.818 2.768 1.868zm10.555-.405c.182 1.03-.875 2.088-2.387 2.37c-1.485.271-2.861-.365-3.05-1.386c-.184-1.056.893-2.114 2.376-2.387c1.514-.263 2.868.356 3.061 1.403z" fill="currentColor"></path></svg> 112,792</a></div></div> <nav class="top-32 hidden lg:flex absolute bottom-0 left-0 w-full flex-col overflow-y-auto border-r px-4 pt-3 pb-16 text-[0.95rem] lg:w-[270px] 2xl:w-[300px]"> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Get started</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/index">🤗 Transformers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/quicktour">Quick tour </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 
# Video classification

Video classification is the task of assigning a label or class to an entire video. Each video is expected to have only one class. Video classification models take a video as input and return a prediction about which class the video belongs to. These models can be used to categorize what a video is all about. A real-world application of video classification is action or activity recognition, which is useful for fitness applications. It is also helpful for vision-impaired individuals, especially when they are commuting.

This guide will show you how to:

1. Fine-tune [VideoMAE](https://huggingface.co/docs/transformers/main/en/model_doc/videomae) on a subset of the [UCF101](https://www.crcv.ucf.edu/data/UCF101.php) dataset.
2. Use your fine-tuned model for inference.

The task illustrated in this tutorial is supported by the following model architectures: [TimeSformer](../model_doc/timesformer), [VideoMAE](../model_doc/videomae), [ViViT](../model_doc/vivit).

Before you begin, make sure you have all the necessary libraries installed:
style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class="">pip install -q pytorchvideo transformers evaluate</pre></div> <p data-svelte-h="svelte-cnicg0">You will use <a href="https://pytorchvideo.org/" rel="nofollow">PyTorchVideo</a> (dubbed <code>pytorchvideo</code>) to process and prepare the videos.</p> <p data-svelte-h="svelte-27hn0u">We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> huggingface_hub <span class="hljs-keyword">import</span> notebook_login <span class="hljs-meta">&gt;&gt;&gt; </span>notebook_login()</pre></div> <h2 class="relative group"><a id="load-ucf101-dataset" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#load-ucf101-dataset"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-z7egtd">Load UCF101 dataset</span></h2> <p data-svelte-h="svelte-zo4tpc">Start by loading a subset of the <a href="https://www.crcv.ucf.edu/data/UCF101.php" rel="nofollow">UCF-101 dataset</a>. 
This will give you a chance to experiment and make sure everything works before spending more time training on the full dataset.

```py
>>> from huggingface_hub import hf_hub_download

>>> hf_dataset_identifier = "sayakpaul/ucf101-subset"
>>> filename = "UCF101_subset.tar.gz"
>>> file_path = hf_hub_download(repo_id=hf_dataset_identifier, filename=filename, repo_type="dataset")
```

After the subset has been downloaded, you need to extract the compressed archive:

```py
>>> import tarfile

>>> with tarfile.open(file_path) as t:
...     t.extractall(".")
```
At a high level, the dataset is organized like so:

```bash
UCF101_subset/
    train/
        BandMarching/
            video_1.mp4
            video_2.mp4
            ...
        Archery/
            video_1.mp4
            video_2.mp4
            ...
        ...
    val/
        BandMarching/
            video_1.mp4
            video_2.mp4
            ...
        Archery/
            video_1.mp4
            video_2.mp4
            ...
        ...
    test/
        BandMarching/
            video_1.mp4
            video_2.mp4
            ...
        Archery/
            video_1.mp4
            video_2.mp4
            ...
        ...
```

The (`sorted`) video paths appear like so:
<span class="hljs-string">'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g07_c04.avi'</span>, <span class="hljs-string">'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g07_c06.avi'</span>, <span class="hljs-string">'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g08_c01.avi'</span>, <span class="hljs-string">'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g09_c02.avi'</span>, <span class="hljs-string">'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g09_c06.avi'</span> ...</pre></div> <p data-svelte-h="svelte-1t7u230">You will notice that there are video clips belonging to the same group / scene where group is denoted by <code>g</code> in the video file paths. <code>v_ApplyEyeMakeup_g07_c04.avi</code> and <code>v_ApplyEyeMakeup_g07_c06.avi</code>, for example.</p> <p data-svelte-h="svelte-igo46q">For the validation and evaluation splits, you wouldn’t want to have video clips from the same group / scene to prevent <a href="https://www.kaggle.com/code/alexisbcook/data-leakage" rel="nofollow">data leakage</a>. The subset that you are using in this tutorial takes this information into account.</p> <p data-svelte-h="svelte-4ll1ff">Next up, you will derive the set of labels present in the dataset. Also, create two dictionaries that’ll be helpful when initializing the model:</p> <ul data-svelte-h="svelte-1y0n38a"><li><code>label2id</code>: maps the class names to integers.</li> <li><code>id2label</code>: maps the integers to class names.</li></ul> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>class_labels = <span class="hljs-built_in">sorted</span>({<span class="hljs-built_in">str</span>(path).split(<span class="hljs-string">"/"</span>)[<span class="hljs-number">2</span>] <span class="hljs-keyword">for</span> path <span class="hljs-keyword">in</span> all_video_file_paths}) <span class="hljs-meta">&gt;&gt;&gt; </span>label2id = {label: i <span class="hljs-keyword">for</span> i, label <span class="hljs-keyword">in</span> <span class="hljs-built_in">enumerate</span>(class_labels)} <span class="hljs-meta">&gt;&gt;&gt; </span>id2label = {i: label <span class="hljs-keyword">for</span> label, i <span class="hljs-keyword">in</span> label2id.items()} <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-built_in">print</span>(<span class="hljs-string">f"Unique classes: <span class="hljs-subst">{<span 
class="hljs-built_in">list</span>(label2id.keys())}</span>."</span>) <span class="hljs-comment"># Unique classes: ['ApplyEyeMakeup', 'ApplyLipstick', 'Archery', 'BabyCrawling', 'BalanceBeam', 'BandMarching', 'BaseballPitch', 'Basketball', 'BasketballDunk', 'BenchPress'].</span></pre></div> <p data-svelte-h="svelte-1z0r2k5">There are 10 unique classes. For each class, there are 30 videos in the training set.</p> <h2 class="relative group"><a id="load-a-model-to-finetune" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#load-a-model-to-finetune"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-m33fft">Load a model to fine-tune</span></h2> <p data-svelte-h="svelte-14088fx">Instantiate a video classification model from a pretrained checkpoint and its associated image processor. The model’s encoder comes with pre-trained parameters, and the classification head is randomly initialized. The image processor will come in handy when writing the preprocessing pipeline for our dataset.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> VideoMAEImageProcessor, VideoMAEForVideoClassification <span class="hljs-meta">&gt;&gt;&gt; </span>model_ckpt = <span class="hljs-string">"MCG-NJU/videomae-base"</span> <span class="hljs-meta">&gt;&gt;&gt; </span>image_processor = VideoMAEImageProcessor.from_pretrained(model_ckpt) <span class="hljs-meta">&gt;&gt;&gt; </span>model = 
```py
>>> from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

>>> model_ckpt = "MCG-NJU/videomae-base"
>>> image_processor = VideoMAEImageProcessor.from_pretrained(model_ckpt)
>>> model = VideoMAEForVideoClassification.from_pretrained(
...     model_ckpt,
...     label2id=label2id,
...     id2label=id2label,
...     ignore_mismatched_sizes=True,  # provide this in case you're planning to fine-tune an already fine-tuned checkpoint
... )
```

While the model is loading, you might notice the following warning:

```
Some weights of the model checkpoint at MCG-NJU/videomae-base were not used when initializing VideoMAEForVideoClassification: [..., 'decoder.decoder_layers.1.attention.output.dense.bias', 'decoder.decoder_layers.2.attention.attention.key.weight']
- This IS expected if you are initializing VideoMAEForVideoClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing VideoMAEForVideoClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of VideoMAEForVideoClassification were not initialized from the model checkpoint at MCG-NJU/videomae-base and are newly initialized: ['classifier.bias', 'classifier.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```

The warning is telling us we are throwing away some weights (e.g. the weights and bias of the `classifier` layer) and randomly initializing some others (the weights and bias of a new `classifier` layer).
This is expected in this case, because we are adding a new head for which we don't have pretrained weights, so the library warns us that we should fine-tune this model before using it for inference, which is exactly what we are going to do.

**Note** that [this checkpoint](https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics) leads to better performance on this task, since it was obtained by fine-tuning on a similar downstream task with considerable domain overlap. You can check out [this checkpoint](https://huggingface.co/sayakpaul/videomae-base-finetuned-kinetics-finetuned-ucf101-subset), which was obtained by fine-tuning `MCG-NJU/videomae-base-finetuned-kinetics`.
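If you'd like to experiment with that Kinetics fine-tuned checkpoint instead, it could, for example, be loaded the same way as above. This is only a sketch of the substitution; `ignore_mismatched_sizes=True` is still needed so the Kinetics classification head is discarded and a new head for the 10 UCF101 classes is initialized:

```py
>>> # Sketch: optionally start from the Kinetics fine-tuned checkpoint mentioned above.
>>> model_ckpt = "MCG-NJU/videomae-base-finetuned-kinetics"
>>> image_processor = VideoMAEImageProcessor.from_pretrained(model_ckpt)
>>> model = VideoMAEForVideoClassification.from_pretrained(
...     model_ckpt,
...     label2id=label2id,
...     id2label=id2label,
...     ignore_mismatched_sizes=True,  # the Kinetics head is discarded and a new 10-class head is initialized
... )
```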
Start by importing the dependencies we need.

```
>>> import pytorchvideo.data

>>> from pytorchvideo.transforms import (
...     ApplyTransformToKey,
...     Normalize,
...     RandomShortSideScale,
...     RemoveKey,
...     ShortSideScale,
...     UniformTemporalSubsample,
... )

>>> from torchvision.transforms import (
...     Compose,
...     Lambda,
...     RandomCrop,
...     RandomHorizontalFlip,
...     Resize,
... )
```

For the training dataset transformations, use a combination of uniform temporal subsampling, pixel normalization, random cropping, and random horizontal flipping. For the validation and evaluation dataset transformations, keep the same transformation chain except for random cropping and horizontal flipping.
To learn more about the details of these transformations, check out the [official documentation of PyTorchVideo](https://pytorchvideo.org).

Use the `image_processor` associated with the pre-trained model to obtain the following information:

- Image mean and standard deviation with which the video frame pixels will be normalized.
- Spatial resolution to which the video frames will be resized.

Start by defining some constants.

```
>>> mean = image_processor.image_mean
>>> std = image_processor.image_std
>>> if "shortest_edge" in image_processor.size:
...     height = width = image_processor.size["shortest_edge"]
>>> else:
...     height = image_processor.size["height"]
...     width = image_processor.size["width"]
>>> resize_to = (height, width)

>>> num_frames_to_sample = model.config.num_frames
>>> sample_rate = 4
>>> fps = 30
>>> clip_duration = num_frames_to_sample * sample_rate / fps
```
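To make `clip_duration` concrete, here is the arithmetic under the assumption that the checkpoint samples 16 frames per clip (the value `model.config.num_frames` reports for the VideoMAE base checkpoint):

```
# Assuming num_frames_to_sample = 16, sample_rate = 4 and fps = 30,
# each sampled clip spans 16 * 4 = 64 raw frames of the source video,
# i.e. 64 / 30 ≈ 2.13 seconds of footage.
clip_duration = 16 * 4 / 30
print(round(clip_duration, 2))  # 2.13
```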
Now, define the dataset-specific transformations and the datasets respectively. Starting with the training set:

```
>>> train_transform = Compose(
...     [
...         ApplyTransformToKey(
...             key="video",
...             transform=Compose(
...                 [
...                     UniformTemporalSubsample(num_frames_to_sample),
...                     Lambda(lambda x: x / 255.0),
...                     Normalize(mean, std),
...                     RandomShortSideScale(min_size=256, max_size=320),
...                     RandomCrop(resize_to),
...                     RandomHorizontalFlip(p=0.5),
...                 ]
...             ),
...         ),
...     ]
... )

>>> train_dataset = pytorchvideo.data.Ucf101(
...     data_path=os.path.join(dataset_root_path, "train"),
...     clip_sampler=pytorchvideo.data.make_clip_sampler("random", clip_duration),
...     decode_audio=False,
...     transform=train_transform,
... )
```

The same workflow can be applied to the validation and evaluation sets:

```
>>> val_transform = Compose(
...     [
...         ApplyTransformToKey(
...             key="video",
...             transform=Compose(
...                 [
...                     UniformTemporalSubsample(num_frames_to_sample),
...                     Lambda(lambda x: x / 255.0),
...                     Normalize(mean, std),
...                     Resize(resize_to),
...                 ]
...             ),
...         ),
...     ]
... )

>>> val_dataset = pytorchvideo.data.Ucf101(
...     data_path=os.path.join(dataset_root_path, "val"),
...     clip_sampler=pytorchvideo.data.make_clip_sampler("uniform", clip_duration),
...     decode_audio=False,
...     transform=val_transform,
... )

>>> test_dataset = pytorchvideo.data.Ucf101(
...     data_path=os.path.join(dataset_root_path, "test"),
...     clip_sampler=pytorchvideo.data.make_clip_sampler("uniform", clip_duration),
...     decode_audio=False,
...     transform=val_transform,
... )
```

**Note**: The above dataset pipelines are taken from the [official PyTorchVideo example](https://pytorchvideo.org/docs/tutorial_classification#dataset).
We’re using the [`pytorchvideo.data.Ucf101()`](https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html#pytorchvideo.data.Ucf101) function because it’s tailored for the UCF-101 dataset. Under the hood, it returns a [`pytorchvideo.data.labeled_video_dataset.LabeledVideoDataset`](https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html#pytorchvideo.data.LabeledVideoDataset) object. The `LabeledVideoDataset` class is the base class for all things video in the PyTorchVideo dataset. So, if you want to use a custom dataset not supported off-the-shelf by PyTorchVideo, you can extend the `LabeledVideoDataset` class accordingly (a rough sketch follows the next code block). Refer to the `data` API [documentation](https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html) to learn more. Also, if your dataset follows a similar structure (as shown above), then `pytorchvideo.data.Ucf101()` should work just fine.

You can access the `num_videos` argument to know the number of videos in the dataset.

```
>>> print(train_dataset.num_videos, val_dataset.num_videos, test_dataset.num_videos)
# (300, 30, 75)
```
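As a rough illustration of that last point, a custom dataset can also be assembled directly from a list of `(video_path, label_dict)` pairs instead of the UCF-101 folder layout. The file paths and labels below are made-up placeholders, and the exact constructor arguments are worth double-checking against the PyTorchVideo version you have installed:

```
# A minimal sketch, assuming PyTorchVideo's LabeledVideoDataset API; the paths and labels
# below are placeholders, not part of the UCF-101 subset used in this guide.
from pytorchvideo.data import LabeledVideoDataset, make_clip_sampler

labeled_video_paths = [
    ("videos/archery_clip.mp4", {"label": 0}),
    ("videos/bowling_clip.mp4", {"label": 1}),
]

custom_dataset = LabeledVideoDataset(
    labeled_video_paths=labeled_video_paths,
    clip_sampler=make_clip_sampler("random", clip_duration),
    transform=train_transform,
    decode_audio=False,
)
```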
## Visualize the preprocessed video for better debugging

```
>>> import imageio
>>> import numpy as np
>>> from IPython.display import Image

>>> def unnormalize_img(img):
...     """Un-normalizes the image pixels."""
...     img = (img * std) + mean
...     img = (img * 255).astype("uint8")
...     return img.clip(0, 255)

>>> def create_gif(video_tensor, filename="sample.gif"):
...     """Prepares a GIF from a video tensor.
...
...     The video tensor is expected to have the following shape:
...     (num_frames, num_channels, height, width).
...     """
...     frames = []
...     for video_frame in video_tensor:
...         frame_unnormalized = unnormalize_img(video_frame.permute(1, 2, 0).numpy())
...         frames.append(frame_unnormalized)
...     kargs = {"duration": 0.25}
...     imageio.mimsave(filename, frames, "GIF", **kargs)
...     return filename

>>> def display_gif(video_tensor, gif_name="sample.gif"):
...     """Prepares and displays a GIF from a video tensor."""
...     video_tensor = video_tensor.permute(1, 0, 2, 3)
...     gif_filename = create_gif(video_tensor, gif_name)
...     return Image(filename=gif_filename)

>>> sample_video = next(iter(train_dataset))
>>> video_tensor = sample_video["video"]
>>> display_gif(video_tensor)
```

![Person playing basketball](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/sample_gif.gif)

## Train the model

Leverage [`Trainer`](https://huggingface.co/docs/transformers/main_classes/trainer) from 🤗 Transformers for training the model. To instantiate a `Trainer`, you need to define the training configuration and an evaluation metric. The most important is the [`TrainingArguments`](https://huggingface.co/transformers/main_classes/trainer.html#transformers.TrainingArguments), which is a class that contains all the attributes to configure the training. It requires an output folder name, which will be used to save the checkpoints of the model.
It also helps sync all the information in the model repository on 🤗 Hub.

Most of the training arguments are self-explanatory, but one that is quite important here is `remove_unused_columns=False`. This one will drop any features not used by the model’s call function. By default it’s `True` because usually it’s ideal to drop unused feature columns, making it easier to unpack inputs into the model’s call function. But, in this case, you need the unused features (`video` in particular) in order to create `pixel_values` (which is a mandatory key our model expects in its inputs).

```
>>> from transformers import TrainingArguments, Trainer

>>> model_name = model_ckpt.split("/")[-1]
>>> new_model_name = f"{model_name}-finetuned-ucf101-subset"
>>> num_epochs = 4

>>> args = TrainingArguments(
...     new_model_name,
...     remove_unused_columns=False,
...     evaluation_strategy="epoch",
...     save_strategy="epoch",
...     learning_rate=5e-5,
...     per_device_train_batch_size=batch_size,
...     per_device_eval_batch_size=batch_size,
...     warmup_ratio=0.1,
...     logging_steps=10,
...     load_best_model_at_end=True,
...     metric_for_best_model="accuracy",
...     push_to_hub=True,
...     max_steps=(train_dataset.num_videos // batch_size) * num_epochs,
... )
```

The dataset returned by `pytorchvideo.data.Ucf101()` doesn’t implement the `__len__` method. As such, we must define `max_steps` when instantiating `TrainingArguments`.
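To see what that expression evaluates to, here is the arithmetic under the assumption that `batch_size` was set to 8 earlier in this guide (adjust it if you chose a different value):

```
# Assuming batch_size = 8 and the 300 training videos counted above:
max_steps = (300 // 8) * 4  # 37 steps per "epoch" * 4 epochs = 148 optimization steps
```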
Next, you need to define a function to compute the metrics from the predictions, which will use the `metric` you’ll load now. The only preprocessing you have to do is to take the argmax of our predicted logits:

```
import evaluate

metric = evaluate.load("accuracy")


def compute_metrics(eval_pred):
    predictions = np.argmax(eval_pred.predictions, axis=1)
    return metric.compute(predictions=predictions, references=eval_pred.label_ids)
```

**A note on evaluation**:

In the [VideoMAE paper](https://arxiv.org/abs/2203.12602), the authors use the following evaluation strategy. They evaluate the model on several clips from test videos, apply different crops to those clips, and report the aggregate score. However, in the interest of simplicity and brevity, we don’t consider that in this tutorial.

Also, define a `collate_fn`, which will be used to batch examples together. Each batch consists of 2 keys, namely `pixel_values` and `labels`.

```
>>> def collate_fn(examples):
...     # permute to (num_frames, num_channels, height, width)
...     pixel_values = torch.stack(
...         [example["video"].permute(1, 0, 2, 3) for example in examples]
...     )
...     labels = torch.tensor([example["label"] for example in examples])
...     return {"pixel_values": pixel_values, "labels": labels}
```
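If you want to sanity-check the shapes the collator produces, a quick sketch (reusing the `sample_video` drawn earlier for visualization, and assuming the 224x224 resolution and 16 sampled frames of the base checkpoint) might look like this:

```
# A rough shape check, assuming resize_to == (224, 224) and num_frames_to_sample == 16.
batch = collate_fn([sample_video, sample_video])
print(batch["pixel_values"].shape)  # torch.Size([2, 16, 3, 224, 224])
print(batch["labels"].shape)        # torch.Size([2])
```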
</span> <span class="hljs-keyword">return</span> {<span class="hljs-string">"pixel_values"</span>: pixel_values, <span class="hljs-string">"labels"</span>: labels}</pre></div> <p data-svelte-h="svelte-16wl6hd">Then you just pass all of this along with the datasets to <code>Trainer</code>:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>trainer = Trainer( <span class="hljs-meta">... </span> model, <span class="hljs-meta">... </span> args, <span class="hljs-meta">... </span> train_dataset=train_dataset, <span class="hljs-meta">... </span> eval_dataset=val_dataset, <span class="hljs-meta">... </span> tokenizer=image_processor, <span class="hljs-meta">... </span> compute_metrics=compute_metrics, <span class="hljs-meta">... </span> data_collator=collate_fn, <span class="hljs-meta">... </span>)</pre></div> <p data-svelte-h="svelte-g6wg27">You might wonder why you passed along the <code>image_processor</code> as a tokenizer when you preprocessed the data already. 
This is only to make sure the image processor configuration file (stored as JSON) will also be uploaded to the repo on the Hub.

Now fine-tune our model by calling the `train` method:

```
>>> train_results = trainer.train()
```

Once training is completed, share your model to the Hub with the [push_to_hub()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.push_to_hub) method so everyone can use your model:

```
>>> trainer.push_to_hub()
```
## Inference

Great, now that you have fine-tuned a model, you can use it for inference!

Load a video for inference:

```
>>> sample_test_video = next(iter(test_dataset))
```

![Teams playing basketball](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/sample_gif_two.gif)

The simplest way to try out your fine-tuned model for inference is to use it in a [`pipeline`](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.VideoClassificationPipeline).
Instantiate a `pipeline` for video classification with your model, and pass your video to it:

```
>>> from transformers import pipeline

>>> video_cls = pipeline(model="my_awesome_video_cls_model")
>>> video_cls("https://huggingface.co/datasets/sayakpaul/ucf101-subset/resolve/main/v_BasketballDunk_g14_c06.avi")
[{'score': 0.9272987842559814, 'label': 'BasketballDunk'},
 {'score': 0.017777055501937866, 'label': 'BabyCrawling'},
 {'score': 0.01663011871278286, 'label': 'BalanceBeam'},
 {'score': 0.009560945443809032, 'label': 'BandMarching'},
 {'score': 0.0068979403004050255, 'label': 'BaseballPitch'}]
```

You can also manually replicate the results of the `pipeline` if you’d like.
fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">def</span> <span class="hljs-title function_">run_inference</span>(<span class="hljs-params">model, video</span>): <span class="hljs-meta">... </span> <span class="hljs-comment"># (num_frames, num_channels, height, width)</span> <span class="hljs-meta">... </span> perumuted_sample_test_video = video.permute(<span class="hljs-number">1</span>, <span class="hljs-number">0</span>, <span class="hljs-number">2</span>, <span class="hljs-number">3</span>) <span class="hljs-meta">... </span> inputs = { <span class="hljs-meta">... </span> <span class="hljs-string">"pixel_values"</span>: perumuted_sample_test_video.unsqueeze(<span class="hljs-number">0</span>), <span class="hljs-meta">... </span> <span class="hljs-string">"labels"</span>: torch.tensor( <span class="hljs-meta">... </span> [sample_test_video[<span class="hljs-string">"label"</span>]] <span class="hljs-meta">... </span> ), <span class="hljs-comment"># this can be skipped if you don't have labels available.</span> <span class="hljs-meta">... </span> } <span class="hljs-meta">... </span> device = torch.device(<span class="hljs-string">"cuda"</span> <span class="hljs-keyword">if</span> torch.cuda.is_available() <span class="hljs-keyword">else</span> <span class="hljs-string">"cpu"</span>) <span class="hljs-meta">... </span> inputs = {k: v.to(device) <span class="hljs-keyword">for</span> k, v <span class="hljs-keyword">in</span> inputs.items()} <span class="hljs-meta">... </span> model = model.to(device) <span class="hljs-meta">... </span> <span class="hljs-comment"># forward pass</span> <span class="hljs-meta">... </span> <span class="hljs-keyword">with</span> torch.no_grad(): <span class="hljs-meta">... </span> outputs = model(**inputs) <span class="hljs-meta">... </span> logits = outputs.logits <span class="hljs-meta">... 
</span> <span class="hljs-keyword">return</span> logits</pre></div> <p data-svelte-h="svelte-12olihs">Now, pass your input to the model and return the <code>logits</code>:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class="">&gt;&gt;&gt; logits = run<span class="hljs-constructor">_inference(<span class="hljs-params">trained_model</span>, <span class="hljs-params">sample_test_video</span>[<span class="hljs-string">"video"</span>])</span></pre></div> <p data-svelte-h="svelte-1v8qszj">Decoding the <code>logits</code>, we get:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>predicted_class_idx = logits.argmax(-<span class="hljs-number">1</span>).item() <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-built_in">print</span>(<span class="hljs-string">"Predicted class:"</span>, model.config.id2label[predicted_class_idx]) <span class="hljs-comment"># Predicted class: BasketballDunk</span></pre></div> <p></p> <div id="svelte-announcer" aria-live="assertive" aria-atomic="true" style="position: absolute; left: 0px; top: 0px; clip: rect(0px, 0px, 0px, 0px); clip-path: inset(50%); overflow: hidden; white-space: nowrap; width: 1px; height: 1px;"></div></div> <div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 
2023-10-05T13:33:55.684Z
https://huggingface.co/docs/transformers/v4.34.0/en/tasks/model_doc/vit
The documentation page TASKS/MODEL\_DOC/VIT doesn’t exist in v4.34.0, but exists on the main version. Click [here](/docs/transformers/main/en/tasks/model_doc/vit) to redirect to the main version of the documentation.
2023-10-05T13:33:55.706Z
Text to speech
https://huggingface.co/docs/transformers/v4.34.0/en/tasks/text-to-speech
# Text to speech Text-to-speech (TTS) is the task of creating natural-sounding speech from text, where the speech can be generated in multiple languages and for multiple speakers. Several text-to-speech models are currently available in 🤗 Transformers, such as [Bark](../model_doc/bark), [MMS](../model_doc/mms), [VITS](../model_doc/vits) and [SpeechT5](../model_doc/speecht5). You can easily generate audio using the `"text-to-audio"` pipeline (or its alias - `"text-to-speech"`). Some models, like Bark, can also be conditioned to generate non-verbal communications such as laughing, sighing and crying, or even add music. Here’s an example of how you would use the `"text-to-speech"` pipeline with Bark: ``` >>> from transformers import pipeline >>> pipe = pipeline("text-to-speech", model="suno/bark-small") >>> text = "[clears throat] This is a test ... and I just took a long pause." >>> output = pipe(text) ``` Here’s a code snippet you can use to listen to the resulting audio in a notebook: ``` >>> from IPython.display import Audio >>> Audio(output["audio"], rate=output["sampling_rate"]) ``` For more examples on what Bark and other pretrained TTS models can do, refer to our [Audio course](https://huggingface.co/learn/audio-course/chapter6/pre-trained_models). If you are looking to fine-tune a TTS model, you can currently fine-tune SpeechT5 only. SpeechT5 is pre-trained on a combination of speech-to-text and text-to-speech data, allowing it to learn a unified space of hidden representations shared by both text and speech. This means that the same pre-trained model can be fine-tuned for different tasks. Furthermore, SpeechT5 supports multiple speakers through x-vector speaker embeddings. The remainder of this guide illustrates how to: 1. Fine-tune [SpeechT5](../model_doc/speecht5) that was originally trained on English speech on the Dutch (`nl`) language subset of the [VoxPopuli](https://huggingface.co/datasets/facebook/voxpopuli) dataset. 2. Use your refined model for inference in one of two ways: using a pipeline or directly. Before you begin, make sure you have all the necessary libraries installed: ``` pip install datasets soundfile speechbrain accelerate ``` Install 🤗Transformers from source as not all the SpeechT5 features have been merged into an official release yet: ``` pip install git+https://github.com/huggingface/transformers.git ``` To follow this guide you will need a GPU. If you’re working in a notebook, run the following line to check if a GPU is available: We encourage you to log in to your Hugging Face account to upload and share your model with the community. When prompted, enter your token to log in: ``` >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## Load the dataset [VoxPopuli](https://huggingface.co/datasets/facebook/voxpopuli) is a large-scale multilingual speech corpus consisting of data sourced from 2009-2020 European Parliament event recordings. It contains labelled audio-transcription data for 15 European languages. In this guide, we are using the Dutch language subset, feel free to pick another subset. Note that VoxPopuli or any other automated speech recognition (ASR) dataset may not be the most suitable option for training TTS models. The features that make it beneficial for ASR, such as excessive background noise, are typically undesirable in TTS. However, finding top-quality, multilingual, and multi-speaker TTS datasets can be quite challenging. 
Let’s load the data:

```
>>> from datasets import load_dataset, Audio

>>> dataset = load_dataset("facebook/voxpopuli", "nl", split="train")
>>> len(dataset)
20968
```

20968 examples should be sufficient for fine-tuning. SpeechT5 expects audio data to have a sampling rate of 16 kHz, so make sure the examples in the dataset meet this requirement:

```
dataset = dataset.cast_column("audio", Audio(sampling_rate=16000))
```

## Preprocess the data

Let’s begin by defining the model checkpoint to use and loading the appropriate processor:

```
>>> from transformers import SpeechT5Processor

>>> checkpoint = "microsoft/speecht5_tts"
>>> processor = SpeechT5Processor.from_pretrained(checkpoint)
```

### Text cleanup for SpeechT5 tokenization

Start by cleaning up the text data. You’ll need the tokenizer part of the processor to process the text:

```
>>> tokenizer = processor.tokenizer
```

The dataset examples contain `raw_text` and `normalized_text` features. When deciding which feature to use as the text input, consider that the SpeechT5 tokenizer doesn’t have any tokens for numbers. In `normalized_text` the numbers are written out as text. Thus, it is a better fit, and we recommend using `normalized_text` as the input text.

Because SpeechT5 was trained on the English language, it may not recognize certain characters in the Dutch dataset. If left as is, these characters will be converted to `<unk>` tokens. However, in Dutch, certain characters like `à` are used to stress syllables. In order to preserve the meaning of the text, we can replace this character with a regular `a`.

To identify unsupported tokens, extract all unique characters in the dataset using the `SpeechT5Tokenizer` which works with characters as tokens. To do this, write the `extract_all_chars` mapping function that concatenates the transcriptions from all examples into one string and converts it to a set of characters. Make sure to set `batched=True` and `batch_size=-1` in `dataset.map()` so that all transcriptions are available at once for the mapping function.

```
>>> def extract_all_chars(batch):
...     all_text = " ".join(batch["normalized_text"])
...     vocab = list(set(all_text))
...     return {"vocab": [vocab], "all_text": [all_text]}


>>> vocabs = dataset.map(
...     extract_all_chars,
...     batched=True,
...     batch_size=-1,
...     keep_in_memory=True,
...     remove_columns=dataset.column_names,
... )

>>> dataset_vocab = set(vocabs["vocab"][0])
>>> tokenizer_vocab = {k for k, _ in tokenizer.get_vocab().items()}
```

Now you have two sets of characters: one with the vocabulary from the dataset and one with the vocabulary from the tokenizer. To identify any unsupported characters in the dataset, you can take the difference between these two sets. The resulting set will contain the characters that are in the dataset but not in the tokenizer.

```
>>> dataset_vocab - tokenizer_vocab
{' ', 'à', 'ç', 'è', 'ë', 'í', 'ï', 'ö', 'ü'}
```

To handle the unsupported characters identified in the previous step, define a function that maps these characters to valid tokens. Note that spaces are already replaced by `▁` in the tokenizer and don’t need to be handled separately.

```
>>> replacements = [
...     ("à", "a"),
...     ("ç", "c"),
...     ("è", "e"),
...     ("ë", "e"),
...     ("í", "i"),
...     ("ï", "i"),
...     ("ö", "o"),
...     ("ü", "u"),
... ]


>>> def cleanup_text(inputs):
...     for src, dst in replacements:
...         inputs["normalized_text"] = inputs["normalized_text"].replace(src, dst)
...     return inputs


>>> dataset = dataset.map(cleanup_text)
```

Now that you have dealt with special characters in the text, it’s time to shift focus to the audio data.

### Speakers

The VoxPopuli dataset includes speech from multiple speakers, but how many speakers are represented in the dataset? To determine this, we can count the number of unique speakers and the number of examples each speaker contributes to the dataset. With a total of 20,968 examples in the dataset, this information will give us a better understanding of the distribution of speakers and examples in the data.

```
>>> from collections import defaultdict

>>> speaker_counts = defaultdict(int)

>>> for speaker_id in dataset["speaker_id"]:
...     speaker_counts[speaker_id] += 1
```

By plotting a histogram you can get a sense of how much data there is for each speaker.

```
>>> import matplotlib.pyplot as plt

>>> plt.figure()
>>> plt.hist(speaker_counts.values(), bins=20)
>>> plt.ylabel("Speakers")
>>> plt.xlabel("Examples")
>>> plt.show()
```

![Speakers histogram](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/tts_speakers_histogram.png)

The histogram reveals that approximately one-third of the speakers in the dataset have fewer than 100 examples, while around ten speakers have more than 500 examples. To improve training efficiency and balance the dataset, we can limit the data to speakers with between 100 and 400 examples.

```
>>> def select_speaker(speaker_id):
...     return 100 <= speaker_counts[speaker_id] <= 400


>>> dataset = dataset.filter(select_speaker, input_columns=["speaker_id"])
```

Let’s check how many speakers remain:

```
>>> len(set(dataset["speaker_id"]))
42
```

Checking `len(dataset)` again shows that you are left with just under 10,000 examples from approximately 40 unique speakers, which should be sufficient.

Note that some speakers with few examples may actually have more audio available if the examples are long. However, determining the total amount of audio for each speaker requires scanning through the entire dataset, which is a time-consuming process that involves loading and decoding each audio file. As such, we have chosen to skip this step here.

### Speaker embeddings

To enable the TTS model to differentiate between multiple speakers, you’ll need to create a speaker embedding for each example. The speaker embedding is an additional input into the model that captures a particular speaker’s voice characteristics. To generate these speaker embeddings, use the pre-trained [spkrec-xvect-voxceleb](https://huggingface.co/speechbrain/spkrec-xvect-voxceleb) model from SpeechBrain.

Create a function `create_speaker_embedding()` that takes an input audio waveform and outputs a 512-element vector containing the corresponding speaker embedding.

```
>>> import os
>>> import torch
>>> from speechbrain.pretrained import EncoderClassifier

>>> spk_model_name = "speechbrain/spkrec-xvect-voxceleb"

>>> device = "cuda" if torch.cuda.is_available() else "cpu"
>>> speaker_model = EncoderClassifier.from_hparams(
...     source=spk_model_name,
...     run_opts={"device": device},
...     savedir=os.path.join("/tmp", spk_model_name),
... )


>>> def create_speaker_embedding(waveform):
...     with torch.no_grad():
...         speaker_embeddings = speaker_model.encode_batch(torch.tensor(waveform))
...         speaker_embeddings = torch.nn.functional.normalize(speaker_embeddings, dim=2)
...         speaker_embeddings = speaker_embeddings.squeeze().cpu().numpy()
...     return speaker_embeddings
```

It’s important to note that the `speechbrain/spkrec-xvect-voxceleb` model was trained on English speech from the VoxCeleb dataset, whereas the training examples in this guide are in Dutch. While we believe that this model will still generate reasonable speaker embeddings for our Dutch dataset, this assumption may not hold true in all cases.

For optimal results, we recommend training an X-vector model on the target speech first. This will ensure that the model is better able to capture the unique voice characteristics present in the Dutch language.

### Processing the dataset

Finally, let’s process the data into the format the model expects. Create a `prepare_dataset` function that takes in a single example and uses the `SpeechT5Processor` object to tokenize the input text and load the target audio into a log-mel spectrogram. It should also add the speaker embeddings as an additional input.

```
>>> def prepare_dataset(example):
...     audio = example["audio"]

...     example = processor(
...         text=example["normalized_text"],
...         audio_target=audio["array"],
...         sampling_rate=audio["sampling_rate"],
...         return_attention_mask=False,
...     )

...     # strip off the batch dimension
...     example["labels"] = example["labels"][0]

...     # use SpeechBrain to obtain the x-vector speaker embedding
...     example["speaker_embeddings"] = create_speaker_embedding(audio["array"])

...     return example
```

Verify the processing is correct by looking at a single example:

```
>>> processed_example = prepare_dataset(dataset[0])
>>> list(processed_example.keys())
['input_ids', 'labels', 'stop_labels', 'speaker_embeddings']
```

Speaker embeddings should be a 512-element vector:

```
>>> processed_example["speaker_embeddings"].shape
(512,)
```

The labels should be a log-mel spectrogram with 80 mel bins.

```
>>> import matplotlib.pyplot as plt

>>> plt.figure()
>>> plt.imshow(processed_example["labels"].T)
>>> plt.show()
```

![Log-mel spectrogram with 80 mel bins](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/tts_logmelspectrogram_1.png)

Side note: If you find this spectrogram confusing, it may be due to your familiarity with the convention of placing low frequencies at the bottom and high frequencies at the top of a plot. However, when plotting spectrograms as an image with the matplotlib library, the y-axis is flipped and the spectrograms appear upside down.

Now apply the processing function to the entire dataset. This will take between 5 and 10 minutes.

```
>>> dataset = dataset.map(prepare_dataset, remove_columns=dataset.column_names)
```

You’ll see a warning saying that some examples in the dataset are longer than the maximum input length the model can handle (600 tokens). Remove those examples from the dataset. Here we go even further: to allow for larger batch sizes, we remove anything over 200 tokens.

```
>>> def is_not_too_long(input_ids):
...     input_length = len(input_ids)
...     return input_length < 200


>>> dataset = dataset.filter(is_not_too_long, input_columns=["input_ids"])
>>> len(dataset)
8259
```

Next, create a basic train/test split:

```
>>> dataset = dataset.train_test_split(test_size=0.1)
```

### Data collator

In order to combine multiple examples into a batch, you need to define a custom data collator. This collator will pad shorter sequences with padding tokens, ensuring that all examples have the same length. For the spectrogram labels, the padded portions are replaced with the special value `-100`. This special value instructs the model to ignore that part of the spectrogram when calculating the spectrogram loss.
```
>>> from dataclasses import dataclass
>>> from typing import Any, Dict, List, Union


>>> @dataclass
... class TTSDataCollatorWithPadding:
...     processor: Any

...     def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
...         input_ids = [{"input_ids": feature["input_ids"]} for feature in features]
...         label_features = [{"input_values": feature["labels"]} for feature in features]
...         speaker_features = [feature["speaker_embeddings"] for feature in features]

...         # collate the inputs and targets into a padded batch
...         batch = processor.pad(input_ids=input_ids, labels=label_features, return_tensors="pt")

...         # replace padding with -100 so it is ignored by the loss
...         batch["labels"] = batch["labels"].masked_fill(batch.decoder_attention_mask.unsqueeze(-1).ne(1), -100)

...         # not used during fine-tuning
...         del batch["decoder_attention_mask"]

...         # round down target lengths to a multiple of the reduction factor
...         if model.config.reduction_factor > 1:
...             target_lengths = torch.tensor([len(feature["input_values"]) for feature in label_features])
...             target_lengths = target_lengths.new(
...                 [length - length % model.config.reduction_factor for length in target_lengths]
...             )
...             max_length = max(target_lengths)
...             batch["labels"] = batch["labels"][:, :max_length]

...         # also add in the speaker embeddings
...         batch["speaker_embeddings"] = torch.tensor(speaker_features)

...         return batch
```

In SpeechT5, the input to the decoder part of the model is reduced by a factor 2. In other words, it throws away every other timestep from the target sequence. The decoder then predicts a sequence that is twice as long. Since the original target sequence length may be odd, the data collator makes sure to round the maximum length of the batch down to be a multiple of 2.

```
>>> data_collator = TTSDataCollatorWithPadding(processor=processor)
```

## Train the model

Load the pre-trained model from the same checkpoint as you used for loading the processor:

```
>>> from transformers import SpeechT5ForTextToSpeech

>>> model = SpeechT5ForTextToSpeech.from_pretrained(checkpoint)
```

The `use_cache=True` option is incompatible with gradient checkpointing. Disable it for training.

```
>>> model.config.use_cache = False
```

Define the training arguments. Here we are not computing any evaluation metrics during the training process. Instead, we’ll only look at the loss:

```
>>> from transformers import Seq2SeqTrainingArguments

>>> training_args = Seq2SeqTrainingArguments(
...     output_dir="speecht5_finetuned_voxpopuli_nl",
...     per_device_train_batch_size=4,
...     gradient_accumulation_steps=8,
...     learning_rate=1e-5,
...     warmup_steps=500,
...     max_steps=4000,
...     gradient_checkpointing=True,
...     fp16=True,
...     evaluation_strategy="steps",
...     per_device_eval_batch_size=2,
...     save_steps=1000,
...     eval_steps=1000,
...     logging_steps=25,
...     report_to=["tensorboard"],
...     load_best_model_at_end=True,
...     greater_is_better=False,
...     label_names=["labels"],
...     push_to_hub=True,
... )
```

Instantiate the `Trainer` object and pass the model, dataset, and data collator to it.

```
>>> from transformers import Seq2SeqTrainer

>>> trainer = Seq2SeqTrainer(
...     args=training_args,
...     model=model,
...     train_dataset=dataset["train"],
...     eval_dataset=dataset["test"],
...     data_collator=data_collator,
...     tokenizer=processor,
... )
```

And with that, you’re ready to start training! Training will take several hours. Depending on your GPU, it is possible that you will encounter a CUDA “out-of-memory” error when you start training. In this case, you can reduce the `per_device_train_batch_size` incrementally by factors of 2 and increase `gradient_accumulation_steps` by 2x to compensate.
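The call that actually launches the fine-tuning run is not shown above. Assuming the `Seq2SeqTrainer` is configured exactly as in the previous snippet, a minimal sketch of kicking off training is:

```
>>> trainer.train()
```

If the run gets interrupted, you can pass `resume_from_checkpoint=True` to `trainer.train()` to pick up from the most recent checkpoint saved in `output_dir`.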
To be able to use your checkpoint with a pipeline, make sure to save the processor with the checkpoint:

```
>>> processor.save_pretrained("YOUR_ACCOUNT_NAME/speecht5_finetuned_voxpopuli_nl")
```

Push the final model to the 🤗 Hub:

```
>>> trainer.push_to_hub()
```

## Inference

### Inference with a pipeline

Great, now that you’ve fine-tuned a model, you can use it for inference! First, let’s see how you can use it with a corresponding pipeline. Let’s create a `"text-to-speech"` pipeline with your checkpoint:

```
>>> from transformers import pipeline

>>> pipe = pipeline("text-to-speech", model="YOUR_ACCOUNT_NAME/speecht5_finetuned_voxpopuli_nl")
```

Pick a piece of text in Dutch you’d like narrated, e.g.:

```
>>> text = "hallo allemaal, ik praat nederlands. groetjes aan iedereen!"
```

To use SpeechT5 with the pipeline, you’ll need a speaker embedding. Let’s get it from an example in the test dataset:

```
>>> example = dataset["test"][304]
>>> speaker_embeddings = torch.tensor(example["speaker_embeddings"]).unsqueeze(0)
```

Now you can pass the text and speaker embeddings to the pipeline, and it will take care of the rest:

```
>>> forward_params = {"speaker_embeddings": speaker_embeddings}
>>> output = pipe(text, forward_params=forward_params)
>>> output
{'audio': array([-6.82714235e-05, -4.26525949e-04, 1.06134125e-04, ...,
        -1.22392643e-03, -7.76011671e-04, 3.29112721e-04], dtype=float32),
 'sampling_rate': 16000}
```

You can then listen to the result:

```
>>> from IPython.display import Audio

>>> Audio(output['audio'], rate=output['sampling_rate'])
```

### Run inference manually

You can achieve the same inference results without using the pipeline; however, more steps will be required.

Load the model from the 🤗 Hub:

```
>>> model = SpeechT5ForTextToSpeech.from_pretrained("YOUR_ACCOUNT/speecht5_finetuned_voxpopuli_nl")
```

Pick an example from the test dataset to obtain a speaker embedding.

```
>>> example = dataset["test"][304]
>>> speaker_embeddings = torch.tensor(example["speaker_embeddings"]).unsqueeze(0)
```

Define the input text and tokenize it.

```
>>> text = "hallo allemaal, ik praat nederlands. groetjes aan iedereen!"
>>> inputs = processor(text=text, return_tensors="pt")
```

Create a spectrogram with your model:

```
>>> spectrogram = model.generate_speech(inputs["input_ids"], speaker_embeddings)
```

Visualize the spectrogram, if you’d like to:

```
>>> plt.figure()
>>> plt.imshow(spectrogram.T)
>>> plt.show()
```

![Generated log-mel spectrogram](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/tts_logmelspectrogram_2.png)

Finally, use a vocoder to turn the spectrogram into sound. The `vocoder` object is not defined in the snippets above; you can load the matching SpeechT5 HiFi-GAN vocoder with `SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")` from 🤗 Transformers.

```
>>> with torch.no_grad():
...     speech = vocoder(spectrogram)

>>> from IPython.display import Audio

>>> Audio(speech.numpy(), rate=16000)
```

In our experience, obtaining satisfactory results from this model can be challenging. The quality of the speaker embeddings appears to be a significant factor. Since SpeechT5 was pre-trained with English x-vectors, it performs best when using English speaker embeddings. If the synthesized speech sounds poor, try using a different speaker embedding.

Increasing the training duration is also likely to enhance the quality of the results. Even so, the speech is clearly Dutch instead of English, and it does capture the voice characteristics of the speaker (compare to the original audio in the example). Another thing to experiment with is the model’s configuration.
For example, try using `config.reduction_factor = 1` to see if this improves the results.

Finally, it is essential to consider the ethical implications. Although TTS technology has numerous useful applications, it may also be used for malicious purposes, such as impersonating someone’s voice without their knowledge or consent. Please use TTS judiciously and responsibly.
Japanese&quot;,&quot;id&quot;:&quot;model_doc/gptsan-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese&quot;},{&quot;title&quot;:&quot;GPTSw3&quot;,&quot;id&quot;:&quot;model_doc/gpt-sw3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt-sw3&quot;},{&quot;title&quot;:&quot;HerBERT&quot;,&quot;id&quot;:&quot;model_doc/herbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/herbert&quot;},{&quot;title&quot;:&quot;I-BERT&quot;,&quot;id&quot;:&quot;model_doc/ibert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ibert&quot;},{&quot;title&quot;:&quot;Jukebox&quot;,&quot;id&quot;:&quot;model_doc/jukebox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/jukebox&quot;},{&quot;title&quot;:&quot;LED&quot;,&quot;id&quot;:&quot;model_doc/led&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/led&quot;},{&quot;title&quot;:&quot;LLaMA&quot;,&quot;id&quot;:&quot;model_doc/llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama&quot;},{&quot;title&quot;:&quot;Llama2&quot;,&quot;id&quot;:&quot;model_doc/llama2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama2&quot;},{&quot;title&quot;:&quot;Longformer&quot;,&quot;id&quot;:&quot;model_doc/longformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longformer&quot;},{&quot;title&quot;:&quot;LongT5&quot;,&quot;id&quot;:&quot;model_doc/longt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longt5&quot;},{&quot;title&quot;:&quot;LUKE&quot;,&quot;id&quot;:&quot;model_doc/luke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/luke&quot;},{&quot;title&quot;:&quot;M2M100&quot;,&quot;id&quot;:&quot;model_doc/m2m_100&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/m2m_100&quot;},{&quot;title&quot;:&quot;MarianMT&quot;,&quot;id&quot;:&quot;model_doc/marian&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/marian&quot;},{&quot;title&quot;:&quot;MarkupLM&quot;,&quot;id&quot;:&quot;model_doc/markuplm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/markuplm&quot;},{&quot;title&quot;:&quot;MBart and 
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot;PLBart&quot;,&quot;id&quot;
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/
model_doc/wavlm&quot;},{&quot;title&quot;:&quot;Whisper&quot;,&quot;id&quot;:&quot;model_doc/whisper&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/whisper&quot;},{&quot;title&quot;:&quot;XLS-R&quot;,&quot;id&quot;:&quot;model_doc/xls_r&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xls_r&quot;},{&quot;title&quot;:&quot;XLSR-Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/xlsr_wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2&quot;}]},{&quot;title&quot;:&quot;Multimodal models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALIGN&quot;,&quot;id&quot;:&quot;model_doc/align&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/align&quot;},{&quot;title&quot;:&quot;AltCLIP&quot;,&quot;id&quot;:&quot;model_doc/altclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/altclip&quot;},{&quot;title&quot;:&quot;BLIP&quot;,&quot;id&quot;:&quot;model_doc/blip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip&quot;},{&quot;title&quot;:&quot;BLIP-2&quot;,&quot;id&quot;:&quot;model_doc/blip-2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip-2&quot;},{&quot;title&quot;:&quot;BridgeTower&quot;,&quot;id&quot;:&quot;model_doc/bridgetower&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bridgetower&quot;},{&quot;title&quot;:&quot;BROS&quot;,&quot;id&quot;:&quot;model_doc/bros&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bros&quot;},{&quot;title&quot;:&quot;Chinese-CLIP&quot;,&quot;id&quot;:&quot;model_doc/chinese_clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/chinese_clip&quot;},{&quot;title&quot;:&quot;CLIP&quot;,&quot;id&quot;:&quot;model_doc/clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clip&quot;},{&quot;title&quot;:&quot;CLIPSeg&quot;,&quot;id&quot;:&quot;model_doc/clipseg&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clipseg&quot;},{&quot;title&quot;:&quot;Data2Vec&quot;,&quot;id&quot;:&quot;model_doc/data2vec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/data2vec&quot;},{&quot;title&quot;:&quot;DePlot&quot;,&quot;id&quot;:&quot;model_doc/deplot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deplot&quot;},{&quot;title&quot;:&quot;Donut&quot;,&quot;id&quot;:&quot;model_doc/donut&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/donut&quot;},{&quot;title&quot;:&quot;FLAVA&quot;,&quot;id&quot;:&quot;model_doc/flava&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flava&quot;},{&quot;title&quot;:&quot;GIT&quot;,&quot;id&quot;:&quot;model_doc/git&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/git&quot;},{&quot;title&quot;:&quot;GroupViT&quot;,&quot;id&quot;:&quot;model_doc/groupvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/groupvit&quot;},{&quot;title&quot;:&quot;IDEFICS&quot;,&quot;id&quot;:&quot;model_doc/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/idefics&quot;},{&quot;title&quot;:&quot;InstructBLIP&quot;,&quot;id&quot;:&quot;model_doc/instructblip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/instructblip&quot;},{&quot;title&quot;:&quot;LayoutLM&quot;,&quot;id&quot;:&quot;model_doc/layoutlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlm&quot;},{&quot;title&quot;:&quot;LayoutLMV2&quot;,&quot;i
d&quot;:&quot;model_doc/layoutlmv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv2&quot;},{&quot;title&quot;:&quot;LayoutLMV3&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv3&quot;},{&quot;title&quot;:&quot;LayoutXLM&quot;,&quot;id&quot;:&quot;model_doc/layoutxlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutxlm&quot;},{&quot;title&quot;:&quot;LiLT&quot;,&quot;id&quot;:&quot;model_doc/lilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lilt&quot;},{&quot;title&quot;:&quot;LXMERT&quot;,&quot;id&quot;:&quot;model_doc/lxmert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lxmert&quot;},{&quot;title&quot;:&quot;MatCha&quot;,&quot;id&quot;:&quot;model_doc/matcha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/matcha&quot;},{&quot;title&quot;:&quot;MGP-STR&quot;,&quot;id&quot;:&quot;model_doc/mgp-str&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mgp-str&quot;},{&quot;title&quot;:&quot;Nougat&quot;,&quot;id&quot;:&quot;model_doc/nougat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nougat&quot;},{&quot;title&quot;:&quot;OneFormer&quot;,&quot;id&quot;:&quot;model_doc/oneformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/oneformer&quot;},{&quot;title&quot;:&quot;OWL-ViT&quot;,&quot;id&quot;:&quot;model_doc/owlvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/owlvit&quot;},{&quot;title&quot;:&quot;Perceiver&quot;,&quot;id&quot;:&quot;model_doc/perceiver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/perceiver&quot;},{&quot;title&quot;:&quot;Pix2Struct&quot;,&quot;id&quot;:&quot;model_doc/pix2struct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pix2struct&quot;},{&quot;title&quot;:&quot;Segment Anything&quot;,&quot;id&quot;:&quot;model_doc/sam&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sam&quot;},{&quot;title&quot;:&quot;Speech Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/speech-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder&quot;},{&quot;title&quot;:&quot;TAPAS&quot;,&quot;id&quot;:&quot;model_doc/tapas&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapas&quot;},{&quot;title&quot;:&quot;TrOCR&quot;,&quot;id&quot;:&quot;model_doc/trocr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trocr&quot;},{&quot;title&quot;:&quot;TVLT&quot;,&quot;id&quot;:&quot;model_doc/tvlt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tvlt&quot;},{&quot;title&quot;:&quot;ViLT&quot;,&quot;id&quot;:&quot;model_doc/vilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vilt&quot;},{&quot;title&quot;:&quot;Vision Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/vision-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder&quot;},{&quot;title&quot;:&quot;Vision Text Dual 
Encoder&quot;,&quot;id&quot;:&quot;model_doc/vision-text-dual-encoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder&quot;},{&quot;title&quot;:&quot;VisualBERT&quot;,&quot;id&quot;:&quot;model_doc/visual_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/visual_bert&quot;},{&quot;title&quot;:&quot;X-CLIP&quot;,&quot;id&quot;:&quot;model_doc/xclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xclip&quot;}]},{&quot;title&quot;:&quot;Reinforcement learning models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Decision Transformer&quot;,&quot;id&quot;:&quot;model_doc/decision_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/decision_transformer&quot;},{&quot;title&quot;:&quot;Trajectory Transformer&quot;,&quot;id&quot;:&quot;model_doc/trajectory_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer&quot;}]},{&quot;title&quot;:&quot;Time series models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Autoformer&quot;,&quot;id&quot;:&quot;model_doc/autoformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/autoformer&quot;},{&quot;title&quot;:&quot;Informer&quot;,&quot;id&quot;:&quot;model_doc/informer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/informer&quot;},{&quot;title&quot;:&quot;Time Series Transformer&quot;,&quot;id&quot;:&quot;model_doc/time_series_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/time_series_transformer&quot;}]},{&quot;title&quot;:&quot;Graph models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Graphormer&quot;,&quot;id&quot;:&quot;model_doc/graphormer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/graphormer&quot;}]}]},{&quot;title&quot;:&quot;Internal Helpers&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Custom Layers and Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/modeling_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/modeling_utils&quot;},{&quot;title&quot;:&quot;Utilities for pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/pipelines_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/pipelines_utils&quot;},{&quot;title&quot;:&quot;Utilities for Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/tokenization_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/tokenization_utils&quot;},{&quot;title&quot;:&quot;Utilities for Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/trainer_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/trainer_utils&quot;},{&quot;title&quot;:&quot;Utilities for Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/generation_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/generation_utils&quot;},{&quot;title&quot;:&quot;Utilities for Image Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/image_processing_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/image_processing_utils&quot;},{&quot;title&quot;:&quot;Utilities for Audio 
processing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/audio_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/audio_utils&quot;},{&quot;title&quot;:&quot;General Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/file_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/file_utils&quot;},{&quot;title&quot;:&quot;Utilities for Time Series&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/time_series_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/time_series_utils&quot;}]}]}],&quot;chapterId&quot;:&quot;tasks/text-to-speech&quot;,&quot;docType&quot;:&quot;docs&quot;,&quot;isLoggedIn&quot;:false,&quot;lang&quot;:&quot;en&quot;,&quot;langs&quot;:[&quot;de&quot;,&quot;en&quot;,&quot;es&quot;,&quot;fr&quot;,&quot;it&quot;,&quot;ko&quot;,&quot;pt&quot;,&quot;zh&quot;],&quot;library&quot;:&quot;transformers&quot;,&quot;theme&quot;:&quot;light&quot;,&quot;version&quot;:&quot;v4.34.0&quot;,&quot;versions&quot;:[{&quot;version&quot;:&quot;main&quot;},{&quot;version&quot;:&quot;v4.34.0&quot;},{&quot;version&quot;:&quot;v4.33.3&quot;},{&quot;version&quot;:&quot;v4.33.2&quot;},{&quot;version&quot;:&quot;v4.33.0&quot;},{&quot;version&quot;:&quot;v4.32.1&quot;},{&quot;version&quot;:&quot;v4.32.0&quot;},{&quot;version&quot;:&quot;v4.31.0&quot;},{&quot;version&quot;:&quot;v4.30.0&quot;},{&quot;version&quot;:&quot;v4.29.1&quot;},{&quot;version&quot;:&quot;v4.29.0&quot;},{&quot;version&quot;:&quot;v4.28.1&quot;},{&quot;version&quot;:&quot;v4.28.0&quot;},{&quot;version&quot;:&quot;v4.27.2&quot;},{&quot;version&quot;:&quot;v4.27.1&quot;},{&quot;version&quot;:&quot;v4.27.0&quot;},{&quot;version&quot;:&quot;v4.26.1&quot;},{&quot;version&quot;:&quot;v4.26.0&quot;},{&quot;version&quot;:&quot;v4.25.1&quot;},{&quot;version&quot;:&quot;v4.24.0&quot;},{&quot;version&quot;:&quot;v4.23.1&quot;},{&quot;version&quot;:&quot;v4.23.0&quot;},{&quot;version&quot;:&quot;v4.22.2&quot;},{&quot;version&quot;:&quot;v4.22.1&quot;},{&quot;version&quot;:&quot;v4.22.0&quot;},{&quot;version&quot;:&quot;v4.21.3&quot;},{&quot;version&quot;:&quot;v4.21.2&quot;},{&quot;version&quot;:&quot;v4.21.1&quot;},{&quot;version&quot;:&quot;v4.21.0&quot;},{&quot;version&quot;:&quot;v4.20.1&quot;},{&quot;version&quot;:&quot;v4.20.0&quot;},{&quot;version&quot;:&quot;v4.19.4&quot;},{&quot;version&quot;:&quot;v4.19.3&quot;},{&quot;version&quot;:&quot;v4.19.2&quot;},{&quot;version&quot;:&quot;v4.19.0&quot;},{&quot;version&quot;:&quot;v4.18.0&quot;},{&quot;version&quot;:&quot;v4.17.0&quot;},{&quot;version&quot;:&quot;v4.16.2&quot;},{&quot;version&quot;:&quot;v4.16.1&quot;},{&quot;version&quot;:&quot;v4.16.0&quot;},{&quot;version&quot;:&quot;v4.15.0&quot;},{&quot;version&quot;:&quot;v4.14.1&quot;},{&quot;version&quot;:&quot;v4.13.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.5&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.4&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.10.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot
;v4.10.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.10.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.0.0&quot;},{&quot;version&quot;:&quot;
doc-builder-html&quot;}],&quot;title&quot;:&quot;Text to speech&quot;}" data-target="SideMenu"> <div class="z-2 w-full flex-none lg:block lg:h-screen lg:w-[270px] 2xl:w-[300px] false"><div class="shadow-alternate flex h-16 w-full items-center rounded-b-xl border-b bg-white text-lg leading-tight lg:hidden"><div class="flex flex-1 cursor-pointer flex-col justify-center self-stretch pl-6"><p class="text-sm text-gray-400 first-letter:capitalize">Transformers documentation</p> <div class="flex items-center"><p class="font-semibold">Text to speech</p> <svg class="text-xl false" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></div></div> <button class="hover:shadow-alternate group ml-auto mr-6 inline-flex flex-none cursor-pointer rounded-xl border p-2"><svg class="text-gray-500 group-hover:text-gray-700" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg></button></div> <div class="hidden h-32 flex-col justify-between border-r border-b bg-white bg-gradient-to-r p-4 lg:flex from-orange-50 to-white dark:from-gray-900 dark:to-gray-950"><div class="relative "><button class=" " type="button"><h1 class="flex items-center text-lg font-bold leading-tight first-letter:capitalize"><div class="mr-1.5 h-1.5 w-1.5 rounded-full bg-orange-500 flex-none"></div> Transformers <span><svg class="opacity-70 " xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></span></h1> </button> </div> <button class="shadow-alternate flex w-full items-center rounded-full border bg-white px-2 py-1 text-left text-sm text-gray-400 ring-indigo-200 hover:bg-indigo-50 hover:ring-2 dark:border-gray-700 dark:ring-yellow-600 dark:hover:bg-gray-900 dark:hover:text-yellow-500"><svg class="flex-none mr-1.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg> <div>Search documentation</div> <span class="ml-auto rounded border border-gray-200 bg-gray-100 px-0.5 text-xs dark:border-gray-800 dark:bg-gray-800"><kbd class="font-sans">⌘K</kbd></span></button> <div class="flex items-center"><select class="form-input mr-1 !mt-0 !w-20 rounded !border border-gray-200 p-1 text-xs uppercase dark:!text-gray-400"><option value="0">main</option><option value="1">v4.34.0</option><option value="2">v4.33.3</option><option value="3">v4.32.1</option><option value="4">v4.31.0</option><option value="5">v4.30.0</option><option value="6">v4.29.1</option><option value="7">v4.28.1</option><option value="8">v4.27.2</option><option value="9">v4.26.1</option><option value="10">v4.25.1</option><option 
value="11">v4.24.0</option><option value="12">v4.23.1</option><option value="13">v4.22.2</option><option value="14">v4.21.3</option><option value="15">v4.20.1</option><option value="16">v4.19.4</option><option value="17">v4.18.0</option><option value="18">v4.17.0</option><option value="19">v4.16.2</option><option value="20">v4.15.0</option><option value="21">v4.14.1</option><option value="22">v4.13.0</option><option value="23">v4.12.5</option><option value="24">v4.11.3</option><option value="25">v4.10.1</option><option value="26">v4.9.2</option><option value="27">v4.8.2</option><option value="28">v4.7.0</option><option value="29">v4.6.0</option><option value="30">v4.5.1</option><option value="31">v4.4.2</option><option value="32">v4.3.3</option><option value="33">v4.2.2</option><option value="34">v4.1.1</option><option value="35">v4.0.1</option><option value="36">v3.5.1</option><option value="37">v3.4.0</option><option value="38">v3.3.1</option><option value="39">v3.2.0</option><option value="40">v3.1.0</option><option value="41">v3.0.2</option><option value="42">v2.11.0</option><option value="43">v2.10.0</option><option value="44">v2.9.1</option><option value="45">v2.8.0</option><option value="46">v2.7.0</option><option value="47">v2.6.0</option><option value="48">v2.5.1</option><option value="49">v2.4.1</option><option value="50">v2.3.0</option><option value="51">v2.2.2</option><option value="52">v2.1.1</option><option value="53">v2.0.0</option><option value="54">v1.2.0</option><option value="55">v1.1.0</option><option value="56">v1.0.0</option><option value="57">doc-builder-html</option></select> <select class="form-input mr-1 rounded border-gray-200 p-1 text-xs dark:!text-gray-400 !w-12 !mt-0 !border"><option value="de">DE</option><option value="en">EN</option><option value="es">ES</option><option value="fr">FR</option><option value="it">IT</option><option value="ko">KO</option><option value="pt">PT</option><option value="zh">ZH</option></select> <div class="relative inline-block"><button class="rounded-full border border-gray-100 py-1 pl-2 pr-0.5 flex items-center text-sm text-gray-500 bg-white hover:bg-yellow-50 hover:border-yellow-200 dark:hover:bg-gray-800 dark:hover:border-gray-950 " type="button"><svg class="mr-1.5 text-yellow-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24" fill="currentColor"><path d="M6.05 4.14l-.39-.39a.993.993 0 0 0-1.4 0l-.01.01a.984.984 0 0 0 0 1.4l.39.39c.39.39 1.01.39 1.4 0l.01-.01a.984.984 0 0 0 0-1.4zM3.01 10.5H1.99c-.55 0-.99.44-.99.99v.01c0 .55.44.99.99.99H3c.56.01 1-.43 1-.98v-.01c0-.56-.44-1-.99-1zm9-9.95H12c-.56 0-1 .44-1 .99v.96c0 .55.44.99.99.99H12c.56.01 1-.43 1-.98v-.97c0-.55-.44-.99-.99-.99zm7.74 3.21c-.39-.39-1.02-.39-1.41-.01l-.39.39a.984.984 0 0 0 0 1.4l.01.01c.39.39 1.02.39 1.4 0l.39-.39a.984.984 0 0 0 0-1.4zm-1.81 15.1l.39.39a.996.996 0 1 0 1.41-1.41l-.39-.39a.993.993 0 0 0-1.4 0c-.4.4-.4 1.02-.01 1.41zM20 11.49v.01c0 .55.44.99.99.99H22c.55 0 .99-.44.99-.99v-.01c0-.55-.44-.99-.99-.99h-1.01c-.55 0-.99.44-.99.99zM12 5.5c-3.31 0-6 2.69-6 6s2.69 6 6 6s6-2.69 6-6s-2.69-6-6-6zm-.01 16.95H12c.55 0 .99-.44.99-.99v-.96c0-.55-.44-.99-.99-.99h-.01c-.55 0-.99.44-.99.99v.96c0 .55.44.99.99.99zm-7.74-3.21c.39.39 1.02.39 1.41 0l.39-.39a.993.993 0 0 0 0-1.4l-.01-.01a.996.996 0 0 0-1.41 0l-.39.39c-.38.4-.38 1.02.01 1.41z"></path></svg> </button> </div> <a 
href="https://github.com/huggingface/transformers" class="group ml-auto text-xs text-gray-500 hover:text-gray-700 hover:underline dark:hover:text-gray-300"><svg class="inline-block text-gray-500 group-hover:text-gray-700 dark:group-hover:text-gray-300 mr-1.5 -mt-1 w-4 h-4" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1.03em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 250"><path d="M128.001 0C57.317 0 0 57.307 0 128.001c0 56.554 36.676 104.535 87.535 121.46c6.397 1.185 8.746-2.777 8.746-6.158c0-3.052-.12-13.135-.174-23.83c-35.61 7.742-43.124-15.103-43.124-15.103c-5.823-14.795-14.213-18.73-14.213-18.73c-11.613-7.944.876-7.78.876-7.78c12.853.902 19.621 13.19 19.621 13.19c11.417 19.568 29.945 13.911 37.249 10.64c1.149-8.272 4.466-13.92 8.127-17.116c-28.431-3.236-58.318-14.212-58.318-63.258c0-13.975 5-25.394 13.188-34.358c-1.329-3.224-5.71-16.242 1.24-33.874c0 0 10.749-3.44 35.21 13.121c10.21-2.836 21.16-4.258 32.038-4.307c10.878.049 21.837 1.47 32.066 4.307c24.431-16.56 35.165-13.12 35.165-13.12c6.967 17.63 2.584 30.65 1.255 33.873c8.207 8.964 13.173 20.383 13.173 34.358c0 49.163-29.944 59.988-58.447 63.157c4.591 3.972 8.682 11.762 8.682 23.704c0 17.126-.148 30.91-.148 35.126c0 3.407 2.304 7.398 8.792 6.14C219.37 232.5 256 184.537 256 128.002C256 57.307 198.691 0 128.001 0zm-80.06 182.34c-.282.636-1.283.827-2.194.39c-.929-.417-1.45-1.284-1.15-1.922c.276-.655 1.279-.838 2.205-.399c.93.418 1.46 1.293 1.139 1.931zm6.296 5.618c-.61.566-1.804.303-2.614-.591c-.837-.892-.994-2.086-.375-2.66c.63-.566 1.787-.301 2.626.591c.838.903 1 2.088.363 2.66zm4.32 7.188c-.785.545-2.067.034-2.86-1.104c-.784-1.138-.784-2.503.017-3.05c.795-.547 2.058-.055 2.861 1.075c.782 1.157.782 2.522-.019 3.08zm7.304 8.325c-.701.774-2.196.566-3.29-.49c-1.119-1.032-1.43-2.496-.726-3.27c.71-.776 2.213-.558 3.315.49c1.11 1.03 1.45 2.505.701 3.27zm9.442 2.81c-.31 1.003-1.75 1.459-3.199 1.033c-1.448-.439-2.395-1.613-2.103-2.626c.301-1.01 1.747-1.484 3.207-1.028c1.446.436 2.396 1.602 2.095 2.622zm10.744 1.193c.036 1.055-1.193 1.93-2.715 1.95c-1.53.034-2.769-.82-2.786-1.86c0-1.065 1.202-1.932 2.733-1.958c1.522-.03 2.768.818 2.768 1.868zm10.555-.405c.182 1.03-.875 2.088-2.387 2.37c-1.485.271-2.861-.365-3.05-1.386c-.184-1.056.893-2.114 2.376-2.387c1.514-.263 2.868.356 3.061 1.403z" fill="currentColor"></path></svg> 112,792</a></div></div> <nav class="top-32 hidden lg:flex absolute bottom-0 left-0 w-full flex-col overflow-y-auto border-r px-4 pt-3 pb-16 text-[0.95rem] lg:w-[270px] 2xl:w-[300px]"> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Get started</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/index">🤗 Transformers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/quicktour">Quick tour </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 
first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/installation">Installation </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Tutorials</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pipeline_tutorial">Run inference with pipelines </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/autoclass_tutorial">Write portable code with AutoClass </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/preprocessing">Preprocess data </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/training">Fine-tune a pretrained model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/run_scripts">Train with a script </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/accelerate">Set up distributed training with 🤗 Accelerate </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/peft">Load and train adapters with 🤗 PEFT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/model_sharing">Share your model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/transformers_agents">Agents </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/llm_tutorial">Generation with LLMs </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Task Guides</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 
hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Natural Language Processing</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Computer Vision</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/tasks/image_captioning">Image captioning </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/tasks/document_question_answering">Document Question Answering </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/tasks/visual_question_answering">Visual Question Answering </a><a data-sveltekit-reload="" class="rounded-xl bg-gradient-to-br from-black to-gray-900 py-1 pr-2 pl-2 text-white first:mt-1 last:mb-4 dark:from-gray-800 dark:to-gray-900 ml-4" href="/docs/transformers/v4.34.0/en/tasks/text-to-speech">Text to speech </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Generation</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Prompting</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Developer 
guides</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/fast_tokenizers">Use fast tokenizers from 🤗 Tokenizers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/multilingual">Run inference with multilingual models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/create_a_model">Use model-specific APIs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/custom_models">Share a custom model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/chat_templating">Templates for chat models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/sagemaker">Run training on Amazon SageMaker </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/serialization">Export to ONNX </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tflite">Export to TFLite </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/torchscript">Export to TorchScript </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/benchmarks">Benchmarks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/notebooks">Notebooks with examples </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/community">Community resources </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/custom_tools">Custom Tools and Prompts </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/troubleshooting">Troubleshoot </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 
hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Performance and scalability</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/performance">Overview </a><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Efficient training techniques</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_gpu_one">Methods and tools for efficient training on a single GPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_gpu_many">Multiple GPUs and parallelism </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_cpu">Efficient training on CPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_cpu_many">Distributed CPU training </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_tpu">Training on TPUs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_tpu_tf">Training on TPU with TensorFlow </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_train_special">Training on Specialized Hardware </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_hardware">Custom hardware for training </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/hpo_train">Hyperparameter Search using Trainer API </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 
after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Optimizing inference</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_cpu">Inference on CPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_gpu_one">Inference on one GPU </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_gpu_many">Inference on many GPUs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/perf_infer_special">Inference on Specialized Hardware </a> </div><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/big_models">Instantiating a big model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/debugging">Troubleshooting </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tf_xla">XLA Integration for TensorFlow Models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/perf_torch_compile">Optimize inference using `torch.compile()` </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Contribute</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/contributing">How to contribute to transformers? </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/add_new_model">How to add a model to 🤗 Transformers? </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/add_tensorflow_model">How to convert a 🤗 Transformers model to TensorFlow? 
</a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/add_new_pipeline">How to add a pipeline to 🤗 Transformers? </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/testing">Testing </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pr_checks">Checks on a Pull Request </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Conceptual guides</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/philosophy">Philosophy </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/glossary">Glossary </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/task_summary">What 🤗 Transformers can do </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tasks_explained">How 🤗 Transformers solve tasks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/model_summary">The Transformer model family </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/tokenizer_summary">Summary of the tokenizers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/attention">Attention mechanisms </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pad_truncation">Padding and truncation </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/bertology">BERTology </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/perplexity">Perplexity of fixed-length models 
</a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pipeline_webserver">Pipelines for webserver inference </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/model_memory_anatomy">Model training anatomy </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>API</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Main Classes</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/agent">Agents and Tools </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/model_doc/auto">Auto Classes </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/callback">Callbacks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/configuration">Configuration </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/data_collator">Data Collator </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/keras_callbacks">Keras callbacks </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/logging">Logging </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/model">Models </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/text_generation">Text Generation </a><a data-sveltekit-reload="" 
class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/onnx">ONNX </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/optimizer_schedules">Optimization </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/output">Model outputs </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/pipelines">Pipelines </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/processors">Processors </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/quantization">Quantization </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/tokenizer">Tokenizer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/trainer">Trainer </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/deepspeed">DeepSpeed Integration </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/feature_extractor">Feature Extractor </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/main_classes/image_processor">Image Processor </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Models</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Text models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 
dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Vision models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Reinforcement learning models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Time series models</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-4"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Graph models</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Internal Helpers</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/modeling_utils">Custom Layers and Utilities </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/pipelines_utils">Utilities for pipelines </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/tokenization_utils">Utilities for Tokenizers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/trainer_utils">Utilities for Trainer </a><a 
data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/generation_utils">Utilities for Generation </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/image_processing_utils">Utilities for Image Processors </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/audio_utils">Utilities for Audio processing </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/file_utils">General Utilities </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/internal/time_series_utils">Utilities for Time Series </a> </div> </div></nav></div></div></div> <div class="z-1 min-w-0 flex-1"> <div class="px-6 pt-6 md:px-12 md:pt-16 md:pb-16"><div class="max-w-4xl mx-auto mb-10"><div class="relative overflow-hidden rounded-xl bg-gradient-to-br from-orange-300/10 py-5 px-4 ring-1 ring-orange-100/70 md:px-6 md:py-8"><img alt="Hugging Face's logo" class="absolute -right-6 -bottom-6 w-28 -rotate-45 md:hidden" src="/front/assets/huggingface_logo-noborder.svg"> <div class="mb-2 text-2xl font-bold dark:text-gray-200 md:mb-0">Join the Hugging Face community</div> <p class="mb-4 text-lg text-gray-400 dark:text-gray-300 md:mb-8">and get access to the augmented documentation experience </p> <div class="mb-8 hidden space-y-4 md:block xl:flex xl:space-y-0 xl:space-x-6"><div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-indigo-100 to-indigo-100/20 dark:to-indigo-100"><svg class="text-indigo-400 group-hover:text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Collaborate on models, datasets and Spaces </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-orange-100 to-orange-100/20 dark:to-orange-50"><svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" class="text-xl text-yellow-400" width="1em" height="1em" 
preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M11 15H6l7-14v8h5l-7 14v-8z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Faster examples with accelerated inference </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-gray-500/10 to-gray-500/5"><svg class="text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M14.9804 3C14.9217 3.0002 14.8631 3.00555 14.8054 3.016C11.622 3.58252 8.76073 5.30669 6.77248 7.85653C4.78422 10.4064 3.80955 13.6016 4.03612 16.8271C4.26268 20.0525 5.67447 23.0801 7.99967 25.327C10.3249 27.5738 13.3991 28.8811 16.6304 28.997C16.7944 29.003 16.9584 28.997 17.1204 28.997C19.2193 28.9984 21.2877 28.4943 23.1507 27.5274C25.0137 26.5605 26.6164 25.1592 27.8234 23.442C27.9212 23.294 27.9783 23.1229 27.9889 22.9458C27.9995 22.7687 27.9633 22.592 27.884 22.4333C27.8046 22.2747 27.6848 22.1397 27.5367 22.0421C27.3887 21.9444 27.2175 21.8875 27.0404 21.877C25.0426 21.7017 23.112 21.0693 21.3976 20.0288C19.6832 18.9884 18.231 17.5676 17.1533 15.8764C16.0756 14.1852 15.4011 12.2688 15.1822 10.2754C14.9632 8.28193 15.2055 6.26484 15.8904 4.38C15.9486 4.22913 15.97 4.06652 15.9527 3.90572C15.9354 3.74492 15.8799 3.59059 15.7909 3.45557C15.7019 3.32055 15.5819 3.20877 15.4409 3.12952C15.2999 3.05028 15.142 3.00587 14.9804 3Z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Switch between documentation themes </div></div></div> <div class="flex items-center space-x-2.5"><a href="/join"><button class="rounded-lg bg-white bg-gradient-to-br from-gray-100/20 to-gray-200/60 py-1.5 px-5 font-semibold text-gray-700 shadow-sm ring-1 ring-gray-300/60 hover:to-gray-100/70 hover:ring-gray-300/30 active:shadow-inner">Sign Up</button></a> <p class="text-gray-500 dark:text-gray-300">to get started</p></div></div></div> <div class="prose-doc prose relative mx-auto max-w-4xl break-words"> <p></p> <h1 class="relative group"><a id="text-to-speech" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#text-to-speech"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-19xbyp3">Text to speech</span></h1> <div class="flex space-x-1 absolute z-10 right-0 top-0"> <div class="relative colab-dropdown "><button class=" " type="button"><img alt="Open In Colab" class="!m-0" 
src="https://colab.research.google.com/assets/colab-badge.svg"></button> </div> <div class="relative colab-dropdown "><button class=" " type="button"><img alt="Open In Studio Lab" class="!m-0" src="https://studiolab.sagemaker.aws/studiolab.svg"></button> </div></div> <p data-svelte-h="svelte-l8vv5f">Text-to-speech (TTS) is the task of creating natural-sounding speech from text, where the speech can be generated in multiple languages and for multiple speakers. Several text-to-speech models are currently available in 🤗 Transformers, such as <a href="../model_doc/bark">Bark</a>, <a href="../model_doc/mms">MMS</a>, <a href="../model_doc/vits">VITS</a> and <a href="../model_doc/speecht5">SpeechT5</a>.</p> <p data-svelte-h="svelte-onhyd5">You can easily generate audio using the <code>"text-to-audio"</code> pipeline (or its alias - <code>"text-to-speech"</code>). Some models, like Bark, can also be conditioned to generate non-verbal communications such as laughing, sighing and crying, or even add music. Here’s an example of how you would use the <code>"text-to-speech"</code> pipeline with Bark:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> pipeline <span class="hljs-meta">&gt;&gt;&gt; </span>pipe = pipeline(<span class="hljs-string">"text-to-speech"</span>, model=<span class="hljs-string">"suno/bark-small"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>text = <span class="hljs-string">"[clears throat] This is a test ... 
Here's a code snippet you can use to listen to the resulting audio in a notebook:

```py
>>> from IPython.display import Audio

>>> Audio(output["audio"], rate=output["sampling_rate"])
```
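If you are running this in a plain Python script rather than a notebook, you can write the waveform to disk instead. Below is a minimal sketch using the `soundfile` library (installed further down in this guide); the output file name is arbitrary, and the array is squeezed in case the model returns a leading batch or channel dimension:

```py
>>> import numpy as np
>>> import soundfile as sf

>>> # output["audio"] is the NumPy waveform and output["sampling_rate"] its sample rate,
>>> # the same fields used with IPython.display.Audio above. "bark_output.wav" is just an example path.
>>> sf.write("bark_output.wav", np.squeeze(output["audio"]), samplerate=output["sampling_rate"])
```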
For more examples on what Bark and other pretrained TTS models can do, refer to our [Audio course](https://huggingface.co/learn/audio-course/chapter6/pre-trained_models).

If you are looking to fine-tune a TTS model, you can currently fine-tune SpeechT5 only. SpeechT5 is pre-trained on a combination of speech-to-text and text-to-speech data, allowing it to learn a unified space of hidden representations shared by both text and speech. This means that the same pre-trained model can be fine-tuned for different tasks. Furthermore, SpeechT5 supports multiple speakers through x-vector speaker embeddings.

The remainder of this guide illustrates how to:

1. Fine-tune [SpeechT5](../model_doc/speecht5), which was originally trained on English speech, on the Dutch (`nl`) language subset of the [VoxPopuli](https://huggingface.co/datasets/facebook/voxpopuli) dataset.
2. Use your refined model for inference in one of two ways: using a pipeline or directly.

Before you begin, make sure you have all the necessary libraries installed:

```bash
pip install datasets soundfile speechbrain accelerate
```

Install 🤗 Transformers from source, as not all the SpeechT5 features have been merged into an official release yet:

```bash
pip install git+https://github.com/huggingface/transformers.git
```
To follow this guide you will need a GPU. If you're working in a notebook, run the following line to check if a GPU is available:

```bash
!nvidia-smi
```

We encourage you to log in to your Hugging Face account to upload and share your model with the community. When prompted, enter your token to log in:

```py
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```
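If you are working from a terminal rather than a notebook, you can log in with the Hugging Face Hub CLI instead:

```bash
huggingface-cli login
```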
## Load the dataset

[VoxPopuli](https://huggingface.co/datasets/facebook/voxpopuli) is a large-scale multilingual speech corpus consisting of data sourced from 2009-2020 European Parliament event recordings. It contains labelled audio-transcription data for 15 European languages. In this guide, we are using the Dutch language subset, but feel free to pick another subset.

Note that VoxPopuli, or any other automated speech recognition (ASR) dataset, may not be the most suitable option for training TTS models. The features that make it beneficial for ASR, such as excessive background noise, are typically undesirable in TTS. However, finding top-quality, multilingual, and multi-speaker TTS datasets can be quite challenging.

Let's load the data:

```py
>>> from datasets import load_dataset, Audio

>>> dataset = load_dataset("facebook/voxpopuli", "nl", split="train")
>>> len(dataset)
20968
```
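Optionally, you can peek at the available columns before moving on. The preprocessing steps below rely on the `audio` and `normalized_text` columns (the dataset also provides `raw_text`, among other fields); this is just a quick sanity check, not part of the original recipe:

```py
>>> # Lists the column names; expect entries such as "audio", "raw_text" and "normalized_text"
>>> dataset.column_names
```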
20968 examples should be sufficient for fine-tuning. SpeechT5 expects audio data to have a sampling rate of 16 kHz, so make sure the examples in the dataset meet this requirement:

```py
dataset = dataset.cast_column("audio", Audio(sampling_rate=16000))
```
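As an optional sanity check (not part of the original recipe), you can confirm that the audio is now decoded at 16 kHz by inspecting a single example; the `Audio` feature resamples on access:

```py
>>> dataset[0]["audio"]["sampling_rate"]
16000
```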
## Preprocess the data

Let's begin by defining the model checkpoint to use and loading the appropriate processor:

```py
>>> from transformers import SpeechT5Processor

>>> checkpoint = "microsoft/speecht5_tts"
>>> processor = SpeechT5Processor.from_pretrained(checkpoint)
```

### Text cleanup for SpeechT5 tokenization

Start by cleaning up the text data. You'll need the tokenizer part of the processor to process the text:

```py
>>> tokenizer = processor.tokenizer
```

The dataset examples contain `raw_text` and `normalized_text` features. When deciding which feature to use as the text input, consider that the SpeechT5 tokenizer doesn't have any tokens for numbers. In `normalized_text` the numbers are written out as text, which makes it the better fit, so we recommend using `normalized_text` as the input text.
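To see the effect of the missing number tokens for yourself, you can round-trip a string containing digits through the tokenizer; characters without a vocabulary entry are mapped to the unknown token. This is only an illustrative check, not part of the original guide, and the exact decoded string depends on the tokenizer:

```py
>>> # Digits are not in the SpeechT5 vocabulary, so they end up as the unknown token
>>> ids = tokenizer("session 15 of 2020")["input_ids"]
>>> tokenizer.decode(ids)
```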
Because SpeechT5 was trained on the English language, it may not recognize certain characters in the Dutch dataset. If left as is, these characters will be converted to `<unk>` tokens. However, in Dutch, certain characters like `à` are used to stress syllables. In order to preserve the meaning of the text, we can replace this character with a regular `a`.

To identify unsupported tokens, extract all unique characters in the dataset using the `SpeechT5Tokenizer` which works with characters as tokens. To do this, write the `extract_all_chars` mapping function that concatenates the transcriptions from all examples into one string and converts it to a set of characters. Make sure to set `batched=True` and `batch_size=-1` in `dataset.map()` so that all transcriptions are available at once for the mapping function.

```py
>>> def extract_all_chars(batch):
...     all_text = " ".join(batch["normalized_text"])
...     vocab = list(set(all_text))
...     return {"vocab": [vocab], "all_text": [all_text]}


>>> vocabs = dataset.map(
...     extract_all_chars,
...     batched=True,
...     batch_size=-1,
...     keep_in_memory=True,
...     remove_columns=dataset.column_names,
... )

>>> dataset_vocab = set(vocabs["vocab"][0])
>>> tokenizer_vocab = {k for k, _ in tokenizer.get_vocab().items()}
```

Now you have two sets of characters: one with the vocabulary from the dataset and one with the vocabulary from the tokenizer. To identify any unsupported characters in the dataset, you can take the difference between these two sets. The resulting set will contain the characters that are in the dataset but not in the tokenizer.

```py
>>> dataset_vocab - tokenizer_vocab
{' ', 'à', 'ç', 'è', 'ë', 'í', 'ï', 'ö', 'ü'}
```

To handle the unsupported characters identified in the previous step, define a function that maps these characters to valid tokens. Note that spaces are already replaced by `▁` in the tokenizer and don't need to be handled separately.

```py
>>> replacements = [
...     ("à", "a"),
...     ("ç", "c"),
...     ("è", "e"),
...     ("ë", "e"),
...     ("í", "i"),
...     ("ï", "i"),
...     ("ö", "o"),
...     ("ü", "u"),
... ]


>>> def cleanup_text(inputs):
...     for src, dst in replacements:
...         inputs["normalized_text"] = inputs["normalized_text"].replace(src, dst)
...     return inputs


>>> dataset = dataset.map(cleanup_text)
```

Now that you have dealt with special characters in the text, it's time to shift focus to the audio data.

### Speakers

The VoxPopuli dataset includes speech from multiple speakers, but how many speakers are represented in the dataset? To determine this, we can count the number of unique speakers and the number of examples each speaker contributes to the dataset. With a total of 20,968 examples in the dataset, this information will give us a better understanding of the distribution of speakers and examples in the data.

```py
>>> from collections import defaultdict

>>> speaker_counts = defaultdict(int)

>>> for speaker_id in dataset["speaker_id"]:
...     speaker_counts[speaker_id] += 1
```

By plotting a histogram you can get a sense of how much data there is for each speaker.

```py
>>> import matplotlib.pyplot as plt

>>> plt.figure()
>>> plt.hist(speaker_counts.values(), bins=20)
>>> plt.ylabel("Speakers")
>>> plt.xlabel("Examples")
>>> plt.show()
```

![Speakers histogram](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/tts_speakers_histogram.png)

The histogram reveals that approximately one-third of the speakers in the dataset have fewer than 100 examples, while around ten speakers have more than 500 examples. To improve training efficiency and balance the dataset, we can limit the data to speakers with between 100 and 400 examples.

```py
>>> def select_speaker(speaker_id):
...     return 100 <= speaker_counts[speaker_id] <= 400


>>> dataset = dataset.filter(select_speaker, input_columns=["speaker_id"])
```

Let's check how many speakers remain:

```py
>>> len(set(dataset["speaker_id"]))
42
```

Let's see how many examples are left:

```py
>>> len(dataset)
9973
```

You are left with just under 10,000 examples from approximately 40 unique speakers, which should be sufficient.

Note that some speakers with few examples may actually have more audio available if the examples are long. However, determining the total amount of audio for each speaker requires scanning through the entire dataset, which is a time-consuming process that involves loading and decoding each audio file. As such, we have chosen to skip this step here.

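If you do want that number anyway, one possible sketch is to accumulate each speaker's total duration in seconds. It decodes every audio file, which is exactly the slow step mentioned above:

```py
>>> # Optional, slow sketch: total audio duration per speaker, in seconds.
>>> # Decoding every file is the expensive part noted above.
>>> from collections import defaultdict

>>> durations = defaultdict(float)
>>> for example in dataset:
...     audio = example["audio"]
...     durations[example["speaker_id"]] += len(audio["array"]) / audio["sampling_rate"]
```
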
### Speaker embeddings

To enable the TTS model to differentiate between multiple speakers, you'll need to create a speaker embedding for each example. The speaker embedding is an additional input into the model that captures a particular speaker's voice characteristics. To generate these speaker embeddings, use the pre-trained [spkrec-xvect-voxceleb](https://huggingface.co/speechbrain/spkrec-xvect-voxceleb) model from SpeechBrain.

Create a function `create_speaker_embedding()` that takes an input audio waveform and outputs a 512-element vector containing the corresponding speaker embedding.

```py
>>> import os
>>> import torch
>>> from speechbrain.pretrained import EncoderClassifier

>>> spk_model_name = "speechbrain/spkrec-xvect-voxceleb"

>>> device = "cuda" if torch.cuda.is_available() else "cpu"
>>> speaker_model = EncoderClassifier.from_hparams(
...     source=spk_model_name,
...     run_opts={"device": device},
...     savedir=os.path.join("/tmp", spk_model_name),
... )


>>> def create_speaker_embedding(waveform):
...     with torch.no_grad():
...         speaker_embeddings = speaker_model.encode_batch(torch.tensor(waveform))
...         speaker_embeddings = torch.nn.functional.normalize(speaker_embeddings, dim=2)
...         speaker_embeddings = speaker_embeddings.squeeze().cpu().numpy()
...     return speaker_embeddings
```

It's important to note that the `speechbrain/spkrec-xvect-voxceleb` model was trained on English speech from the VoxCeleb dataset, whereas the training examples in this guide are in Dutch. While we believe that this model will still generate reasonable speaker embeddings for our Dutch dataset, this assumption may not hold true in all cases.

For optimal results, we recommend training an X-vector model on the target speech first. This will ensure that the model is better able to capture the unique voice characteristics present in the Dutch language.

### Processing the dataset

Finally, let's process the data into the format the model expects. Create a `prepare_dataset` function that takes in a single example and uses the `SpeechT5Processor` object to tokenize the input text and load the target audio into a log-mel spectrogram. It should also add the speaker embeddings as an additional input.

```py
>>> def prepare_dataset(example):
...     audio = example["audio"]

...     example = processor(
...         text=example["normalized_text"],
...         audio_target=audio["array"],
...         sampling_rate=audio["sampling_rate"],
...         return_attention_mask=False,
...     )

...     # strip off the batch dimension
...     example["labels"] = example["labels"][0]

...     # use SpeechBrain to obtain x-vector
...     example["speaker_embeddings"] = create_speaker_embedding(audio["array"])

...     return example
```

Verify the processing is correct by looking at a single example:

```py
>>> processed_example = prepare_dataset(dataset[0])
>>> list(processed_example.keys())
['input_ids', 'labels', 'stop_labels', 'speaker_embeddings']
```

Speaker embeddings should be a 512-element vector:

```py
>>> processed_example["speaker_embeddings"].shape
(512,)
```

The labels should be a log-mel spectrogram with 80 mel bins.

```py
>>> import matplotlib.pyplot as plt

>>> plt.figure()
>>> plt.imshow(processed_example["labels"].T)
>>> plt.show()
```

![Log-mel spectrogram with 80 mel bins](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/tts_logmelspectrogram_1.png)

Side note: If you find this spectrogram confusing, it may be due to your familiarity with the convention of placing low frequencies at the bottom and high frequencies at the top of a plot. However, when plotting spectrograms as an image using the matplotlib library, the y-axis is flipped and the spectrograms appear upside down.

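If you prefer the usual orientation with low frequencies at the bottom, an optional tweak is to have matplotlib place the origin at the lower-left corner of the image:

```py
>>> # Optional: plot the spectrogram with low frequencies at the bottom
>>> plt.figure()
>>> plt.imshow(processed_example["labels"].T, origin="lower")
>>> plt.show()
```
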
Now apply the processing function to the entire dataset. This will take between 5 and 10 minutes.

```py
>>> dataset = dataset.map(prepare_dataset, remove_columns=dataset.column_names)
```

You'll see a warning saying that some examples in the dataset are longer than the maximum input length the model can handle (600 tokens). Remove those examples from the dataset. Here we go even further: to allow for larger batch sizes, we remove anything over 200 tokens.

```py
>>> def is_not_too_long(input_ids):
...     input_length = len(input_ids)
...     return input_length < 200


>>> dataset = dataset.filter(is_not_too_long, input_columns=["input_ids"])
>>> len(dataset)
8259
```

Next, create a basic train/test split:

```py
>>> dataset = dataset.train_test_split(test_size=0.1)
```

### Data collator

In order to combine multiple examples into a batch, you need to define a custom data collator. This collator will pad shorter sequences with padding tokens, ensuring that all examples have the same length. For the spectrogram labels, the padded portions are replaced with the special value `-100`. This special value instructs the model to ignore that part of the spectrogram when calculating the spectrogram loss.

```py
>>> from dataclasses import dataclass
>>> from typing import Any, Dict, List, Union


>>> @dataclass
... class TTSDataCollatorWithPadding:
...     processor: Any

...     def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
...         input_ids = [{"input_ids": feature["input_ids"]} for feature in features]
...         label_features = [{"input_values": feature["labels"]} for feature in features]
...         speaker_features = [feature["speaker_embeddings"] for feature in features]

...         # collate the inputs and targets into a batch
...         batch = self.processor.pad(input_ids=input_ids, labels=label_features, return_tensors="pt")

...         # replace padding with -100 to ignore loss correctly
...         batch["labels"] = batch["labels"].masked_fill(batch.decoder_attention_mask.unsqueeze(-1).ne(1), -100)

...         # not used during fine-tuning
...         del batch["decoder_attention_mask"]

...         # round down target lengths to a multiple of the reduction factor
...         # (`model` is loaded in the next section; the collator is only called during training)
...         if model.config.reduction_factor > 1:
...             target_lengths = torch.tensor([len(feature["input_values"]) for feature in label_features])
...             target_lengths = target_lengths.new(
...                 [length - length % model.config.reduction_factor for length in target_lengths]
...             )
...             max_length = max(target_lengths)
...             batch["labels"] = batch["labels"][:, :max_length]

...         # also add in the speaker embeddings
...         batch["speaker_embeddings"] = torch.tensor(speaker_features)

...         return batch
```

In SpeechT5, the input to the decoder part of the model is reduced by a factor 2. In other words, it throws away every other timestep from the target sequence. The decoder then predicts a sequence that is twice as long. Since the original target sequence length may be odd, the data collator makes sure to round the maximum length of the batch down to be a multiple of 2.

```py
>>> data_collator = TTSDataCollatorWithPadding(processor=processor)
```

## Train the model

Load the pre-trained model from the same checkpoint as you used for loading the processor:

```py
>>> from transformers import SpeechT5ForTextToSpeech

>>> model = SpeechT5ForTextToSpeech.from_pretrained(checkpoint)
```

The `use_cache=True` option is incompatible with gradient checkpointing. Disable it for training.

```py
>>> model.config.use_cache = False
```

Define the training arguments. Here we are not computing any evaluation metrics during the training process. Instead, we'll only look at the loss:

```py
>>> from transformers import Seq2SeqTrainingArguments

>>> training_args = Seq2SeqTrainingArguments(
...     output_dir="speecht5_finetuned_voxpopuli_nl",  # change to a repo name of your choice
...     per_device_train_batch_size=4,
...     gradient_accumulation_steps=8,
...     learning_rate=1e-5,
...     warmup_steps=500,
...     max_steps=4000,
...     gradient_checkpointing=True,
...     fp16=True,
...     evaluation_strategy="steps",
...     per_device_eval_batch_size=2,
...     save_steps=1000,
...     eval_steps=1000,
...     logging_steps=25,
...     report_to=["tensorboard"],
...     load_best_model_at_end=True,
...     greater_is_better=False,
...     label_names=["labels"],
...     push_to_hub=True,
... )
```

Instantiate the `Trainer` object and pass the model, dataset, and data collator to it.

```py
>>> from transformers import Seq2SeqTrainer

>>> trainer = Seq2SeqTrainer(
...     args=training_args,
...     model=model,
...     train_dataset=dataset["train"],
...     eval_dataset=dataset["test"],
...     data_collator=data_collator,
...     tokenizer=processor,
... )
```

And with that, you're ready to start training! Training will take several hours. Depending on your GPU, it is possible that you will encounter a CUDA "out-of-memory" error when you start training. In this case, you can reduce the `per_device_train_batch_size` incrementally by factors of 2 and increase `gradient_accumulation_steps` by 2x to compensate.

In this case, you can reduce the `per_device_train_batch_size` incrementally by factors of 2 and increase `gradient_accumulation_steps` by 2x to compensate.
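For example, a lower-memory variant of the arguments above could look like the sketch below. The halved batch size and doubled accumulation steps are illustrative values rather than part of the original recipe; together they keep the effective batch size at 4 × 8 = 2 × 16 = 32:

```
>>> # illustrative lower-memory settings: smaller per-device batch, more gradient accumulation
>>> training_args = Seq2SeqTrainingArguments(
...     output_dir="speecht5_finetuned_voxpopuli_nl",
...     per_device_train_batch_size=2,  # halved from 4
...     gradient_accumulation_steps=16,  # doubled from 8, so the effective batch size is unchanged
...     learning_rate=1e-5,
...     warmup_steps=500,
...     max_steps=4000,
...     gradient_checkpointing=True,
...     fp16=True,
...     evaluation_strategy="steps",
...     per_device_eval_batch_size=2,
...     save_steps=1000,
...     eval_steps=1000,
...     logging_steps=25,
...     report_to=["tensorboard"],
...     load_best_model_at_end=True,
...     greater_is_better=False,
...     label_names=["labels"],
...     push_to_hub=True,
... )
```

After re-creating the `Seq2SeqTrainer` with the adjusted arguments, start training: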
```
>>> trainer.train()
```

To be able to use your checkpoint with a pipeline, make sure to save the processor with the checkpoint:

```
>>> processor.save_pretrained("YOUR_ACCOUNT_NAME/speecht5_finetuned_voxpopuli_nl")
```

Push the final model to the 🤗 Hub:

```
>>> trainer.push_to_hub()
```

## Inference

### Inference with a pipeline

Great, now that you’ve fine-tuned a model, you can use it for inference! First, let’s see how you can use it with a corresponding pipeline.
Let’s create a `"text-to-speech"` pipeline with your checkpoint:

```
>>> from transformers import pipeline

>>> pipe = pipeline("text-to-speech", model="YOUR_ACCOUNT_NAME/speecht5_finetuned_voxpopuli_nl")
```

Pick a piece of text in Dutch you’d like narrated, e.g.:

```
>>> text = "hallo allemaal, ik praat nederlands. groetjes aan iedereen!"
```

To use SpeechT5 with the pipeline, you’ll need a speaker embedding.
Let’s get it from an example in the test dataset:

```
>>> example = dataset["test"][304]
>>> speaker_embeddings = torch.tensor(example["speaker_embeddings"]).unsqueeze(0)
```
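The index is arbitrary; any test example with a clean recording will do. If you want to sanity-check what the pipeline expects, the embedding should be a single vector per utterance. A quick check, assuming the 512-dimensional x-vector speaker embeddings prepared earlier in this guide:

```
>>> speaker_embeddings.shape  # expected to be torch.Size([1, 512]): a batch of one x-vector
```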
Now you can pass the text and speaker embeddings to the pipeline, and it will take care of the rest:

```
>>> forward_params = {"speaker_embeddings": speaker_embeddings}
>>> output = pipe(text, forward_params=forward_params)
>>> output
{'audio': array([-6.82714235e-05, -4.26525949e-04,  1.06134125e-04, ...,
        -1.22392643e-03, -7.76011671e-04,  3.29112721e-04], dtype=float32),
 'sampling_rate': 16000}
```

You can then listen to the result:

```
>>> from IPython.display import Audio

>>> Audio(output['audio'], rate=output['sampling_rate'])
```
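If you’d rather save the audio than play it inline, you can write it to disk. This is a sketch that assumes the `soundfile` library is installed (`pip install soundfile`); it is not part of the original recipe:

```
>>> import soundfile as sf

>>> # write the generated waveform to a WAV file at the pipeline's sampling rate (16 kHz here)
>>> sf.write("speech.wav", output["audio"], samplerate=output["sampling_rate"])
```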
### Run inference manually

You can achieve the same inference results without using the pipeline; however, more steps will be required.

Load the model from the 🤗 Hub:

```
>>> model = SpeechT5ForTextToSpeech.from_pretrained("YOUR_ACCOUNT/speecht5_finetuned_voxpopuli_nl")
```
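The final step below turns the predicted spectrogram into a waveform with the `vocoder` defined earlier in this guide. If you are running this part in a fresh session, you can reload it first; a sketch, assuming the standard SpeechT5 HiFi-GAN checkpoint:

```
>>> from transformers import SpeechT5HifiGan

>>> vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")
```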
Pick an example from the test dataset to obtain a speaker embedding.

```
>>> example = dataset["test"][304]
>>> speaker_embeddings = torch.tensor(example["speaker_embeddings"]).unsqueeze(0)
```

Define the input text and tokenize it.

```
>>> text = "hallo allemaal, ik praat nederlands. groetjes aan iedereen!"
>>> inputs = processor(text=text, return_tensors="pt")
```

Create a spectrogram with your model:

```
>>> spectrogram = model.generate_speech(inputs["input_ids"], speaker_embeddings)
```

Visualize the spectrogram, if you’d like to:

```
>>> plt.figure()
>>> plt.imshow(spectrogram.T)
>>> plt.show()
```
![Generated log-mel spectrogram](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/tts_logmelspectrogram_2.png)

Finally, use the vocoder to turn the spectrogram into sound.

```
>>> with torch.no_grad():
...     speech = vocoder(spectrogram)

>>> from IPython.display import Audio

>>> Audio(speech.numpy(), rate=16000)
```

In our experience, obtaining satisfactory results from this model can be challenging. The quality of the speaker embeddings appears to be a significant factor. Since SpeechT5 was pre-trained with English x-vectors, it performs best when using English speaker embeddings. If the synthesized speech sounds poor, try using a different speaker embedding.

Increasing the training duration is also likely to enhance the quality of the results. Even so, the speech is clearly Dutch instead of English, and it does capture the voice characteristics of the speaker (compare to the original audio in the example). Another thing to experiment with is the model’s configuration. For example, try using `config.reduction_factor = 1` to see if this improves the results.
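As a sketch of that last suggestion: the `reduction_factor` attribute does exist on the SpeechT5 configuration, but whether lowering it helps is something to verify experimentally. You would change the setting before fine-tuning and then rerun training, since it affects how many spectrogram frames the decoder predicts per step:

```
>>> # experiment: predict one spectrogram frame per decoder step instead of the default of 2
>>> model.config.reduction_factor = 1
```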
Finally, it is essential to consider the ethical implications. Although TTS technology has numerous useful applications, it may also be used for malicious purposes, such as impersonating someone’s voice without their knowledge or consent. Please use TTS judiciously and responsibly.
2023-10-05T13:33:56.000Z
https://huggingface.co/docs/transformers/v4.34.0/en/tasks/training#train-a-tensorflow-model-with-keras
The documentation page TASKS/TRAINING doesn’t exist in v4.34.0, but exists on the main version. Click [here](/docs/transformers/main/en/tasks/training) to redirect to the main version of the documentation.
<html><head></head><body>The documentation page TASKS/TRAINING doesn’t exist in v4.34.0, but exists on the main version. Click <a href="/docs/transformers/main/en/tasks/training">here</a> to redirect to the main version of the documentation.</body></html>
2023-10-05T13:33:56.020Z
https://huggingface.co/docs/transformers/v4.34.0/en/model_doc/sequence_classification.md
The documentation page MODEL\_DOC/SEQUENCE\_CLASSIFICATION.MD doesn’t exist in v4.34.0, but exists on the main version. Click [here](/docs/transformers/main/en/model_doc/sequence_classification.md) to redirect to the main version of the documentation.
<html><head></head><body>The documentation page MODEL_DOC/SEQUENCE_CLASSIFICATION.MD doesn’t exist in v4.34.0, but exists on the main version. Click <a href="/docs/transformers/main/en/model_doc/sequence_classification.md">here</a> to redirect to the main version of the documentation.</body></html>
2023-10-05T13:33:56.036Z
Zero-shot image classification
https://huggingface.co/docs/transformers/v4.34.0/en/tasks/zero_shot_image_classification
Zero-shot image classification is a task that involves classifying images into different categories using a model that was not explicitly trained on data containing labeled examples from those specific categories.

Traditionally, image classification requires training a model on a specific set of labeled images, and this model learns to “map” certain image features to labels. When there’s a need to use such a model for a classification task that introduces a new set of labels, fine-tuning is required to “recalibrate” the model.

In contrast, zero-shot or open vocabulary image classification models are typically multi-modal models that have been trained on a large dataset of images and associated descriptions. These models learn aligned vision-language representations that can be used for many downstream tasks, including zero-shot image classification.

This is a more flexible approach to image classification that allows models to generalize to new and unseen categories without the need for additional training data, and enables users to query images with free-form text descriptions of their target objects.

In this guide you’ll learn how to:

- create a zero-shot image classification pipeline
- run zero-shot image classification inference by hand

Before you begin, make sure you have all the necessary libraries installed:

```
pip install -q transformers
```

## Zero-shot image classification pipeline

The simplest way to try out inference with a model supporting zero-shot image classification is to use the corresponding [pipeline()](/docs/transformers/v4.34.0/en/main_classes/pipelines#transformers.pipeline). Instantiate a pipeline from a [checkpoint on the Hugging Face Hub](https://huggingface.co/models?pipeline_tag=zero-shot-image-classification&sort=downloads):

```
>>> from transformers import pipeline

>>> checkpoint = "openai/clip-vit-large-patch14"
>>> detector = pipeline(model=checkpoint, task="zero-shot-image-classification")
```

Next, choose an image you’d like to classify.

```
>>> from PIL import Image
>>> import requests

>>> url = "https://unsplash.com/photos/g8oS8-82DxI/download?ixid=MnwxMjA3fDB8MXx0b3BpY3x8SnBnNktpZGwtSGt8fHx8fDJ8fDE2NzgxMDYwODc&force=true&w=640"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image
```

![Photo of an owl](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/owl.jpg)

Pass the image and the candidate object labels to the pipeline. Here we pass the image directly; other suitable options include a local path to an image or an image URL. The candidate labels can be simple words like in this example, or more descriptive.

```
>>> predictions = detector(image, candidate_labels=["fox", "bear", "seagull", "owl"])
>>> predictions
[{'score': 0.9996670484542847, 'label': 'owl'},
 {'score': 0.000199399160919711, 'label': 'seagull'},
 {'score': 7.392891711788252e-05, 'label': 'fox'},
 {'score': 5.96074532950297e-05, 'label': 'bear'}]
```
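As a quick illustration of the “more descriptive” option, you can phrase the candidate labels as short captions. This sketch reuses the `detector` and `image` from above; the labels are ours, and the scores it returns will differ from the single-word version:

```
>>> # illustrative, more descriptive candidate labels for the same image
>>> descriptive_labels = ["a photo of an owl", "a photo of a fox", "a photo of a bear", "a photo of a seagull"]
>>> detector(image, candidate_labels=descriptive_labels)
```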
## Zero-shot image classification by hand

Now that you’ve seen how to use the zero-shot image classification pipeline, let’s take a look at how you can run zero-shot image classification manually.

Start by loading the model and associated processor from a [checkpoint on the Hugging Face Hub](https://huggingface.co/models?pipeline_tag=zero-shot-image-classification&sort=downloads). Here we’ll use the same checkpoint as before:

```
>>> from transformers import AutoProcessor, AutoModelForZeroShotImageClassification

>>> model = AutoModelForZeroShotImageClassification.from_pretrained(checkpoint)
>>> processor = AutoProcessor.from_pretrained(checkpoint)
```

Let’s take a different image to switch things up.

```
>>> from PIL import Image
>>> import requests

>>> url = "https://unsplash.com/photos/xBRQfR2bqNI/download?ixid=MnwxMjA3fDB8MXxhbGx8fHx8fHx8fHwxNjc4Mzg4ODEx&force=true&w=640"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image
```

![Photo of a car](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg)

Use the processor to prepare the inputs for the model. The processor combines an image processor that prepares the image for the model by resizing and normalizing it, and a tokenizer that takes care of the text inputs.

```
>>> candidate_labels = ["tree", "car", "bike", "cat"]
>>> inputs = processor(images=image, text=candidate_labels, return_tensors="pt", padding=True)
```

Pass the inputs through the model, and post-process the results:

```
>>> import torch

>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> logits = outputs.logits_per_image[0]
>>> probs = logits.softmax(dim=-1).numpy()
>>> scores = probs.tolist()

>>> result = [
...     {"score": score, "label": candidate_label}
...     for score, candidate_label in sorted(zip(probs, candidate_labels), key=lambda x: -x[0])
... ]

>>> result
[{'score': 0.998572, 'label': 'car'},
 {'score': 0.0010570387, 'label': 'bike'},
 {'score': 0.0003393686, 'label': 'tree'},
 {'score': 3.1572064e-05, 'label': 'cat'}]
```
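If you find yourself repeating these steps, it can be convenient to wrap them in a small helper. The function below is a sketch (the name `zero_shot_classify` is ours, not part of the Transformers API) that reproduces the manual post-processing above for any PIL image and list of labels:

```
>>> def zero_shot_classify(image, candidate_labels, model=model, processor=processor):
...     """Run CLIP-style zero-shot classification and return labels sorted by score."""
...     inputs = processor(images=image, text=candidate_labels, return_tensors="pt", padding=True)
...     with torch.no_grad():
...         outputs = model(**inputs)
...     probs = outputs.logits_per_image[0].softmax(dim=-1).tolist()
...     return sorted(
...         ({"score": score, "label": label} for score, label in zip(probs, candidate_labels)),
...         key=lambda x: -x["score"],
...     )

>>> zero_shot_classify(image, ["tree", "car", "bike", "cat"])  # same ranking as the manual steps above
```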
GPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_gpu_one&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_gpu_one&quot;},{&quot;title&quot;:&quot;Multiple GPUs and parallelism&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_gpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_gpu_many&quot;},{&quot;title&quot;:&quot;Efficient training on CPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_cpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_cpu&quot;},{&quot;title&quot;:&quot;Distributed CPU training&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_cpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_cpu_many&quot;},{&quot;title&quot;:&quot;Training on TPUs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_tpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_tpu&quot;},{&quot;title&quot;:&quot;Training on TPU with TensorFlow&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_tpu_tf&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_tpu_tf&quot;},{&quot;title&quot;:&quot;Training on Specialized Hardware&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_special&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_special&quot;},{&quot;title&quot;:&quot;Custom hardware for training&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_hardware&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_hardware&quot;},{&quot;title&quot;:&quot;Hyperparameter Search using Trainer API&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;hpo_train&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/hpo_train&quot;}]},{&quot;title&quot;:&quot;Optimizing inference&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Inference on CPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_cpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_cpu&quot;},{&quot;title&quot;:&quot;Inference on one GPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_gpu_one&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_gpu_one&quot;},{&quot;title&quot;:&quot;Inference on many GPUs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_gpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_gpu_many&quot;},{&quot;title&quot;:&quot;Inference on Specialized Hardware&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_special&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_special&quot;}]},{&quot;title&quot;:&quot;Instantiating a big model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;big_models&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/big_models&quot;},{&quot;title&quot;:&quot;Troubleshooting&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;debugging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/debugging&quot;},{&quot;title&quot;:&quot;XLA Integration for TensorFlow Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tf_xla&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tf_xla&quot;},{&quot;title&quot;:&quot;Optimize inference using 
`torch.compile()`&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_torch_compile&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_torch_compile&quot;}]},{&quot;title&quot;:&quot;Contribute&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;How to contribute to transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;contributing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/contributing&quot;},{&quot;title&quot;:&quot;How to add a model to 🤗 Transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_new_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_new_model&quot;},{&quot;title&quot;:&quot;How to convert a 🤗 Transformers model to TensorFlow?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_tensorflow_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_tensorflow_model&quot;},{&quot;title&quot;:&quot;How to add a pipeline to 🤗 Transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_new_pipeline&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_new_pipeline&quot;},{&quot;title&quot;:&quot;Testing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;testing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/testing&quot;},{&quot;title&quot;:&quot;Checks on a Pull Request&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pr_checks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pr_checks&quot;}]},{&quot;title&quot;:&quot;Conceptual guides&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Philosophy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;philosophy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/philosophy&quot;},{&quot;title&quot;:&quot;Glossary&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;glossary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/glossary&quot;},{&quot;title&quot;:&quot;What 🤗 Transformers can do&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;task_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/task_summary&quot;},{&quot;title&quot;:&quot;How 🤗 Transformers solve tasks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tasks_explained&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks_explained&quot;},{&quot;title&quot;:&quot;The Transformer model family&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_summary&quot;},{&quot;title&quot;:&quot;Summary of the tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tokenizer_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tokenizer_summary&quot;},{&quot;title&quot;:&quot;Attention mechanisms&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;attention&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/attention&quot;},{&quot;title&quot;:&quot;Padding and truncation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pad_truncation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pad_truncation&quot;},{&quot;title&quot;:&quot;BERTology&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;bertology&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/bertology&quot;},{&quot;title&quot;:&quot;Perplexity of fixed-length 
models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perplexity&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perplexity&quot;},{&quot;title&quot;:&quot;Pipelines for webserver inference&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pipeline_webserver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pipeline_webserver&quot;},{&quot;title&quot;:&quot;Model training anatomy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_memory_anatomy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_memory_anatomy&quot;}]},{&quot;title&quot;:&quot;API&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Main Classes&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Agents and Tools&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/agent&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/agent&quot;},{&quot;title&quot;:&quot;Auto Classes&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/auto&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/auto&quot;},{&quot;title&quot;:&quot;Callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/callback&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/callback&quot;},{&quot;title&quot;:&quot;Configuration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/configuration&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/configuration&quot;},{&quot;title&quot;:&quot;Data Collator&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/data_collator&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/data_collator&quot;},{&quot;title&quot;:&quot;Keras callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/keras_callbacks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/keras_callbacks&quot;},{&quot;title&quot;:&quot;Logging&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/logging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/logging&quot;},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/model&quot;},{&quot;title&quot;:&quot;Text Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/text_generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/text_generation&quot;},{&quot;title&quot;:&quot;ONNX&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/onnx&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/onnx&quot;},{&quot;title&quot;:&quot;Optimization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/optimizer_schedules&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/optimizer_schedules&quot;},{&quot;title&quot;:&quot;Model 
outputs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/output&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/output&quot;},{&quot;title&quot;:&quot;Pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/pipelines&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/pipelines&quot;},{&quot;title&quot;:&quot;Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/processors&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/processors&quot;},{&quot;title&quot;:&quot;Quantization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/quantization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/quantization&quot;},{&quot;title&quot;:&quot;Tokenizer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/tokenizer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/tokenizer&quot;},{&quot;title&quot;:&quot;Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/trainer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/trainer&quot;},{&quot;title&quot;:&quot;DeepSpeed Integration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/deepspeed&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/deepspeed&quot;},{&quot;title&quot;:&quot;Feature Extractor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/feature_extractor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/feature_extractor&quot;},{&quot;title&quot;:&quot;Image Processor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/image_processor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/image_processor&quot;}]},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Text 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALBERT&quot;,&quot;id&quot;:&quot;model_doc/albert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/albert&quot;},{&quot;title&quot;:&quot;BART&quot;,&quot;id&quot;:&quot;model_doc/bart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bart&quot;},{&quot;title&quot;:&quot;BARThez&quot;,&quot;id&quot;:&quot;model_doc/barthez&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/barthez&quot;},{&quot;title&quot;:&quot;BARTpho&quot;,&quot;id&quot;:&quot;model_doc/bartpho&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bartpho&quot;},{&quot;title&quot;:&quot;BERT&quot;,&quot;id&quot;:&quot;model_doc/bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert&quot;},{&quot;title&quot;:&quot;BertGeneration&quot;,&quot;id&quot;:&quot;model_doc/bert-generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-generation&quot;},{&quot;title&quot;:&quot;BertJapanese&quot;,&quot;id&quot;:&quot;model_doc/bert-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-japanese&quot;},{&quot;title&quot;:&quot;Bertweet&quot;,&quot;id&quot;:&quot;model_doc/bertweet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bertweet&quot;},{&quot;title&quot;:&quot;BigBird&quot;,&quot;id&quot;:&quot;model_doc/big_bird&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/big_bird&quot;},{&quot;title&quot;:&quot;BigBirdPegasus&quot;,&quot;id&quot;:&quot;model_doc/bigbird_pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bigbird_pegasus&quot;},{&quot;title&quot;:&quot;BioGpt&quot;,&quot;id&quot;:&quot;model_doc/biogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/biogpt&quot;},{&quot;title&quot;:&quot;Blenderbot&quot;,&quot;id&quot;:&quot;model_doc/blenderbot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot&quot;},{&quot;title&quot;:&quot;Blenderbot 
Small&quot;,&quot;id&quot;:&quot;model_doc/blenderbot-small&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot-small&quot;},{&quot;title&quot;:&quot;BLOOM&quot;,&quot;id&quot;:&quot;model_doc/bloom&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bloom&quot;},{&quot;title&quot;:&quot;BORT&quot;,&quot;id&quot;:&quot;model_doc/bort&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bort&quot;},{&quot;title&quot;:&quot;ByT5&quot;,&quot;id&quot;:&quot;model_doc/byt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/byt5&quot;},{&quot;title&quot;:&quot;CamemBERT&quot;,&quot;id&quot;:&quot;model_doc/camembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/camembert&quot;},{&quot;title&quot;:&quot;CANINE&quot;,&quot;id&quot;:&quot;model_doc/canine&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/canine&quot;},{&quot;title&quot;:&quot;CodeGen&quot;,&quot;id&quot;:&quot;model_doc/codegen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/codegen&quot;},{&quot;title&quot;:&quot;CodeLlama&quot;,&quot;id&quot;:&quot;model_doc/code_llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/code_llama&quot;},{&quot;title&quot;:&quot;ConvBERT&quot;,&quot;id&quot;:&quot;model_doc/convbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convbert&quot;},{&quot;title&quot;:&quot;CPM&quot;,&quot;id&quot;:&quot;model_doc/cpm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpm&quot;},{&quot;title&quot;:&quot;CPMANT&quot;,&quot;id&quot;:&quot;model_doc/cpmant&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpmant&quot;},{&quot;title&quot;:&quot;CTRL&quot;,&quot;id&quot;:&quot;model_doc/ctrl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ctrl&quot;},{&quot;title&quot;:&quot;DeBERTa&quot;,&quot;id&quot;:&quot;model_doc/deberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta&quot;},{&quot;title&quot;:&quot;DeBERTa-v2&quot;,&quot;id&quot;:&quot;model_doc/deberta-v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta-v2&quot;},{&quot;title&quot;:&quot;DialoGPT&quot;,&quot;id&quot;:&quot;model_doc/dialogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dialogpt&quot;},{&quot;title&quot;:&quot;DistilBERT&quot;,&quot;id&quot;:&quot;model_doc/distilbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/distilbert&quot;},{&quot;title&quot;:&quot;DPR&quot;,&quot;id&quot;:&quot;model_doc/dpr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpr&quot;},{&quot;title&quot;:&quot;ELECTRA&quot;,&quot;id&quot;:&quot;model_doc/electra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/electra&quot;},{&quot;title&quot;:&quot;Encoder Decoder 
Models&quot;,&quot;id&quot;:&quot;model_doc/encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encoder-decoder&quot;},{&quot;title&quot;:&quot;ERNIE&quot;,&quot;id&quot;:&quot;model_doc/ernie&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie&quot;},{&quot;title&quot;:&quot;ErnieM&quot;,&quot;id&quot;:&quot;model_doc/ernie_m&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie_m&quot;},{&quot;title&quot;:&quot;ESM&quot;,&quot;id&quot;:&quot;model_doc/esm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/esm&quot;},{&quot;title&quot;:&quot;Falcon&quot;,&quot;id&quot;:&quot;model_doc/falcon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/falcon&quot;},{&quot;title&quot;:&quot;FLAN-T5&quot;,&quot;id&quot;:&quot;model_doc/flan-t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-t5&quot;},{&quot;title&quot;:&quot;FLAN-UL2&quot;,&quot;id&quot;:&quot;model_doc/flan-ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-ul2&quot;},{&quot;title&quot;:&quot;FlauBERT&quot;,&quot;id&quot;:&quot;model_doc/flaubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flaubert&quot;},{&quot;title&quot;:&quot;FNet&quot;,&quot;id&quot;:&quot;model_doc/fnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fnet&quot;},{&quot;title&quot;:&quot;FSMT&quot;,&quot;id&quot;:&quot;model_doc/fsmt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fsmt&quot;},{&quot;title&quot;:&quot;Funnel Transformer&quot;,&quot;id&quot;:&quot;model_doc/funnel&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/funnel&quot;},{&quot;title&quot;:&quot;GPT&quot;,&quot;id&quot;:&quot;model_doc/openai-gpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/openai-gpt&quot;},{&quot;title&quot;:&quot;GPT Neo&quot;,&quot;id&quot;:&quot;model_doc/gpt_neo&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neo&quot;},{&quot;title&quot;:&quot;GPT NeoX&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox&quot;},{&quot;title&quot;:&quot;GPT NeoX Japanese&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox_japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese&quot;},{&quot;title&quot;:&quot;GPT-J&quot;,&quot;id&quot;:&quot;model_doc/gptj&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptj&quot;},{&quot;title&quot;:&quot;GPT2&quot;,&quot;id&quot;:&quot;model_doc/gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt2&quot;},{&quot;title&quot;:&quot;GPTBigCode&quot;,&quot;id&quot;:&quot;model_doc/gpt_bigcode&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode&quot;},{&quot;title&quot;:&quot;GPTSAN 
Japanese&quot;,&quot;id&quot;:&quot;model_doc/gptsan-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese&quot;},{&quot;title&quot;:&quot;GPTSw3&quot;,&quot;id&quot;:&quot;model_doc/gpt-sw3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt-sw3&quot;},{&quot;title&quot;:&quot;HerBERT&quot;,&quot;id&quot;:&quot;model_doc/herbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/herbert&quot;},{&quot;title&quot;:&quot;I-BERT&quot;,&quot;id&quot;:&quot;model_doc/ibert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ibert&quot;},{&quot;title&quot;:&quot;Jukebox&quot;,&quot;id&quot;:&quot;model_doc/jukebox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/jukebox&quot;},{&quot;title&quot;:&quot;LED&quot;,&quot;id&quot;:&quot;model_doc/led&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/led&quot;},{&quot;title&quot;:&quot;LLaMA&quot;,&quot;id&quot;:&quot;model_doc/llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama&quot;},{&quot;title&quot;:&quot;Llama2&quot;,&quot;id&quot;:&quot;model_doc/llama2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama2&quot;},{&quot;title&quot;:&quot;Longformer&quot;,&quot;id&quot;:&quot;model_doc/longformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longformer&quot;},{&quot;title&quot;:&quot;LongT5&quot;,&quot;id&quot;:&quot;model_doc/longt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longt5&quot;},{&quot;title&quot;:&quot;LUKE&quot;,&quot;id&quot;:&quot;model_doc/luke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/luke&quot;},{&quot;title&quot;:&quot;M2M100&quot;,&quot;id&quot;:&quot;model_doc/m2m_100&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/m2m_100&quot;},{&quot;title&quot;:&quot;MarianMT&quot;,&quot;id&quot;:&quot;model_doc/marian&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/marian&quot;},{&quot;title&quot;:&quot;MarkupLM&quot;,&quot;id&quot;:&quot;model_doc/markuplm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/markuplm&quot;},{&quot;title&quot;:&quot;MBart and 
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot;PLBart&quot;,&quot;id&quot;
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/
model_doc/wavlm&quot;},{&quot;title&quot;:&quot;Whisper&quot;,&quot;id&quot;:&quot;model_doc/whisper&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/whisper&quot;},{&quot;title&quot;:&quot;XLS-R&quot;,&quot;id&quot;:&quot;model_doc/xls_r&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xls_r&quot;},{&quot;title&quot;:&quot;XLSR-Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/xlsr_wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2&quot;}]},{&quot;title&quot;:&quot;Multimodal models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALIGN&quot;,&quot;id&quot;:&quot;model_doc/align&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/align&quot;},{&quot;title&quot;:&quot;AltCLIP&quot;,&quot;id&quot;:&quot;model_doc/altclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/altclip&quot;},{&quot;title&quot;:&quot;BLIP&quot;,&quot;id&quot;:&quot;model_doc/blip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip&quot;},{&quot;title&quot;:&quot;BLIP-2&quot;,&quot;id&quot;:&quot;model_doc/blip-2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip-2&quot;},{&quot;title&quot;:&quot;BridgeTower&quot;,&quot;id&quot;:&quot;model_doc/bridgetower&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bridgetower&quot;},{&quot;title&quot;:&quot;BROS&quot;,&quot;id&quot;:&quot;model_doc/bros&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bros&quot;},{&quot;title&quot;:&quot;Chinese-CLIP&quot;,&quot;id&quot;:&quot;model_doc/chinese_clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/chinese_clip&quot;},{&quot;title&quot;:&quot;CLIP&quot;,&quot;id&quot;:&quot;model_doc/clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clip&quot;},{&quot;title&quot;:&quot;CLIPSeg&quot;,&quot;id&quot;:&quot;model_doc/clipseg&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clipseg&quot;},{&quot;title&quot;:&quot;Data2Vec&quot;,&quot;id&quot;:&quot;model_doc/data2vec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/data2vec&quot;},{&quot;title&quot;:&quot;DePlot&quot;,&quot;id&quot;:&quot;model_doc/deplot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deplot&quot;},{&quot;title&quot;:&quot;Donut&quot;,&quot;id&quot;:&quot;model_doc/donut&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/donut&quot;},{&quot;title&quot;:&quot;FLAVA&quot;,&quot;id&quot;:&quot;model_doc/flava&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flava&quot;},{&quot;title&quot;:&quot;GIT&quot;,&quot;id&quot;:&quot;model_doc/git&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/git&quot;},{&quot;title&quot;:&quot;GroupViT&quot;,&quot;id&quot;:&quot;model_doc/groupvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/groupvit&quot;},{&quot;title&quot;:&quot;IDEFICS&quot;,&quot;id&quot;:&quot;model_doc/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/idefics&quot;},{&quot;title&quot;:&quot;InstructBLIP&quot;,&quot;id&quot;:&quot;model_doc/instructblip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/instructblip&quot;},{&quot;title&quot;:&quot;LayoutLM&quot;,&quot;id&quot;:&quot;model_doc/layoutlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlm&quot;},{&quot;title&quot;:&quot;LayoutLMV2&quot;,&quot;i
# Zero-shot image classification

Zero-shot image classification is a task that involves classifying images into different categories using a model that was not explicitly trained on data containing labeled examples from those specific categories.

Traditionally, image classification requires training a model on a specific set of labeled images, and this model learns to "map" certain image features to labels. When such a model needs to be used for a classification task that introduces a new set of labels, fine-tuning is required to "recalibrate" the model.

In contrast, zero-shot or open-vocabulary image classification models are typically multi-modal models that have been trained on a large dataset of images and associated descriptions. These models learn aligned vision-language representations that can be used for many downstream tasks, including zero-shot image classification.

This is a more flexible approach to image classification: it allows models to generalize to new and unseen categories without additional training data, and it enables users to query images with free-form text descriptions of their target objects.

In this guide you'll learn how to:

- create a zero-shot image classification pipeline
- run zero-shot image classification inference by hand

Before you begin, make sure you have all the necessary libraries installed:

```bash
pip install -q transformers
```
-translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class="">pip install -q transformers</pre></div> <h2 class="relative group"><a id="zeroshot-image-classification-pipeline" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#zeroshot-image-classification-pipeline"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-8vrmh1">Zero-shot image classification pipeline</span></h2> <p data-svelte-h="svelte-4lia7t">The simplest way to try out inference with a model supporting zero-shot image classification is to use the corresponding <a href="/docs/transformers/v4.34.0/en/main_classes/pipelines#transformers.pipeline">pipeline()</a>. Instantiate a pipeline from a <a href="https://huggingface.co/models?pipeline_tag=zero-shot-image-classification&amp;sort=downloads" rel="nofollow">checkpoint on the Hugging Face Hub</a>:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> pipeline <span class="hljs-meta">&gt;&gt;&gt; </span>checkpoint = <span class="hljs-string">"openai/clip-vit-large-patch14"</span> <span class="hljs-meta">&gt;&gt;&gt; </span>detector = pipeline(model=checkpoint, task=<span class="hljs-string">"zero-shot-image-classification"</span>)</pre></div> <p data-svelte-h="svelte-1de5ng0">Next, choose an image you’d like to classify.</p> <div class="code-block 
relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> PIL <span class="hljs-keyword">import</span> Image <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> requests <span class="hljs-meta">&gt;&gt;&gt; </span>url = <span class="hljs-string">"https://unsplash.com/photos/g8oS8-82DxI/download?ixid=MnwxMjA3fDB8MXx0b3BpY3x8SnBnNktpZGwtSGt8fHx8fDJ8fDE2NzgxMDYwODc&amp;force=true&amp;w=640"</span> <span class="hljs-meta">&gt;&gt;&gt; </span>image = Image.<span class="hljs-built_in">open</span>(requests.get(url, stream=<span class="hljs-literal">True</span>).raw) <span class="hljs-meta">&gt;&gt;&gt; </span>image</pre></div> <div class="flex justify-center" data-svelte-h="svelte-10yxso1"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/owl.jpg" alt="Photo of an owl"></div> <p data-svelte-h="svelte-1ufa490">Pass the image and the candidate object labels to the pipeline. Here we pass the image directly; other suitable options include a local path to an image or an image url. 
The candidate labels can be simple words like in this example, or more descriptive.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>predictions = detector(image, candidate_labels=[<span class="hljs-string">"fox"</span>, <span class="hljs-string">"bear"</span>, <span class="hljs-string">"seagull"</span>, <span class="hljs-string">"owl"</span>]) <span class="hljs-meta">&gt;&gt;&gt; </span>predictions [{<span class="hljs-string">'score'</span>: <span class="hljs-number">0.9996670484542847</span>, <span class="hljs-string">'label'</span>: <span class="hljs-string">'owl'</span>}, {<span class="hljs-string">'score'</span>: <span class="hljs-number">0.000199399160919711</span>, <span class="hljs-string">'label'</span>: <span class="hljs-string">'seagull'</span>}, {<span class="hljs-string">'score'</span>: <span class="hljs-number">7.392891711788252e-05</span>, <span class="hljs-string">'label'</span>: <span class="hljs-string">'fox'</span>}, {<span class="hljs-string">'score'</span>: <span class="hljs-number">5.96074532950297e-05</span>, <span class="hljs-string">'label'</span>: <span class="hljs-string">'bear'</span>}]</pre></div> <h2 class="relative group"><a id="zeroshot-image-classification-by-hand" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#zeroshot-image-classification-by-hand"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-wir4g3">Zero-shot image classification by hand</span></h2> <p data-svelte-h="svelte-8eyj4j">Now that you’ve seen how to use the zero-shot 
image classification pipeline, let’s take a look how you can run zero-shot image classification manually.</p> <p data-svelte-h="svelte-2afjdk">Start by loading the model and associated processor from a <a href="https://huggingface.co/models?pipeline_tag=zero-shot-image-classification&amp;sort=downloads" rel="nofollow">checkpoint on the Hugging Face Hub</a>. Here we’ll use the same checkpoint as before:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoProcessor, AutoModelForZeroShotImageClassification <span class="hljs-meta">&gt;&gt;&gt; </span>model = AutoModelForZeroShotImageClassification.from_pretrained(checkpoint) <span class="hljs-meta">&gt;&gt;&gt; </span>processor = AutoProcessor.from_pretrained(checkpoint)</pre></div> <p data-svelte-h="svelte-1g7c1zc">Let’s take a different image to switch things up.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> PIL <span class="hljs-keyword">import</span> Image <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> requests <span class="hljs-meta">&gt;&gt;&gt; 
</span>url = <span class="hljs-string">"https://unsplash.com/photos/xBRQfR2bqNI/download?ixid=MnwxMjA3fDB8MXxhbGx8fHx8fHx8fHwxNjc4Mzg4ODEx&amp;force=true&amp;w=640"</span> <span class="hljs-meta">&gt;&gt;&gt; </span>image = Image.<span class="hljs-built_in">open</span>(requests.get(url, stream=<span class="hljs-literal">True</span>).raw) <span class="hljs-meta">&gt;&gt;&gt; </span>image</pre></div> <div class="flex justify-center" data-svelte-h="svelte-1kfxibh"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg" alt="Photo of a car"></div> <p data-svelte-h="svelte-17gc4h7">Use the processor to prepare the inputs for the model. The processor combines an image processor that prepares the image for the model by resizing and normalizing it, and a tokenizer that takes care of the text inputs.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>candidate_labels = [<span class="hljs-string">"tree"</span>, <span class="hljs-string">"car"</span>, <span class="hljs-string">"bike"</span>, <span class="hljs-string">"cat"</span>] <span class="hljs-meta">&gt;&gt;&gt; </span>inputs = processor(images=image, text=candidate_labels, return_tensors=<span class="hljs-string">"pt"</span>, padding=<span class="hljs-literal">True</span>)</pre></div> <p data-svelte-h="svelte-n1saee">Pass the inputs through the model, and post-process the results:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div 
class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">with</span> torch.no_grad(): <span class="hljs-meta">... </span> outputs = model(**inputs) <span class="hljs-meta">&gt;&gt;&gt; </span>logits = outputs.logits_per_image[<span class="hljs-number">0</span>] <span class="hljs-meta">&gt;&gt;&gt; </span>probs = logits.softmax(dim=-<span class="hljs-number">1</span>).numpy() <span class="hljs-meta">&gt;&gt;&gt; </span>scores = probs.tolist() <span class="hljs-meta">&gt;&gt;&gt; </span>result = [ <span class="hljs-meta">... </span> {<span class="hljs-string">"score"</span>: score, <span class="hljs-string">"label"</span>: candidate_label} <span class="hljs-meta">... </span> <span class="hljs-keyword">for</span> score, candidate_label <span class="hljs-keyword">in</span> <span class="hljs-built_in">sorted</span>(<span class="hljs-built_in">zip</span>(probs, candidate_labels), key=<span class="hljs-keyword">lambda</span> x: -x[<span class="hljs-number">0</span>]) <span class="hljs-meta">... </span>] <span class="hljs-meta">&gt;&gt;&gt; </span>result [{<span class="hljs-string">'score'</span>: <span class="hljs-number">0.998572</span>, <span class="hljs-string">'label'</span>: <span class="hljs-string">'car'</span>}, {<span class="hljs-string">'score'</span>: <span class="hljs-number">0.0010570387</span>, <span class="hljs-string">'label'</span>: <span class="hljs-string">'bike'</span>}, {<span class="hljs-string">'score'</span>: <span class="hljs-number">0.0003393686</span>, <span class="hljs-string">'label'</span>: <span class="hljs-string">'tree'</span>}, {<span class="hljs-string">'score'</span>: <span class="hljs-number">3.1572064e-05</span>, <span class="hljs-string">'label'</span>: <span class="hljs-string">'cat'</span>}]</pre></div> <p></p> <div id="svelte-announcer" aria-live="assertive" aria-atomic="true" style="position: absolute; left: 0px; top: 0px; clip: rect(0px, 0px, 0px, 0px); clip-path: inset(50%); overflow: hidden; white-space: nowrap; width: 1px; height: 1px;"></div></div> <div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/tasks/zero_shot_object_detection" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>Zero-shot object detection</a> <a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/tasks/monocular_depth_estimation" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">Depth estimation<span class="ml-2 translate-y-px">→</span></a></div></div></div> <div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" data-props="{&quot;chapter&quot;:{&quot;title&quot;:&quot;Zero-shot image classification&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;zeroshot-image-classification&quot;,&quot;url&quot;:&quot;#zeroshot-image-classification&quot;,&quot;sections&quot;:[{&quot;title&quot;:&quot;Zero-shot image classification 
pipeline&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;zeroshot-image-classification-pipeline&quot;,&quot;url&quot;:&quot;#zeroshot-image-classification-pipeline&quot;},{&quot;title&quot;:&quot;Zero-shot image classification by hand&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;zeroshot-image-classification-by-hand&quot;,&quot;url&quot;:&quot;#zeroshot-image-classification-by-hand&quot;}]}}" data-target="SubSideMenu"><nav class="hidden h-screen w-[270px] flex-none flex-col space-y-3 overflow-y-auto break-words border-l pt-24 pl-6 pr-10 pb-16 text-sm lg:flex 2xl:w-[305px]"><a href="#zeroshot-image-classification" class=" text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-zeroshot-image-classification"><wbr>Zero-shot image classification</a> <a href="#zeroshot-image-classification-pipeline" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-zeroshot-image-classification-pipeline"><wbr>Zero-shot image classification pipeline</a> <a href="#zeroshot-image-classification-by-hand" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-zeroshot-image-classification-by-hand"><wbr>Zero-shot image classification by hand</a> </nav></div></div></div> <div id="doc-footer"></div></main> </div> <script> import("/front/build/kube-5e23f38/index.js"); window.moonSha = "kube-5e23f38/"; window.hubConfig = JSON.parse(`{"features":{"signupDisabled":false},"sshGitUrl":"git@hf.co","moonHttpUrl":"https://huggingface.co","captchaApiKey":"bd5f2066-93dc-4bdd-a64b-a24646ca3859","stripePublicKey":"pk_live_x2tdjFXBCvXo2FFmMybezpeM00J6gPCAAc","environment":"production","userAgent":"HuggingFace (production)"}`); </script> <!-- Stripe --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://js.stripe.com/v3/"; script.async = true; document.head.appendChild(script); } </script> <!-- Google analytics v4 --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL"; script.async = true; document.head.appendChild(script); window.dataLayer = window.dataLayer || []; function gtag() { if (window.dataLayer !== undefined) { window.dataLayer.push(arguments); } } gtag("js", new Date()); gtag("config", "G-8Q63TH4CSL", { page_path: "/docs/transformers/v4.34.0/en/tasks/zero_shot_image_classification" }); /// ^ See https://developers.google.com/analytics/devguides/collection/gtagjs/pages gtag("consent", "default", { ad_storage: "denied", analytics_storage: "denied" }); /// ^ See https://developers.google.com/tag-platform/gtagjs/reference#consent /// TODO: ask the user for their consent and update this with gtag('consent', 'update') } </script> <!-- Google Analytics v3 (deprecated) --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { (function (i, s, o, g, r, a, m) { i["GoogleAnalyticsObject"] = r; (i[r] = i[r] || function () { (i[r].q = i[r].q || []).push(arguments); }), (i[r].l = 1 * new Date()); (a = s.createElement(o)), (m = s.getElementsByTagName(o)[0]); a.async = 1; a.src = g; m.parentNode.insertBefore(a, m); })(window, document, "script", "https://www.google-analytics.com/analytics.js", "ganalytics"); ganalytics("create", "UA-83738774-2", "auto"); ganalytics("send", "pageview", 
"/docs/transformers/v4.34.0/en/tasks/zero_shot_image_classification"); } </script> <iframe name="__privateStripeMetricsController7120" frameborder="0" allowtransparency="true" scrolling="no" role="presentation" allow="payment *" src="https://js.stripe.com/v3/m-outer-27c67c0d52761104439bb051c7856ab1.html#url=https%3A%2F%2Fhuggingface.co%2Fdocs%2Ftransformers%2Fv4.34.0%2Fen%2Ftasks%2Fzero_shot_image_classification&amp;title=Zero-shot%20image%20classification&amp;referrer=&amp;muid=NA&amp;sid=NA&amp;version=6&amp;preview=false" aria-hidden="true" tabindex="-1" style="border: none !important; margin: 0px !important; padding: 0px !important; width: 1px !important; min-width: 100% !important; overflow: hidden !important; display: block !important; visibility: hidden !important; position: fixed !important; height: 1px !important; pointer-events: none !important; user-select: none !important;"></iframe></body></html>
2023-10-05T13:33:56.468Z
Zero-shot object detection
https://huggingface.co/docs/transformers/v4.34.0/en/tasks/zero_shot_object_detection
# Zero-shot object detection

Traditionally, models used for [object detection](object_detection) require labeled image datasets for training, and are limited to detecting the set of classes from the training data.

Zero-shot object detection is supported by the [OWL-ViT](../model_doc/owlvit) model, which uses a different approach. OWL-ViT is an open-vocabulary object detector. This means that it can detect objects in images based on free-text queries without the need to fine-tune the model on labeled datasets.

OWL-ViT leverages multi-modal representations to perform open-vocabulary detection. It combines [CLIP](../model_doc/clip) with lightweight object classification and localization heads. Open-vocabulary detection is achieved by embedding free-text queries with the text encoder of CLIP and using them as input to the object classification and localization heads. CLIP associates images with their corresponding textual descriptions, and its ViT image encoder processes image patches as inputs. The authors of OWL-ViT first trained CLIP from scratch and then fine-tuned OWL-ViT end to end on standard object detection datasets using a bipartite matching loss.

With this approach, the model can detect objects based on textual descriptions without prior training on labeled datasets.

In this guide, you will learn how to use OWL-ViT:

- to detect objects based on text prompts
- for batch object detection
- for image-guided object detection

Before you begin, make sure you have all the necessary libraries installed:

```
pip install -q transformers
```

## Zero-shot object detection pipeline

The simplest way to try out inference with OWL-ViT is to use it in a [pipeline()](/docs/transformers/v4.34.0/en/main_classes/pipelines#transformers.pipeline). Instantiate a pipeline for zero-shot object detection from a [checkpoint on the Hugging Face Hub](https://huggingface.co/models?other=owlvit):

```
>>> from transformers import pipeline

>>> checkpoint = "google/owlvit-base-patch32"
>>> detector = pipeline(model=checkpoint, task="zero-shot-object-detection")
```

Next, choose an image you’d like to detect objects in. Here we’ll use the image of astronaut Eileen Collins that is a part of the [NASA](https://www.nasa.gov/multimedia/imagegallery/index.html) Great Images dataset.

```
>>> import skimage
>>> import numpy as np
>>> from PIL import Image

>>> image = skimage.data.astronaut()
>>> image = Image.fromarray(np.uint8(image)).convert("RGB")

>>> image
```

![Astronaut Eileen Collins](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_1.png)

Pass the image and the candidate object labels to look for to the pipeline. Here we pass the image directly; other suitable options include a local path to an image or an image url. We also pass text descriptions for all items we want to query the image for.

```
>>> predictions = detector(
...     image,
...     candidate_labels=["human face", "rocket", "nasa badge", "star-spangled banner"],
... )
>>> predictions
[{'score': 0.3571370542049408, 'label': 'human face', 'box': {'xmin': 180, 'ymin': 71, 'xmax': 271, 'ymax': 178}},
 {'score': 0.28099656105041504, 'label': 'nasa badge', 'box': {'xmin': 129, 'ymin': 348, 'xmax': 206, 'ymax': 427}},
 {'score': 0.2110239565372467, 'label': 'rocket', 'box': {'xmin': 350, 'ymin': -1, 'xmax': 468, 'ymax': 288}},
 {'score': 0.13790413737297058, 'label': 'star-spangled banner', 'box': {'xmin': 1, 'ymin': 1, 'xmax': 105, 'ymax': 509}},
 {'score': 0.11950037628412247, 'label': 'nasa badge', 'box': {'xmin': 277, 'ymin': 338, 'xmax': 327, 'ymax': 380}},
 {'score': 0.10649408400058746, 'label': 'rocket', 'box': {'xmin': 358, 'ymin': 64, 'xmax': 424, 'ymax': 280}}]
```
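The pipeline returns a plain list of dictionaries, so it can be post-processed with ordinary Python. As a quick illustration (an addition to the original guide, not part of it), here is a minimal sketch that keeps only reasonably confident detections and the single best box per label; the 0.2 threshold is an arbitrary value chosen for this example:

```
>>> def best_per_label(predictions, threshold=0.2):
...     # Drop detections below the threshold, then keep the highest-scoring box per label.
...     best = {}
...     for pred in predictions:
...         if pred["score"] < threshold:
...             continue
...         label = pred["label"]
...         if label not in best or pred["score"] > best[label]["score"]:
...             best[label] = pred
...     return sorted(best.values(), key=lambda p: -p["score"])

>>> best_per_label(predictions)  # with the output above this keeps: human face, nasa badge, rocket
```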
Let’s visualize the predictions:

```
>>> from PIL import ImageDraw
>>> draw = ImageDraw.Draw(image)

>>> for prediction in predictions:
...     box = prediction["box"]
...     label = prediction["label"]
...     score = prediction["score"]
...
...     xmin, ymin, xmax, ymax = box.values()
...     draw.rectangle((xmin, ymin, xmax, ymax), outline="red", width=1)
...     draw.text((xmin, ymin), f"{label}: {round(score,2)}", fill="white")

>>> image
```

![Visualized predictions on NASA image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_2.png)

## Text-prompted zero-shot object detection by hand

Now that you’ve seen how to use the zero-shot object detection pipeline, let’s replicate the same result manually.

Start by loading the model and associated processor from a [checkpoint on the Hugging Face Hub](https://huggingface.co/models?other=owlvit). Here we’ll use the same checkpoint as before:

```
>>> from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection

>>> model = AutoModelForZeroShotObjectDetection.from_pretrained(checkpoint)
>>> processor = AutoProcessor.from_pretrained(checkpoint)
```

Let’s take a different image to switch things up.

```
>>> import requests

>>> url = "https://unsplash.com/photos/oj0zeY2Ltk4/download?ixid=MnwxMjA3fDB8MXxzZWFyY2h8MTR8fHBpY25pY3xlbnwwfHx8fDE2Nzc0OTE1NDk&force=true&w=640"
>>> im = Image.open(requests.get(url, stream=True).raw)
>>> im
```

![Beach photo](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_3.png)

Use the processor to prepare the inputs for the model. The processor combines an image processor that prepares the image for the model by resizing and normalizing it, and a [CLIPTokenizer](/docs/transformers/v4.34.0/en/model_doc/clip#transformers.CLIPTokenizer) that takes care of the text inputs.

```
>>> text_queries = ["hat", "book", "sunglasses", "camera"]
>>> inputs = processor(text=text_queries, images=im, return_tensors="pt")
```
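Before running the model, it can be helpful to peek at what the processor produced. The short sketch below (an illustrative addition, not part of the original guide) simply prints the name and shape of every tensor in `inputs`; the exact keys, typically the tokenized text queries and the pixel values, depend on the processor, so treat this as a debugging aid:

```
>>> # Print each tensor the processor returned along with its shape.
>>> for name, tensor in inputs.items():
...     print(name, tuple(tensor.shape))
```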
Pass the inputs through the model, post-process, and visualize the results. Since the image processor resized images before feeding them to the model, you need to use the [post\_process\_object\_detection()](/docs/transformers/v4.34.0/en/model_doc/owlvit#transformers.OwlViTImageProcessor.post_process_object_detection) method to make sure the predicted bounding boxes have the correct coordinates relative to the original image:

```
>>> import torch

>>> with torch.no_grad():
...     outputs = model(**inputs)
...     target_sizes = torch.tensor([im.size[::-1]])
...     results = processor.post_process_object_detection(outputs, threshold=0.1, target_sizes=target_sizes)[0]

>>> draw = ImageDraw.Draw(im)

>>> scores = results["scores"].tolist()
>>> labels = results["labels"].tolist()
>>> boxes = results["boxes"].tolist()

>>> for box, score, label in zip(boxes, scores, labels):
...     xmin, ymin, xmax, ymax = box
...     draw.rectangle((xmin, ymin, xmax, ymax), outline="red", width=1)
...     draw.text((xmin, ymin), f"{text_queries[label]}: {round(score,2)}", fill="white")

>>> im
```

![Beach photo with detected objects](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_4.png)

## Batch processing

You can pass multiple sets of images and text queries to search for different (or same) objects in several images. Let’s use both an astronaut image and the beach image together. For batch processing, you should pass text queries as a nested list to the processor and images as lists of PIL images, PyTorch tensors, or NumPy arrays.

```
>>> images = [image, im]
>>> text_queries = [
...     ["human face", "rocket", "nasa badge", "star-spangled banner"],
...     ["hat", "book", "sunglasses", "camera"],
... ]
>>> inputs = processor(text=text_queries, images=images, return_tensors="pt")
```

Previously for post-processing you passed the single image’s size as a tensor, but you can also pass a tuple, or, in case of several images, a list of tuples. Let’s create predictions for the two examples, and visualize the second one (`image_idx = 1`).

```
>>> with torch.no_grad():
...     outputs = model(**inputs)
...     target_sizes = [x.size[::-1] for x in images]
...     results = processor.post_process_object_detection(outputs, threshold=0.1, target_sizes=target_sizes)

>>> image_idx = 1
>>> draw = ImageDraw.Draw(images[image_idx])

>>> scores = results[image_idx]["scores"].tolist()
>>> labels = results[image_idx]["labels"].tolist()
>>> boxes = results[image_idx]["boxes"].tolist()

>>> for box, score, label in zip(boxes, scores, labels):
...     xmin, ymin, xmax, ymax = box
...     draw.rectangle((xmin, ymin, xmax, ymax), outline="red", width=1)
...     draw.text((xmin, ymin), f"{text_queries[image_idx][label]}: {round(score,2)}", fill="white")

>>> images[image_idx]
```

![Beach photo with detected objects](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_4.png)
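Since `results` now holds one entry per image, you can loop over it to get a quick overview of the whole batch. The following sketch (an illustrative addition, not part of the original guide) prints how many detections passed the threshold for each image and the text query with the highest score:

```
>>> # Summarize the batched results: detection count and best-matching query per image.
>>> for idx, image_results in enumerate(results):
...     scores = image_results["scores"].tolist()
...     labels = image_results["labels"].tolist()
...     if not scores:
...         print(f"image {idx}: no detections above the threshold")
...         continue
...     best = max(range(len(scores)), key=lambda i: scores[i])
...     print(f"image {idx}: {len(scores)} detections, best match: {text_queries[idx][labels[best]]} ({scores[best]:.2f})")
```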
## Image-guided object detection

In addition to zero-shot object detection with text queries, OWL-ViT offers image-guided object detection. This means you can use an image query to find similar objects in the target image. Unlike text queries, only a single example image is allowed.

Let’s take an image with two cats on a couch as a target image, and an image of a single cat as a query:

```
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image_target = Image.open(requests.get(url, stream=True).raw)

>>> query_url = "http://images.cocodataset.org/val2017/000000524280.jpg"
>>> query_image = Image.open(requests.get(query_url, stream=True).raw)
```

Let’s take a quick look at the images:

```
>>> import matplotlib.pyplot as plt

>>> fig, ax = plt.subplots(1, 2)
>>> ax[0].imshow(image_target)
>>> ax[1].imshow(query_image)
```

![Cats](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_5.png)

In the preprocessing step, instead of text queries, you now need to use `query_images`:

```
>>> inputs = processor(images=image_target, query_images=query_image, return_tensors="pt")
```

For predictions, instead of passing the inputs to the model, pass them to [image\_guided\_detection()](/docs/transformers/v4.34.0/en/model_doc/owlvit#transformers.OwlViTForObjectDetection.image_guided_detection). Draw the predictions as before, except now there are no labels.

```
>>> with torch.no_grad():
...     outputs = model.image_guided_detection(**inputs)
...     target_sizes = torch.tensor([image_target.size[::-1]])
...     results = processor.post_process_image_guided_detection(outputs=outputs, target_sizes=target_sizes)[0]

>>> draw = ImageDraw.Draw(image_target)

>>> scores = results["scores"].tolist()
>>> boxes = results["boxes"].tolist()

>>> for box, score in zip(boxes, scores):
...     xmin, ymin, xmax, ymax = box
...     draw.rectangle((xmin, ymin, xmax, ymax), outline="white", width=4)

>>> image_target
```

![Cats with bounding boxes](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_6.png)

If you’d like to interactively try out inference with OWL-ViT, check out the OWL-ViT demo Space on the Hugging Face Hub.
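To make further experimentation easier, here is a minimal sketch (an illustrative addition, not part of the original guide) that bundles the text-prompted detection steps from above into a single helper. It reuses the `model`, `processor`, and `torch` import already loaded in this guide, and the `threshold` default of 0.1 is just the same illustrative value used earlier:

```
>>> def detect(image, queries, threshold=0.1):
...     # Prepare the text queries and the image, run the model, and rescale the
...     # predicted boxes back to the original image size.
...     inputs = processor(text=queries, images=image, return_tensors="pt")
...     with torch.no_grad():
...         outputs = model(**inputs)
...     target_sizes = torch.tensor([image.size[::-1]])
...     results = processor.post_process_object_detection(outputs, threshold=threshold, target_sizes=target_sizes)[0]
...     return [
...         {"score": score.item(), "label": queries[int(label)], "box": box.tolist()}
...         for score, label, box in zip(results["scores"], results["labels"], results["boxes"])
...     ]

>>> detections = detect(im, ["hat", "book", "sunglasses", "camera"])
```

Each returned entry contains a score, the matched text query, and the box coordinates in `(xmin, ymin, xmax, ymax)` order, so the detections can be drawn with the same `ImageDraw` code used earlier in this guide.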
mechanisms&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;attention&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/attention&quot;},{&quot;title&quot;:&quot;Padding and truncation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pad_truncation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pad_truncation&quot;},{&quot;title&quot;:&quot;BERTology&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;bertology&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/bertology&quot;},{&quot;title&quot;:&quot;Perplexity of fixed-length models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perplexity&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perplexity&quot;},{&quot;title&quot;:&quot;Pipelines for webserver inference&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pipeline_webserver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pipeline_webserver&quot;},{&quot;title&quot;:&quot;Model training anatomy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_memory_anatomy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_memory_anatomy&quot;}]},{&quot;title&quot;:&quot;API&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Main Classes&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Agents and Tools&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/agent&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/agent&quot;},{&quot;title&quot;:&quot;Auto Classes&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/auto&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/auto&quot;},{&quot;title&quot;:&quot;Callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/callback&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/callback&quot;},{&quot;title&quot;:&quot;Configuration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/configuration&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/configuration&quot;},{&quot;title&quot;:&quot;Data Collator&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/data_collator&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/data_collator&quot;},{&quot;title&quot;:&quot;Keras callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/keras_callbacks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/keras_callbacks&quot;},{&quot;title&quot;:&quot;Logging&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/logging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/logging&quot;},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/model&quot;},{&quot;title&quot;:&quot;Text 
Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/text_generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/text_generation&quot;},{&quot;title&quot;:&quot;ONNX&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/onnx&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/onnx&quot;},{&quot;title&quot;:&quot;Optimization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/optimizer_schedules&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/optimizer_schedules&quot;},{&quot;title&quot;:&quot;Model outputs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/output&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/output&quot;},{&quot;title&quot;:&quot;Pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/pipelines&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/pipelines&quot;},{&quot;title&quot;:&quot;Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/processors&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/processors&quot;},{&quot;title&quot;:&quot;Quantization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/quantization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/quantization&quot;},{&quot;title&quot;:&quot;Tokenizer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/tokenizer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/tokenizer&quot;},{&quot;title&quot;:&quot;Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/trainer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/trainer&quot;},{&quot;title&quot;:&quot;DeepSpeed Integration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/deepspeed&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/deepspeed&quot;},{&quot;title&quot;:&quot;Feature Extractor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/feature_extractor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/feature_extractor&quot;},{&quot;title&quot;:&quot;Image Processor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/image_processor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/image_processor&quot;}]},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Text 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALBERT&quot;,&quot;id&quot;:&quot;model_doc/albert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/albert&quot;},{&quot;title&quot;:&quot;BART&quot;,&quot;id&quot;:&quot;model_doc/bart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bart&quot;},{&quot;title&quot;:&quot;BARThez&quot;,&quot;id&quot;:&quot;model_doc/barthez&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/barthez&quot;},{&quot;title&quot;:&quot;BARTpho&quot;,&quot;id&quot;:&quot;model_doc/bartpho&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bartpho&quot;},{&quot;title&quot;:&quot;BERT&quot;,&quot;id&quot;:&quot;model_doc/bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert&quot;},{&quot;title&quot;:&quot;BertGeneration&quot;,&quot;id&quot;:&quot;model_doc/bert-generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-generation&quot;},{&quot;title&quot;:&quot;BertJapanese&quot;,&quot;id&quot;:&quot;model_doc/bert-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-japanese&quot;},{&quot;title&quot;:&quot;Bertweet&quot;,&quot;id&quot;:&quot;model_doc/bertweet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bertweet&quot;},{&quot;title&quot;:&quot;BigBird&quot;,&quot;id&quot;:&quot;model_doc/big_bird&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/big_bird&quot;},{&quot;title&quot;:&quot;BigBirdPegasus&quot;,&quot;id&quot;:&quot;model_doc/bigbird_pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bigbird_pegasus&quot;},{&quot;title&quot;:&quot;BioGpt&quot;,&quot;id&quot;:&quot;model_doc/biogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/biogpt&quot;},{&quot;title&quot;:&quot;Blenderbot&quot;,&quot;id&quot;:&quot;model_doc/blenderbot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot&quot;},{&quot;title&quot;:&quot;Blenderbot 
Small&quot;,&quot;id&quot;:&quot;model_doc/blenderbot-small&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot-small&quot;},{&quot;title&quot;:&quot;BLOOM&quot;,&quot;id&quot;:&quot;model_doc/bloom&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bloom&quot;},{&quot;title&quot;:&quot;BORT&quot;,&quot;id&quot;:&quot;model_doc/bort&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bort&quot;},{&quot;title&quot;:&quot;ByT5&quot;,&quot;id&quot;:&quot;model_doc/byt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/byt5&quot;},{&quot;title&quot;:&quot;CamemBERT&quot;,&quot;id&quot;:&quot;model_doc/camembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/camembert&quot;},{&quot;title&quot;:&quot;CANINE&quot;,&quot;id&quot;:&quot;model_doc/canine&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/canine&quot;},{&quot;title&quot;:&quot;CodeGen&quot;,&quot;id&quot;:&quot;model_doc/codegen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/codegen&quot;},{&quot;title&quot;:&quot;CodeLlama&quot;,&quot;id&quot;:&quot;model_doc/code_llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/code_llama&quot;},{&quot;title&quot;:&quot;ConvBERT&quot;,&quot;id&quot;:&quot;model_doc/convbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convbert&quot;},{&quot;title&quot;:&quot;CPM&quot;,&quot;id&quot;:&quot;model_doc/cpm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpm&quot;},{&quot;title&quot;:&quot;CPMANT&quot;,&quot;id&quot;:&quot;model_doc/cpmant&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpmant&quot;},{&quot;title&quot;:&quot;CTRL&quot;,&quot;id&quot;:&quot;model_doc/ctrl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ctrl&quot;},{&quot;title&quot;:&quot;DeBERTa&quot;,&quot;id&quot;:&quot;model_doc/deberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta&quot;},{&quot;title&quot;:&quot;DeBERTa-v2&quot;,&quot;id&quot;:&quot;model_doc/deberta-v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta-v2&quot;},{&quot;title&quot;:&quot;DialoGPT&quot;,&quot;id&quot;:&quot;model_doc/dialogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dialogpt&quot;},{&quot;title&quot;:&quot;DistilBERT&quot;,&quot;id&quot;:&quot;model_doc/distilbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/distilbert&quot;},{&quot;title&quot;:&quot;DPR&quot;,&quot;id&quot;:&quot;model_doc/dpr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpr&quot;},{&quot;title&quot;:&quot;ELECTRA&quot;,&quot;id&quot;:&quot;model_doc/electra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/electra&quot;},{&quot;title&quot;:&quot;Encoder Decoder 
Models&quot;,&quot;id&quot;:&quot;model_doc/encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encoder-decoder&quot;},{&quot;title&quot;:&quot;ERNIE&quot;,&quot;id&quot;:&quot;model_doc/ernie&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie&quot;},{&quot;title&quot;:&quot;ErnieM&quot;,&quot;id&quot;:&quot;model_doc/ernie_m&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie_m&quot;},{&quot;title&quot;:&quot;ESM&quot;,&quot;id&quot;:&quot;model_doc/esm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/esm&quot;},{&quot;title&quot;:&quot;Falcon&quot;,&quot;id&quot;:&quot;model_doc/falcon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/falcon&quot;},{&quot;title&quot;:&quot;FLAN-T5&quot;,&quot;id&quot;:&quot;model_doc/flan-t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-t5&quot;},{&quot;title&quot;:&quot;FLAN-UL2&quot;,&quot;id&quot;:&quot;model_doc/flan-ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-ul2&quot;},{&quot;title&quot;:&quot;FlauBERT&quot;,&quot;id&quot;:&quot;model_doc/flaubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flaubert&quot;},{&quot;title&quot;:&quot;FNet&quot;,&quot;id&quot;:&quot;model_doc/fnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fnet&quot;},{&quot;title&quot;:&quot;FSMT&quot;,&quot;id&quot;:&quot;model_doc/fsmt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fsmt&quot;},{&quot;title&quot;:&quot;Funnel Transformer&quot;,&quot;id&quot;:&quot;model_doc/funnel&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/funnel&quot;},{&quot;title&quot;:&quot;GPT&quot;,&quot;id&quot;:&quot;model_doc/openai-gpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/openai-gpt&quot;},{&quot;title&quot;:&quot;GPT Neo&quot;,&quot;id&quot;:&quot;model_doc/gpt_neo&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neo&quot;},{&quot;title&quot;:&quot;GPT NeoX&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox&quot;},{&quot;title&quot;:&quot;GPT NeoX Japanese&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox_japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese&quot;},{&quot;title&quot;:&quot;GPT-J&quot;,&quot;id&quot;:&quot;model_doc/gptj&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptj&quot;},{&quot;title&quot;:&quot;GPT2&quot;,&quot;id&quot;:&quot;model_doc/gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt2&quot;},{&quot;title&quot;:&quot;GPTBigCode&quot;,&quot;id&quot;:&quot;model_doc/gpt_bigcode&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode&quot;},{&quot;title&quot;:&quot;GPTSAN 
Japanese&quot;,&quot;id&quot;:&quot;model_doc/gptsan-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese&quot;},{&quot;title&quot;:&quot;GPTSw3&quot;,&quot;id&quot;:&quot;model_doc/gpt-sw3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt-sw3&quot;},{&quot;title&quot;:&quot;HerBERT&quot;,&quot;id&quot;:&quot;model_doc/herbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/herbert&quot;},{&quot;title&quot;:&quot;I-BERT&quot;,&quot;id&quot;:&quot;model_doc/ibert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ibert&quot;},{&quot;title&quot;:&quot;Jukebox&quot;,&quot;id&quot;:&quot;model_doc/jukebox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/jukebox&quot;},{&quot;title&quot;:&quot;LED&quot;,&quot;id&quot;:&quot;model_doc/led&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/led&quot;},{&quot;title&quot;:&quot;LLaMA&quot;,&quot;id&quot;:&quot;model_doc/llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama&quot;},{&quot;title&quot;:&quot;Llama2&quot;,&quot;id&quot;:&quot;model_doc/llama2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama2&quot;},{&quot;title&quot;:&quot;Longformer&quot;,&quot;id&quot;:&quot;model_doc/longformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longformer&quot;},{&quot;title&quot;:&quot;LongT5&quot;,&quot;id&quot;:&quot;model_doc/longt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longt5&quot;},{&quot;title&quot;:&quot;LUKE&quot;,&quot;id&quot;:&quot;model_doc/luke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/luke&quot;},{&quot;title&quot;:&quot;M2M100&quot;,&quot;id&quot;:&quot;model_doc/m2m_100&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/m2m_100&quot;},{&quot;title&quot;:&quot;MarianMT&quot;,&quot;id&quot;:&quot;model_doc/marian&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/marian&quot;},{&quot;title&quot;:&quot;MarkupLM&quot;,&quot;id&quot;:&quot;model_doc/markuplm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/markuplm&quot;},{&quot;title&quot;:&quot;MBart and 
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot;PLBart&quot;,&quot;id&quot;
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/
model_doc/wavlm&quot;},{&quot;title&quot;:&quot;Whisper&quot;,&quot;id&quot;:&quot;model_doc/whisper&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/whisper&quot;},{&quot;title&quot;:&quot;XLS-R&quot;,&quot;id&quot;:&quot;model_doc/xls_r&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xls_r&quot;},{&quot;title&quot;:&quot;XLSR-Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/xlsr_wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2&quot;}]},{&quot;title&quot;:&quot;Multimodal models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALIGN&quot;,&quot;id&quot;:&quot;model_doc/align&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/align&quot;},{&quot;title&quot;:&quot;AltCLIP&quot;,&quot;id&quot;:&quot;model_doc/altclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/altclip&quot;},{&quot;title&quot;:&quot;BLIP&quot;,&quot;id&quot;:&quot;model_doc/blip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip&quot;},{&quot;title&quot;:&quot;BLIP-2&quot;,&quot;id&quot;:&quot;model_doc/blip-2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip-2&quot;},{&quot;title&quot;:&quot;BridgeTower&quot;,&quot;id&quot;:&quot;model_doc/bridgetower&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bridgetower&quot;},{&quot;title&quot;:&quot;BROS&quot;,&quot;id&quot;:&quot;model_doc/bros&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bros&quot;},{&quot;title&quot;:&quot;Chinese-CLIP&quot;,&quot;id&quot;:&quot;model_doc/chinese_clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/chinese_clip&quot;},{&quot;title&quot;:&quot;CLIP&quot;,&quot;id&quot;:&quot;model_doc/clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clip&quot;},{&quot;title&quot;:&quot;CLIPSeg&quot;,&quot;id&quot;:&quot;model_doc/clipseg&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clipseg&quot;},{&quot;title&quot;:&quot;Data2Vec&quot;,&quot;id&quot;:&quot;model_doc/data2vec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/data2vec&quot;},{&quot;title&quot;:&quot;DePlot&quot;,&quot;id&quot;:&quot;model_doc/deplot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deplot&quot;},{&quot;title&quot;:&quot;Donut&quot;,&quot;id&quot;:&quot;model_doc/donut&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/donut&quot;},{&quot;title&quot;:&quot;FLAVA&quot;,&quot;id&quot;:&quot;model_doc/flava&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flava&quot;},{&quot;title&quot;:&quot;GIT&quot;,&quot;id&quot;:&quot;model_doc/git&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/git&quot;},{&quot;title&quot;:&quot;GroupViT&quot;,&quot;id&quot;:&quot;model_doc/groupvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/groupvit&quot;},{&quot;title&quot;:&quot;IDEFICS&quot;,&quot;id&quot;:&quot;model_doc/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/idefics&quot;},{&quot;title&quot;:&quot;InstructBLIP&quot;,&quot;id&quot;:&quot;model_doc/instructblip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/instructblip&quot;},{&quot;title&quot;:&quot;LayoutLM&quot;,&quot;id&quot;:&quot;model_doc/layoutlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlm&quot;},{&quot;title&quot;:&quot;LayoutLMV2&quot;,&quot;i
d&quot;:&quot;model_doc/layoutlmv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv2&quot;},{&quot;title&quot;:&quot;LayoutLMV3&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv3&quot;},{&quot;title&quot;:&quot;LayoutXLM&quot;,&quot;id&quot;:&quot;model_doc/layoutxlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutxlm&quot;},{&quot;title&quot;:&quot;LiLT&quot;,&quot;id&quot;:&quot;model_doc/lilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lilt&quot;},{&quot;title&quot;:&quot;LXMERT&quot;,&quot;id&quot;:&quot;model_doc/lxmert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lxmert&quot;},{&quot;title&quot;:&quot;MatCha&quot;,&quot;id&quot;:&quot;model_doc/matcha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/matcha&quot;},{&quot;title&quot;:&quot;MGP-STR&quot;,&quot;id&quot;:&quot;model_doc/mgp-str&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mgp-str&quot;},{&quot;title&quot;:&quot;Nougat&quot;,&quot;id&quot;:&quot;model_doc/nougat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nougat&quot;},{&quot;title&quot;:&quot;OneFormer&quot;,&quot;id&quot;:&quot;model_doc/oneformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/oneformer&quot;},{&quot;title&quot;:&quot;OWL-ViT&quot;,&quot;id&quot;:&quot;model_doc/owlvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/owlvit&quot;},{&quot;title&quot;:&quot;Perceiver&quot;,&quot;id&quot;:&quot;model_doc/perceiver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/perceiver&quot;},{&quot;title&quot;:&quot;Pix2Struct&quot;,&quot;id&quot;:&quot;model_doc/pix2struct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pix2struct&quot;},{&quot;title&quot;:&quot;Segment Anything&quot;,&quot;id&quot;:&quot;model_doc/sam&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sam&quot;},{&quot;title&quot;:&quot;Speech Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/speech-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder&quot;},{&quot;title&quot;:&quot;TAPAS&quot;,&quot;id&quot;:&quot;model_doc/tapas&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapas&quot;},{&quot;title&quot;:&quot;TrOCR&quot;,&quot;id&quot;:&quot;model_doc/trocr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trocr&quot;},{&quot;title&quot;:&quot;TVLT&quot;,&quot;id&quot;:&quot;model_doc/tvlt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tvlt&quot;},{&quot;title&quot;:&quot;ViLT&quot;,&quot;id&quot;:&quot;model_doc/vilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vilt&quot;},{&quot;title&quot;:&quot;Vision Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/vision-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder&quot;},{&quot;title&quot;:&quot;Vision Text Dual 
Encoder&quot;,&quot;id&quot;:&quot;model_doc/vision-text-dual-encoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder&quot;},{&quot;title&quot;:&quot;VisualBERT&quot;,&quot;id&quot;:&quot;model_doc/visual_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/visual_bert&quot;},{&quot;title&quot;:&quot;X-CLIP&quot;,&quot;id&quot;:&quot;model_doc/xclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xclip&quot;}]},{&quot;title&quot;:&quot;Reinforcement learning models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Decision Transformer&quot;,&quot;id&quot;:&quot;model_doc/decision_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/decision_transformer&quot;},{&quot;title&quot;:&quot;Trajectory Transformer&quot;,&quot;id&quot;:&quot;model_doc/trajectory_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer&quot;}]},{&quot;title&quot;:&quot;Time series models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Autoformer&quot;,&quot;id&quot;:&quot;model_doc/autoformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/autoformer&quot;},{&quot;title&quot;:&quot;Informer&quot;,&quot;id&quot;:&quot;model_doc/informer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/informer&quot;},{&quot;title&quot;:&quot;Time Series Transformer&quot;,&quot;id&quot;:&quot;model_doc/time_series_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/time_series_transformer&quot;}]},{&quot;title&quot;:&quot;Graph models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Graphormer&quot;,&quot;id&quot;:&quot;model_doc/graphormer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/graphormer&quot;}]}]},{&quot;title&quot;:&quot;Internal Helpers&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Custom Layers and Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/modeling_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/modeling_utils&quot;},{&quot;title&quot;:&quot;Utilities for pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/pipelines_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/pipelines_utils&quot;},{&quot;title&quot;:&quot;Utilities for Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/tokenization_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/tokenization_utils&quot;},{&quot;title&quot;:&quot;Utilities for Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/trainer_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/trainer_utils&quot;},{&quot;title&quot;:&quot;Utilities for Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/generation_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/generation_utils&quot;},{&quot;title&quot;:&quot;Utilities for Image Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/image_processing_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/image_processing_utils&quot;},{&quot;title&quot;:&quot;Utilities for Audio 
processing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/audio_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/audio_utils&quot;},{&quot;title&quot;:&quot;General Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/file_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/file_utils&quot;},{&quot;title&quot;:&quot;Utilities for Time Series&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/time_series_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/time_series_utils&quot;}]}]}],&quot;chapterId&quot;:&quot;tasks/zero_shot_object_detection&quot;,&quot;docType&quot;:&quot;docs&quot;,&quot;isLoggedIn&quot;:false,&quot;lang&quot;:&quot;en&quot;,&quot;langs&quot;:[&quot;de&quot;,&quot;en&quot;,&quot;es&quot;,&quot;fr&quot;,&quot;it&quot;,&quot;ko&quot;,&quot;pt&quot;,&quot;zh&quot;],&quot;library&quot;:&quot;transformers&quot;,&quot;theme&quot;:&quot;light&quot;,&quot;version&quot;:&quot;v4.34.0&quot;,&quot;versions&quot;:[{&quot;version&quot;:&quot;main&quot;},{&quot;version&quot;:&quot;v4.34.0&quot;},{&quot;version&quot;:&quot;v4.33.3&quot;},{&quot;version&quot;:&quot;v4.33.2&quot;},{&quot;version&quot;:&quot;v4.33.0&quot;},{&quot;version&quot;:&quot;v4.32.1&quot;},{&quot;version&quot;:&quot;v4.32.0&quot;},{&quot;version&quot;:&quot;v4.31.0&quot;},{&quot;version&quot;:&quot;v4.30.0&quot;},{&quot;version&quot;:&quot;v4.29.1&quot;},{&quot;version&quot;:&quot;v4.29.0&quot;},{&quot;version&quot;:&quot;v4.28.1&quot;},{&quot;version&quot;:&quot;v4.28.0&quot;},{&quot;version&quot;:&quot;v4.27.2&quot;},{&quot;version&quot;:&quot;v4.27.1&quot;},{&quot;version&quot;:&quot;v4.27.0&quot;},{&quot;version&quot;:&quot;v4.26.1&quot;},{&quot;version&quot;:&quot;v4.26.0&quot;},{&quot;version&quot;:&quot;v4.25.1&quot;},{&quot;version&quot;:&quot;v4.24.0&quot;},{&quot;version&quot;:&quot;v4.23.1&quot;},{&quot;version&quot;:&quot;v4.23.0&quot;},{&quot;version&quot;:&quot;v4.22.2&quot;},{&quot;version&quot;:&quot;v4.22.1&quot;},{&quot;version&quot;:&quot;v4.22.0&quot;},{&quot;version&quot;:&quot;v4.21.3&quot;},{&quot;version&quot;:&quot;v4.21.2&quot;},{&quot;version&quot;:&quot;v4.21.1&quot;},{&quot;version&quot;:&quot;v4.21.0&quot;},{&quot;version&quot;:&quot;v4.20.1&quot;},{&quot;version&quot;:&quot;v4.20.0&quot;},{&quot;version&quot;:&quot;v4.19.4&quot;},{&quot;version&quot;:&quot;v4.19.3&quot;},{&quot;version&quot;:&quot;v4.19.2&quot;},{&quot;version&quot;:&quot;v4.19.0&quot;},{&quot;version&quot;:&quot;v4.18.0&quot;},{&quot;version&quot;:&quot;v4.17.0&quot;},{&quot;version&quot;:&quot;v4.16.2&quot;},{&quot;version&quot;:&quot;v4.16.1&quot;},{&quot;version&quot;:&quot;v4.16.0&quot;},{&quot;version&quot;:&quot;v4.15.0&quot;},{&quot;version&quot;:&quot;v4.14.1&quot;},{&quot;version&quot;:&quot;v4.13.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.5&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.4&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.10.1&quot;},{&quot;sphinx&quot;:true,&quot;version
# Zero-shot object detection

Traditionally, models used for [object detection](object_detection) require labeled image datasets for training, and are limited to detecting the set of classes from the training data.

Zero-shot object detection is supported by the [OWL-ViT](../model_doc/owlvit) model, which uses a different approach. OWL-ViT is an open-vocabulary object detector, meaning it can detect objects in images based on free-text queries without the need to fine-tune the model on labeled datasets.

OWL-ViT leverages multi-modal representations to perform open-vocabulary detection. It combines [CLIP](../model_doc/clip) with lightweight object classification and localization heads. Open-vocabulary detection is achieved by embedding free-text queries with the text encoder of CLIP and using them as input to the object classification and localization heads: CLIP learns to associate images with their corresponding textual descriptions, while the ViT backbone processes image patches as inputs. The authors of OWL-ViT first trained CLIP from scratch and then fine-tuned OWL-ViT end-to-end on standard object detection datasets using a bipartite matching loss.
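To make this concrete, below is a minimal sketch of querying OWL-ViT directly with its processor and detection head; this is roughly what the pipeline used later in this guide does under the hood. The checkpoint, image URL, text queries, and the 0.1 threshold are illustrative choices, not prescribed by this guide.

```py
>>> # Sketch only: text queries are embedded by CLIP's text encoder, image patches by the ViT backbone,
>>> # and the classification/localization heads turn them into per-query scores and boxes.
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import OwlViTProcessor, OwlViTForObjectDetection

>>> processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
>>> model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # example image
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> inputs = processor(text=[["a photo of a cat", "a photo of a remote control"]], images=image, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # Convert raw logits and boxes into detections above a confidence threshold.
>>> target_sizes = torch.tensor([image.size[::-1]])
>>> results = processor.post_process_object_detection(outputs, threshold=0.1, target_sizes=target_sizes)
>>> results[0]["scores"], results[0]["labels"], results[0]["boxes"]
```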
With this approach, the model can detect objects based on textual descriptions without prior training on labeled datasets.

In this guide, you will learn how to use OWL-ViT:

- to detect objects based on text prompts
- for batch object detection
- for image-guided object detection

Before you begin, make sure you have all the necessary libraries installed:

```bash
pip install -q transformers
```
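The code examples below also use scikit-image, NumPy, and Pillow for loading and drawing on images. If they are not already present in your environment, they can be installed the same way (package names as published on PyPI):

```bash
pip install -q scikit-image numpy pillow
```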
## Zero-shot object detection pipeline

The simplest way to try out inference with OWL-ViT is to use it in a [pipeline()](/docs/transformers/v4.34.0/en/main_classes/pipelines#transformers.pipeline). Instantiate a pipeline for zero-shot object detection from a [checkpoint on the Hugging Face Hub](https://huggingface.co/models?other=owlvit):

```py
>>> from transformers import pipeline

>>> checkpoint = "google/owlvit-base-patch32"
>>> detector = pipeline(model=checkpoint, task="zero-shot-object-detection")
```
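If you have a GPU available, you can optionally place the pipeline on it. The `device=0` argument below selects the first CUDA device; this is just one possible setup and is not required for the rest of the guide:

```py
>>> detector = pipeline(model=checkpoint, task="zero-shot-object-detection", device=0)  # optional: run on the first GPU
```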
Next, choose an image you'd like to detect objects in. Here we'll use the image of astronaut Eileen Collins that is a part of the [NASA](https://www.nasa.gov/multimedia/imagegallery/index.html) Great Images dataset.

```py
>>> import skimage
>>> import numpy as np
>>> from PIL import Image

>>> image = skimage.data.astronaut()
>>> image = Image.fromarray(np.uint8(image)).convert("RGB")

>>> image
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_1.png" alt="Astronaut Eileen Collins"/>
</div>

Pass the image and the candidate object labels to look for to the pipeline. Here we pass the image directly; other suitable options include a local path to an image or an image URL.
```
predictions = detector(
    image,
    candidate_labels=["human face", "rocket", "nasa badge", "star-spangled banner"],
)
predictions
```

```
[{'score': 0.3571370542049408, 'label': 'human face', 'box': {'xmin': 180, 'ymin': 71, 'xmax': 271, 'ymax': 178}},
 {'score': 0.28099656105041504, 'label': 'nasa badge', 'box': {'xmin': 129, 'ymin': 348, 'xmax': 206, 'ymax': 427}},
 {'score': 0.2110239565372467, 'label': 'rocket', 'box': {'xmin': 350, 'ymin': -1, 'xmax': 468, 'ymax': 288}},
 {'score': 0.13790413737297058, 'label': 'star-spangled banner', 'box': {'xmin': 1, 'ymin': 1, 'xmax': 105, 'ymax': 509}},
 {'score': 0.11950037628412247, 'label': 'nasa badge', 'box': {'xmin': 277, 'ymin': 338, 'xmax': 327, 'ymax': 380}},
 {'score': 0.10649408400058746, 'label': 'rocket', 'box': {'xmin': 358, 'ymin': 64, 'xmax': 424, 'ymax': 280}}]
```
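As mentioned above, instead of a PIL image you can also hand the pipeline a local file path or an image URL. A small sketch, where the COCO image URL and the candidate labels are only illustrative:

```
# the detector accepts an image URL (or a local path) in place of a PIL image
url_predictions = detector(
    "http://images.cocodataset.org/val2017/000000039769.jpg",
    candidate_labels=["cat", "remote control"],
)
```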
Let's visualize the predictions:

```
from PIL import ImageDraw

draw = ImageDraw.Draw(image)

for prediction in predictions:
    box = prediction["box"]
    label = prediction["label"]
    score = prediction["score"]

    xmin, ymin, xmax, ymax = box.values()
    draw.rectangle((xmin, ymin, xmax, ymax), outline="red", width=1)
    draw.text((xmin, ymin), f"{label}: {round(score,2)}", fill="white")

image
```

![Visualized predictions on NASA image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_2.png)

## Text-prompted zero-shot object detection by hand

Now that you've seen how to use the zero-shot object detection pipeline, let's replicate the same result manually.

Start by loading the model and associated processor from a [checkpoint on the Hugging Face Hub](https://huggingface.co/models?other=owlvit).
Here we'll use the same checkpoint as before:

```
from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection

model = AutoModelForZeroShotObjectDetection.from_pretrained(checkpoint)
processor = AutoProcessor.from_pretrained(checkpoint)
```

Let's take a different image to switch things up.

```
import requests

url = "https://unsplash.com/photos/oj0zeY2Ltk4/download?ixid=MnwxMjA3fDB8MXxzZWFyY2h8MTR8fHBpY25pY3xlbnwwfHx8fDE2Nzc0OTE1NDk&force=true&w=640"
im = Image.open(requests.get(url, stream=True).raw)
im
```

![Beach photo](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_3.png)
Use the processor to prepare the inputs for the model. The processor combines an image processor that prepares the image for the model by resizing and normalizing it, and a [CLIPTokenizer](/docs/transformers/v4.34.0/en/model_doc/clip#transformers.CLIPTokenizer) that takes care of the text inputs.

```
text_queries = ["hat", "book", "sunglasses", "camera"]
inputs = processor(text=text_queries, images=im, return_tensors="pt")
```
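To see what the processor produced, you can inspect `inputs`; a quick check, not part of the original guide, with the exact shapes depending on the checkpoint:

```
# for OWL-ViT the processor typically returns input_ids, attention_mask and pixel_values
print({name: tuple(tensor.shape) for name, tensor in inputs.items()})
```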
Pass the inputs through the model, post-process, and visualize the results. Since the image processor resized images before feeding them to the model, you need to use the [post_process_object_detection()](/docs/transformers/v4.34.0/en/model_doc/owlvit#transformers.OwlViTImageProcessor.post_process_object_detection) method to make sure the predicted bounding boxes have the correct coordinates relative to the original image:

```
import torch

with torch.no_grad():
    outputs = model(**inputs)
    target_sizes = torch.tensor([im.size[::-1]])
    results = processor.post_process_object_detection(outputs, threshold=0.1, target_sizes=target_sizes)[0]

draw = ImageDraw.Draw(im)

scores = results["scores"].tolist()
labels = results["labels"].tolist()
boxes = results["boxes"].tolist()

for box, score, label in zip(boxes, scores, labels):
    xmin, ymin, xmax, ymax = box
    draw.rectangle((xmin, ymin, xmax, ymax), outline="red", width=1)
    draw.text((xmin, ymin), f"{text_queries[label]}: {round(score,2)}", fill="white")

im
```

![Beach photo with detected objects](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_4.png)
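The `threshold` argument controls how confident a detection must be to be kept. A small sketch reusing the `outputs` from above, where the 0.3 value is only illustrative:

```
# raising the threshold keeps only higher-confidence detections
results_strict = processor.post_process_object_detection(
    outputs, threshold=0.3, target_sizes=target_sizes
)[0]
print(len(results_strict["boxes"]), "boxes at threshold 0.3 vs", len(results["boxes"]), "at 0.1")
```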
</span> draw.text((xmin, ymin), <span class="hljs-string">f"<span class="hljs-subst">{text_queries[label]}</span>: <span class="hljs-subst">{<span class="hljs-built_in">round</span>(score,<span class="hljs-number">2</span>)}</span>"</span>, fill=<span class="hljs-string">"white"</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>im</pre></div> <div class="flex justify-center" data-svelte-h="svelte-1m863ar"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_4.png" alt="Beach photo with detected objects"></div> <h2 class="relative group"><a id="batch-processing" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#batch-processing"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1iabhzo">Batch processing</span></h2> <p data-svelte-h="svelte-16j89af">You can pass multiple sets of images and text queries to search for different (or same) objects in several images. Let’s use both an astronaut image and the beach image together. For batch processing, you should pass text queries as a nested list to the processor and images as lists of PIL images, PyTorch tensors, or NumPy arrays.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>images = [image, im] <span class="hljs-meta">&gt;&gt;&gt; </span>text_queries = [ <span class="hljs-meta">... 
Previously for post-processing you passed the single image's size as a tensor, but you can also pass a tuple, or, in case of several images, a list of tuples. Let's create predictions for the two examples, and visualize the second one (`image_idx = 1`).

```
with torch.no_grad():
    outputs = model(**inputs)
    target_sizes = [x.size[::-1] for x in images]
    results = processor.post_process_object_detection(outputs, threshold=0.1, target_sizes=target_sizes)

image_idx = 1
draw = ImageDraw.Draw(images[image_idx])

scores = results[image_idx]["scores"].tolist()
labels = results[image_idx]["labels"].tolist()
boxes = results[image_idx]["boxes"].tolist()

for box, score, label in zip(boxes, scores, labels):
    xmin, ymin, xmax, ymax = box
    draw.rectangle((xmin, ymin, xmax, ymax), outline="red", width=1)
    draw.text((xmin, ymin), f"{text_queries[image_idx][label]}: {round(score,2)}", fill="white")

images[image_idx]
```

![Beach photo with detected objects](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_4.png)
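Since `results` now holds one entry per input image, you can also loop over it, for example to count detections per image. A small sketch, not part of the original guide:

```
# one dictionary of scores/labels/boxes per input image
for idx, result in enumerate(results):
    print(f"image {idx}: {len(result['scores'])} detections above the 0.1 threshold")
```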
## Image-guided object detection

In addition to zero-shot object detection with text queries, OWL-ViT offers image-guided object detection. This means you can use an image query to find similar objects in the target image. Unlike text queries, only a single example image is allowed.
Let's take an image with two cats on a couch as a target image, and an image of a single cat as a query:

```
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image_target = Image.open(requests.get(url, stream=True).raw)

query_url = "http://images.cocodataset.org/val2017/000000524280.jpg"
query_image = Image.open(requests.get(query_url, stream=True).raw)
```

Let's take a quick look at the images:

```
import matplotlib.pyplot as plt

fig, ax = plt.subplots(1, 2)
ax[0].imshow(image_target)
ax[1].imshow(query_image)
```

![Cats](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_5.png)
In the preprocessing step, instead of text queries, you now need to use `query_images`:

```
inputs = processor(images=image_target, query_images=query_image, return_tensors="pt")
```
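A quick look at what the processor returns in this mode; a sanity-check sketch, with the key names stated as an expectation rather than a guarantee:

```
# for image-guided detection the processor typically returns query_pixel_values
# alongside pixel_values
print(list(inputs.keys()))
```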
For predictions, instead of passing the inputs to the model, pass them to [image_guided_detection()](/docs/transformers/v4.34.0/en/model_doc/owlvit#transformers.OwlViTForObjectDetection.image_guided_detection). Draw the predictions as before, except now there are no labels.

```
with torch.no_grad():
    outputs = model.image_guided_detection(**inputs)
    target_sizes = torch.tensor([image_target.size[::-1]])
    results = processor.post_process_image_guided_detection(outputs=outputs, target_sizes=target_sizes)[0]

draw = ImageDraw.Draw(image_target)

scores = results["scores"].tolist()
boxes = results["boxes"].tolist()

for box, score in zip(boxes, scores):
    xmin, ymin, xmax, ymax = box
    draw.rectangle((xmin, ymin, xmax, ymax), outline="white", width=4)

image_target
```
![Cats with bounding boxes](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_6.png)

If you'd like to interactively try out inference with OWL-ViT, check out this [demo](https://adirik-owl-vit.hf.space).
Image captioning
https://huggingface.co/docs/transformers/v4.34.0/en/tasks/image_captioning
# Image captioning

Image captioning is the task of predicting a caption for a given image. A common real-world application is aiding visually impaired people, helping them navigate different situations. Image captioning therefore helps to improve content accessibility by describing images to people.

This guide will show you how to:

- Fine-tune an image captioning model.
- Use the fine-tuned model for inference.

Before you begin, make sure you have all the necessary libraries installed:

```
pip install transformers datasets evaluate -q
pip install jiwer -q
```

We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:

```
from huggingface_hub import notebook_login

notebook_login()
```

## Load the Pokémon BLIP captions dataset

Use the 🤗 Datasets library to load a dataset that consists of {image-caption} pairs. To create your own image captioning dataset in PyTorch, you can follow [this notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/GIT/Fine_tune_GIT_on_an_image_captioning_dataset.ipynb).

```
from datasets import load_dataset

ds = load_dataset("lambdalabs/pokemon-blip-captions")
ds
```

```
DatasetDict({
    train: Dataset({
        features: ['image', 'text'],
        num_rows: 833
    })
})
```

The dataset has two features, `image` and `text`.

Many image captioning datasets contain multiple captions per image. In those cases, a common strategy is to randomly sample a caption amongst the available ones during training.

Split the dataset's train split into a train and test set with the `train_test_split` method:

```
ds = ds["train"].train_test_split(test_size=0.1)
train_ds = ds["train"]
test_ds = ds["test"]
```

Let's visualize a couple of samples from the training set.

```
from textwrap import wrap
import matplotlib.pyplot as plt
import numpy as np


def plot_images(images, captions):
    plt.figure(figsize=(20, 20))
    for i in range(len(images)):
        ax = plt.subplot(1, len(images), i + 1)
        caption = captions[i]
        caption = "\n".join(wrap(caption, 12))
        plt.title(caption)
        plt.imshow(images[i])
        plt.axis("off")


sample_images_to_visualize = [np.array(train_ds[i]["image"]) for i in range(5)]
sample_captions = [train_ds[i]["text"] for i in range(5)]
plot_images(sample_images_to_visualize, sample_captions)
```

![Sample training images](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/sample_training_images_image_cap.png)

## Preprocess the dataset

Since the dataset has two modalities (image and text), the pre-processing pipeline will preprocess both the images and the captions.

To do so, load the processor class associated with the model you are about to fine-tune.

```
from transformers import AutoProcessor

checkpoint = "microsoft/git-base"
processor = AutoProcessor.from_pretrained(checkpoint)
```

The processor will internally pre-process the image (which includes resizing and pixel scaling) and tokenize the caption.

```
def transforms(example_batch):
    images = [x for x in example_batch["image"]]
    captions = [x for x in example_batch["text"]]
    inputs = processor(images=images, text=captions, padding="max_length")
    inputs.update({"labels": inputs["input_ids"]})
    return inputs


train_ds.set_transform(transforms)
test_ds.set_transform(transforms)
```

With the dataset ready, you can now set up the model for fine-tuning.
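Because the transform runs on the fly, indexing an example applies it immediately. A quick sanity check, not part of the original guide; the keys listed in the comment are what the GIT processor is expected to return:

```
# each transformed example should contain pixel_values, input_ids, attention_mask and labels
example = train_ds[0]
print({key: type(value) for key, value in example.items()})
```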
## Load a base model

Load the ["microsoft/git-base"](https://huggingface.co/microsoft/git-base) checkpoint into an [AutoModelForCausalLM](https://huggingface.co/docs/transformers/model_doc/auto#transformers.AutoModelForCausalLM) object.

```
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(checkpoint)
```

## Evaluate

Image captioning models are typically evaluated with the [Rouge Score](https://huggingface.co/spaces/evaluate-metric/rouge) or [Word Error Rate](https://huggingface.co/spaces/evaluate-metric/wer). For this guide, you will use the Word Error Rate (WER).

We use the 🤗 Evaluate library to do so. For potential limitations and other gotchas of the WER, refer to [this guide](https://huggingface.co/spaces/evaluate-metric/wer).

```
from evaluate import load
import torch

wer = load("wer")


def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predicted = logits.argmax(-1)
    decoded_labels = processor.batch_decode(labels, skip_special_tokens=True)
    decoded_predictions = processor.batch_decode(predicted, skip_special_tokens=True)
    wer_score = wer.compute(predictions=decoded_predictions, references=decoded_labels)
    return {"wer_score": wer_score}
```

## Train!

Now you are ready to start fine-tuning the model. You will use the 🤗 [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) for this.

First, define the training arguments using [TrainingArguments](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.TrainingArguments).

```
from transformers import TrainingArguments, Trainer

model_name = checkpoint.split("/")[1]

training_args = TrainingArguments(
    output_dir=f"{model_name}-pokemon",
    learning_rate=5e-5,
    num_train_epochs=50,
    fp16=True,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=2,
    save_total_limit=3,
    evaluation_strategy="steps",
    eval_steps=50,
    save_strategy="steps",
    save_steps=50,
    logging_steps=50,
    remove_unused_columns=False,
    push_to_hub=True,
    label_names=["labels"],
    load_best_model_at_end=True,
)
```

Then pass them along with the datasets and the model to 🤗 Trainer.

```
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_ds,
    eval_dataset=test_ds,
    compute_metrics=compute_metrics,
)
```

To start training, simply call [train()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.train) on the [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) object, i.e. `trainer.train()`. You should see the training loss drop smoothly as training progresses.

Once training is completed, share your model to the Hub with the [push_to_hub()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.push_to_hub) method, i.e. `trainer.push_to_hub()`, so everyone can use your model.

## Inference

Take a sample image from `test_ds` to test the model.

```
from PIL import Image
import requests

url = "https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/pokemon.png"
image = Image.open(requests.get(url, stream=True).raw)
image
```

![Test image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/test_image_image_cap.png)

Prepare the image for the model.

```
device = "cuda" if torch.cuda.is_available() else "cpu"

inputs = processor(images=image, return_tensors="pt").to(device)
pixel_values = inputs.pixel_values
```
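If you are running inference in a fresh session rather than right after training, make sure the model sits on the same device as the prepared inputs; a one-line sketch:

```
# move the model to the same device as the inputs prepared above
model.to(device)
```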
Call `generate` and decode the predictions.

```
generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
generated_caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_caption)
```

```
a drawing of a pink and blue pokemon
```

Looks like the fine-tuned model generated a pretty good caption!
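Once the model has been pushed to the Hub, you can also run the same inference through the image-to-text pipeline instead of calling `generate` yourself. A minimal sketch, where the repo id is a placeholder for whatever `push_to_hub` created under your account:

```
from transformers import pipeline

# "your-username/git-base-pokemon" is a placeholder repo id, not a real checkpoint
captioner = pipeline("image-to-text", model="your-username/git-base-pokemon")
print(captioner(image))
```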
modeling&quot;,&quot;id&quot;:&quot;tasks/masked_language_modeling&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/masked_language_modeling&quot;},{&quot;title&quot;:&quot;Translation&quot;,&quot;id&quot;:&quot;tasks/translation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/translation&quot;},{&quot;title&quot;:&quot;Summarization&quot;,&quot;id&quot;:&quot;tasks/summarization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/summarization&quot;},{&quot;title&quot;:&quot;Multiple choice&quot;,&quot;id&quot;:&quot;tasks/multiple_choice&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/multiple_choice&quot;}]},{&quot;title&quot;:&quot;Audio&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio classification&quot;,&quot;id&quot;:&quot;tasks/audio_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/audio_classification&quot;},{&quot;title&quot;:&quot;Automatic speech recognition&quot;,&quot;id&quot;:&quot;tasks/asr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/asr&quot;}]},{&quot;title&quot;:&quot;Computer Vision&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Image classification&quot;,&quot;id&quot;:&quot;tasks/image_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/image_classification&quot;},{&quot;title&quot;:&quot;Semantic segmentation&quot;,&quot;id&quot;:&quot;tasks/semantic_segmentation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/semantic_segmentation&quot;},{&quot;title&quot;:&quot;Video classification&quot;,&quot;id&quot;:&quot;tasks/video_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/video_classification&quot;},{&quot;title&quot;:&quot;Object detection&quot;,&quot;id&quot;:&quot;tasks/object_detection&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/object_detection&quot;},{&quot;title&quot;:&quot;Zero-shot object detection&quot;,&quot;id&quot;:&quot;tasks/zero_shot_object_detection&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/zero_shot_object_detection&quot;},{&quot;title&quot;:&quot;Zero-shot image classification&quot;,&quot;id&quot;:&quot;tasks/zero_shot_image_classification&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/zero_shot_image_classification&quot;},{&quot;title&quot;:&quot;Depth estimation&quot;,&quot;id&quot;:&quot;tasks/monocular_depth_estimation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/monocular_depth_estimation&quot;}]},{&quot;title&quot;:&quot;Multimodal&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Image captioning&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tasks/image_captioning&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/image_captioning&quot;},{&quot;title&quot;:&quot;Document Question Answering&quot;,&quot;id&quot;:&quot;tasks/document_question_answering&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/document_question_answering&quot;},{&quot;title&quot;:&quot;Visual Question Answering&quot;,&quot;id&quot;:&quot;tasks/visual_question_answering&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/visual_question_answering&quot;},{&quot;title&quot;:&quot;Text to 
speech&quot;,&quot;id&quot;:&quot;tasks/text-to-speech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/text-to-speech&quot;}]},{&quot;title&quot;:&quot;Generation&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Customize the generation strategy&quot;,&quot;id&quot;:&quot;generation_strategies&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/generation_strategies&quot;}]},{&quot;title&quot;:&quot;Prompting&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Image tasks with IDEFICS&quot;,&quot;id&quot;:&quot;tasks/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks/idefics&quot;}]}]},{&quot;title&quot;:&quot;Developer guides&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Use fast tokenizers from 🤗 Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;fast_tokenizers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/fast_tokenizers&quot;},{&quot;title&quot;:&quot;Run inference with multilingual models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;multilingual&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/multilingual&quot;},{&quot;title&quot;:&quot;Use model-specific APIs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;create_a_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/create_a_model&quot;},{&quot;title&quot;:&quot;Share a custom model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;custom_models&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/custom_models&quot;},{&quot;title&quot;:&quot;Templates for chat models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;chat_templating&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/chat_templating&quot;},{&quot;title&quot;:&quot;Run training on Amazon SageMaker&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;sagemaker&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/sagemaker&quot;},{&quot;title&quot;:&quot;Export to ONNX&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;serialization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/serialization&quot;},{&quot;title&quot;:&quot;Export to TFLite&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tflite&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tflite&quot;},{&quot;title&quot;:&quot;Export to TorchScript&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;torchscript&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/torchscript&quot;},{&quot;title&quot;:&quot;Benchmarks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;benchmarks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/benchmarks&quot;},{&quot;title&quot;:&quot;Notebooks with examples&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;notebooks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/notebooks&quot;},{&quot;title&quot;:&quot;Community resources&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;community&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/community&quot;},{&quot;title&quot;:&quot;Custom Tools and Prompts&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;custom_tools&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/custom_tools&quot;},{&quot;title&quot;:&quot;Troubleshoot&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;troubleshooting&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/troubleshooting&quot;}]},{&quot;title&quot;:&quot;Performance and 
scalability&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Overview&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;performance&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/performance&quot;},{&quot;title&quot;:&quot;Efficient training techniques&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Methods and tools for efficient training on a single GPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_gpu_one&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_gpu_one&quot;},{&quot;title&quot;:&quot;Multiple GPUs and parallelism&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_gpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_gpu_many&quot;},{&quot;title&quot;:&quot;Efficient training on CPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_cpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_cpu&quot;},{&quot;title&quot;:&quot;Distributed CPU training&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_cpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_cpu_many&quot;},{&quot;title&quot;:&quot;Training on TPUs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_tpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_tpu&quot;},{&quot;title&quot;:&quot;Training on TPU with TensorFlow&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_tpu_tf&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_tpu_tf&quot;},{&quot;title&quot;:&quot;Training on Specialized Hardware&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_train_special&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_train_special&quot;},{&quot;title&quot;:&quot;Custom hardware for training&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_hardware&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_hardware&quot;},{&quot;title&quot;:&quot;Hyperparameter Search using Trainer API&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;hpo_train&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/hpo_train&quot;}]},{&quot;title&quot;:&quot;Optimizing inference&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Inference on CPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_cpu&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_cpu&quot;},{&quot;title&quot;:&quot;Inference on one GPU&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_gpu_one&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_gpu_one&quot;},{&quot;title&quot;:&quot;Inference on many GPUs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_gpu_many&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_gpu_many&quot;},{&quot;title&quot;:&quot;Inference on Specialized Hardware&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_infer_special&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_infer_special&quot;}]},{&quot;title&quot;:&quot;Instantiating a big 
model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;big_models&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/big_models&quot;},{&quot;title&quot;:&quot;Troubleshooting&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;debugging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/debugging&quot;},{&quot;title&quot;:&quot;XLA Integration for TensorFlow Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tf_xla&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tf_xla&quot;},{&quot;title&quot;:&quot;Optimize inference using `torch.compile()`&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perf_torch_compile&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perf_torch_compile&quot;}]},{&quot;title&quot;:&quot;Contribute&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;How to contribute to transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;contributing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/contributing&quot;},{&quot;title&quot;:&quot;How to add a model to 🤗 Transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_new_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_new_model&quot;},{&quot;title&quot;:&quot;How to convert a 🤗 Transformers model to TensorFlow?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_tensorflow_model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_tensorflow_model&quot;},{&quot;title&quot;:&quot;How to add a pipeline to 🤗 Transformers?&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;add_new_pipeline&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/add_new_pipeline&quot;},{&quot;title&quot;:&quot;Testing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;testing&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/testing&quot;},{&quot;title&quot;:&quot;Checks on a Pull Request&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pr_checks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pr_checks&quot;}]},{&quot;title&quot;:&quot;Conceptual guides&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Philosophy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;philosophy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/philosophy&quot;},{&quot;title&quot;:&quot;Glossary&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;glossary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/glossary&quot;},{&quot;title&quot;:&quot;What 🤗 Transformers can do&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;task_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/task_summary&quot;},{&quot;title&quot;:&quot;How 🤗 Transformers solve tasks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tasks_explained&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tasks_explained&quot;},{&quot;title&quot;:&quot;The Transformer model family&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_summary&quot;},{&quot;title&quot;:&quot;Summary of the tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;tokenizer_summary&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/tokenizer_summary&quot;},{&quot;title&quot;:&quot;Attention 
mechanisms&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;attention&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/attention&quot;},{&quot;title&quot;:&quot;Padding and truncation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pad_truncation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pad_truncation&quot;},{&quot;title&quot;:&quot;BERTology&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;bertology&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/bertology&quot;},{&quot;title&quot;:&quot;Perplexity of fixed-length models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;perplexity&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/perplexity&quot;},{&quot;title&quot;:&quot;Pipelines for webserver inference&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;pipeline_webserver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/pipeline_webserver&quot;},{&quot;title&quot;:&quot;Model training anatomy&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_memory_anatomy&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_memory_anatomy&quot;}]},{&quot;title&quot;:&quot;API&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Main Classes&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Agents and Tools&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/agent&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/agent&quot;},{&quot;title&quot;:&quot;Auto Classes&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;model_doc/auto&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/auto&quot;},{&quot;title&quot;:&quot;Callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/callback&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/callback&quot;},{&quot;title&quot;:&quot;Configuration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/configuration&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/configuration&quot;},{&quot;title&quot;:&quot;Data Collator&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/data_collator&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/data_collator&quot;},{&quot;title&quot;:&quot;Keras callbacks&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/keras_callbacks&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/keras_callbacks&quot;},{&quot;title&quot;:&quot;Logging&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/logging&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/logging&quot;},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/model&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/model&quot;},{&quot;title&quot;:&quot;Text 
Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/text_generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/text_generation&quot;},{&quot;title&quot;:&quot;ONNX&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/onnx&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/onnx&quot;},{&quot;title&quot;:&quot;Optimization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/optimizer_schedules&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/optimizer_schedules&quot;},{&quot;title&quot;:&quot;Model outputs&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/output&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/output&quot;},{&quot;title&quot;:&quot;Pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/pipelines&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/pipelines&quot;},{&quot;title&quot;:&quot;Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/processors&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/processors&quot;},{&quot;title&quot;:&quot;Quantization&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/quantization&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/quantization&quot;},{&quot;title&quot;:&quot;Tokenizer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/tokenizer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/tokenizer&quot;},{&quot;title&quot;:&quot;Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/trainer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/trainer&quot;},{&quot;title&quot;:&quot;DeepSpeed Integration&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/deepspeed&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/deepspeed&quot;},{&quot;title&quot;:&quot;Feature Extractor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/feature_extractor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/feature_extractor&quot;},{&quot;title&quot;:&quot;Image Processor&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;main_classes/image_processor&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/main_classes/image_processor&quot;}]},{&quot;title&quot;:&quot;Models&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Text 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALBERT&quot;,&quot;id&quot;:&quot;model_doc/albert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/albert&quot;},{&quot;title&quot;:&quot;BART&quot;,&quot;id&quot;:&quot;model_doc/bart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bart&quot;},{&quot;title&quot;:&quot;BARThez&quot;,&quot;id&quot;:&quot;model_doc/barthez&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/barthez&quot;},{&quot;title&quot;:&quot;BARTpho&quot;,&quot;id&quot;:&quot;model_doc/bartpho&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bartpho&quot;},{&quot;title&quot;:&quot;BERT&quot;,&quot;id&quot;:&quot;model_doc/bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert&quot;},{&quot;title&quot;:&quot;BertGeneration&quot;,&quot;id&quot;:&quot;model_doc/bert-generation&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-generation&quot;},{&quot;title&quot;:&quot;BertJapanese&quot;,&quot;id&quot;:&quot;model_doc/bert-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bert-japanese&quot;},{&quot;title&quot;:&quot;Bertweet&quot;,&quot;id&quot;:&quot;model_doc/bertweet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bertweet&quot;},{&quot;title&quot;:&quot;BigBird&quot;,&quot;id&quot;:&quot;model_doc/big_bird&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/big_bird&quot;},{&quot;title&quot;:&quot;BigBirdPegasus&quot;,&quot;id&quot;:&quot;model_doc/bigbird_pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bigbird_pegasus&quot;},{&quot;title&quot;:&quot;BioGpt&quot;,&quot;id&quot;:&quot;model_doc/biogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/biogpt&quot;},{&quot;title&quot;:&quot;Blenderbot&quot;,&quot;id&quot;:&quot;model_doc/blenderbot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot&quot;},{&quot;title&quot;:&quot;Blenderbot 
Small&quot;,&quot;id&quot;:&quot;model_doc/blenderbot-small&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blenderbot-small&quot;},{&quot;title&quot;:&quot;BLOOM&quot;,&quot;id&quot;:&quot;model_doc/bloom&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bloom&quot;},{&quot;title&quot;:&quot;BORT&quot;,&quot;id&quot;:&quot;model_doc/bort&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bort&quot;},{&quot;title&quot;:&quot;ByT5&quot;,&quot;id&quot;:&quot;model_doc/byt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/byt5&quot;},{&quot;title&quot;:&quot;CamemBERT&quot;,&quot;id&quot;:&quot;model_doc/camembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/camembert&quot;},{&quot;title&quot;:&quot;CANINE&quot;,&quot;id&quot;:&quot;model_doc/canine&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/canine&quot;},{&quot;title&quot;:&quot;CodeGen&quot;,&quot;id&quot;:&quot;model_doc/codegen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/codegen&quot;},{&quot;title&quot;:&quot;CodeLlama&quot;,&quot;id&quot;:&quot;model_doc/code_llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/code_llama&quot;},{&quot;title&quot;:&quot;ConvBERT&quot;,&quot;id&quot;:&quot;model_doc/convbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convbert&quot;},{&quot;title&quot;:&quot;CPM&quot;,&quot;id&quot;:&quot;model_doc/cpm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpm&quot;},{&quot;title&quot;:&quot;CPMANT&quot;,&quot;id&quot;:&quot;model_doc/cpmant&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cpmant&quot;},{&quot;title&quot;:&quot;CTRL&quot;,&quot;id&quot;:&quot;model_doc/ctrl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ctrl&quot;},{&quot;title&quot;:&quot;DeBERTa&quot;,&quot;id&quot;:&quot;model_doc/deberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta&quot;},{&quot;title&quot;:&quot;DeBERTa-v2&quot;,&quot;id&quot;:&quot;model_doc/deberta-v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deberta-v2&quot;},{&quot;title&quot;:&quot;DialoGPT&quot;,&quot;id&quot;:&quot;model_doc/dialogpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dialogpt&quot;},{&quot;title&quot;:&quot;DistilBERT&quot;,&quot;id&quot;:&quot;model_doc/distilbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/distilbert&quot;},{&quot;title&quot;:&quot;DPR&quot;,&quot;id&quot;:&quot;model_doc/dpr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpr&quot;},{&quot;title&quot;:&quot;ELECTRA&quot;,&quot;id&quot;:&quot;model_doc/electra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/electra&quot;},{&quot;title&quot;:&quot;Encoder Decoder 
Models&quot;,&quot;id&quot;:&quot;model_doc/encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encoder-decoder&quot;},{&quot;title&quot;:&quot;ERNIE&quot;,&quot;id&quot;:&quot;model_doc/ernie&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie&quot;},{&quot;title&quot;:&quot;ErnieM&quot;,&quot;id&quot;:&quot;model_doc/ernie_m&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ernie_m&quot;},{&quot;title&quot;:&quot;ESM&quot;,&quot;id&quot;:&quot;model_doc/esm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/esm&quot;},{&quot;title&quot;:&quot;Falcon&quot;,&quot;id&quot;:&quot;model_doc/falcon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/falcon&quot;},{&quot;title&quot;:&quot;FLAN-T5&quot;,&quot;id&quot;:&quot;model_doc/flan-t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-t5&quot;},{&quot;title&quot;:&quot;FLAN-UL2&quot;,&quot;id&quot;:&quot;model_doc/flan-ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flan-ul2&quot;},{&quot;title&quot;:&quot;FlauBERT&quot;,&quot;id&quot;:&quot;model_doc/flaubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flaubert&quot;},{&quot;title&quot;:&quot;FNet&quot;,&quot;id&quot;:&quot;model_doc/fnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fnet&quot;},{&quot;title&quot;:&quot;FSMT&quot;,&quot;id&quot;:&quot;model_doc/fsmt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/fsmt&quot;},{&quot;title&quot;:&quot;Funnel Transformer&quot;,&quot;id&quot;:&quot;model_doc/funnel&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/funnel&quot;},{&quot;title&quot;:&quot;GPT&quot;,&quot;id&quot;:&quot;model_doc/openai-gpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/openai-gpt&quot;},{&quot;title&quot;:&quot;GPT Neo&quot;,&quot;id&quot;:&quot;model_doc/gpt_neo&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neo&quot;},{&quot;title&quot;:&quot;GPT NeoX&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox&quot;},{&quot;title&quot;:&quot;GPT NeoX Japanese&quot;,&quot;id&quot;:&quot;model_doc/gpt_neox_japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_neox_japanese&quot;},{&quot;title&quot;:&quot;GPT-J&quot;,&quot;id&quot;:&quot;model_doc/gptj&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptj&quot;},{&quot;title&quot;:&quot;GPT2&quot;,&quot;id&quot;:&quot;model_doc/gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt2&quot;},{&quot;title&quot;:&quot;GPTBigCode&quot;,&quot;id&quot;:&quot;model_doc/gpt_bigcode&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt_bigcode&quot;},{&quot;title&quot;:&quot;GPTSAN 
Japanese&quot;,&quot;id&quot;:&quot;model_doc/gptsan-japanese&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gptsan-japanese&quot;},{&quot;title&quot;:&quot;GPTSw3&quot;,&quot;id&quot;:&quot;model_doc/gpt-sw3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/gpt-sw3&quot;},{&quot;title&quot;:&quot;HerBERT&quot;,&quot;id&quot;:&quot;model_doc/herbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/herbert&quot;},{&quot;title&quot;:&quot;I-BERT&quot;,&quot;id&quot;:&quot;model_doc/ibert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ibert&quot;},{&quot;title&quot;:&quot;Jukebox&quot;,&quot;id&quot;:&quot;model_doc/jukebox&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/jukebox&quot;},{&quot;title&quot;:&quot;LED&quot;,&quot;id&quot;:&quot;model_doc/led&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/led&quot;},{&quot;title&quot;:&quot;LLaMA&quot;,&quot;id&quot;:&quot;model_doc/llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama&quot;},{&quot;title&quot;:&quot;Llama2&quot;,&quot;id&quot;:&quot;model_doc/llama2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/llama2&quot;},{&quot;title&quot;:&quot;Longformer&quot;,&quot;id&quot;:&quot;model_doc/longformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longformer&quot;},{&quot;title&quot;:&quot;LongT5&quot;,&quot;id&quot;:&quot;model_doc/longt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/longt5&quot;},{&quot;title&quot;:&quot;LUKE&quot;,&quot;id&quot;:&quot;model_doc/luke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/luke&quot;},{&quot;title&quot;:&quot;M2M100&quot;,&quot;id&quot;:&quot;model_doc/m2m_100&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/m2m_100&quot;},{&quot;title&quot;:&quot;MarianMT&quot;,&quot;id&quot;:&quot;model_doc/marian&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/marian&quot;},{&quot;title&quot;:&quot;MarkupLM&quot;,&quot;id&quot;:&quot;model_doc/markuplm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/markuplm&quot;},{&quot;title&quot;:&quot;MBart and 
MBart-50&quot;,&quot;id&quot;:&quot;model_doc/mbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mbart&quot;},{&quot;title&quot;:&quot;MEGA&quot;,&quot;id&quot;:&quot;model_doc/mega&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mega&quot;},{&quot;title&quot;:&quot;MegatronBERT&quot;,&quot;id&quot;:&quot;model_doc/megatron-bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron-bert&quot;},{&quot;title&quot;:&quot;MegatronGPT2&quot;,&quot;id&quot;:&quot;model_doc/megatron_gpt2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/megatron_gpt2&quot;},{&quot;title&quot;:&quot;Mistral&quot;,&quot;id&quot;:&quot;model_doc/mistral&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mistral&quot;},{&quot;title&quot;:&quot;mLUKE&quot;,&quot;id&quot;:&quot;model_doc/mluke&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mluke&quot;},{&quot;title&quot;:&quot;MobileBERT&quot;,&quot;id&quot;:&quot;model_doc/mobilebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilebert&quot;},{&quot;title&quot;:&quot;MPNet&quot;,&quot;id&quot;:&quot;model_doc/mpnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpnet&quot;},{&quot;title&quot;:&quot;MPT&quot;,&quot;id&quot;:&quot;model_doc/mpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mpt&quot;},{&quot;title&quot;:&quot;MRA&quot;,&quot;id&quot;:&quot;model_doc/mra&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mra&quot;},{&quot;title&quot;:&quot;MT5&quot;,&quot;id&quot;:&quot;model_doc/mt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mt5&quot;},{&quot;title&quot;:&quot;MVP&quot;,&quot;id&quot;:&quot;model_doc/mvp&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mvp&quot;},{&quot;title&quot;:&quot;NEZHA&quot;,&quot;id&quot;:&quot;model_doc/nezha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nezha&quot;},{&quot;title&quot;:&quot;NLLB&quot;,&quot;id&quot;:&quot;model_doc/nllb&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb&quot;},{&quot;title&quot;:&quot;NLLB-MoE&quot;,&quot;id&quot;:&quot;model_doc/nllb-moe&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nllb-moe&quot;},{&quot;title&quot;:&quot;Nyströmformer&quot;,&quot;id&quot;:&quot;model_doc/nystromformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nystromformer&quot;},{&quot;title&quot;:&quot;Open-Llama&quot;,&quot;id&quot;:&quot;model_doc/open-llama&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/open-llama&quot;},{&quot;title&quot;:&quot;OPT&quot;,&quot;id&quot;:&quot;model_doc/opt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/opt&quot;},{&quot;title&quot;:&quot;Pegasus&quot;,&quot;id&quot;:&quot;model_doc/pegasus&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus&quot;},{&quot;title&quot;:&quot;PEGASUS-X&quot;,&quot;id&quot;:&quot;model_doc/pegasus_x&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pegasus_x&quot;},{&quot;title&quot;:&quot;Persimmon&quot;,&quot;id&quot;:&quot;model_doc/persimmon&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/persimmon&quot;},{&quot;title&quot;:&quot;PhoBERT&quot;,&quot;id&quot;:&quot;model_doc/phobert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/phobert&quot;},{&quot;title&quot;:&quot;PLBart&quot;,&quot;id&quot;
:&quot;model_doc/plbart&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/plbart&quot;},{&quot;title&quot;:&quot;ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/prophetnet&quot;},{&quot;title&quot;:&quot;QDQBert&quot;,&quot;id&quot;:&quot;model_doc/qdqbert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/qdqbert&quot;},{&quot;title&quot;:&quot;RAG&quot;,&quot;id&quot;:&quot;model_doc/rag&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rag&quot;},{&quot;title&quot;:&quot;REALM&quot;,&quot;id&quot;:&quot;model_doc/realm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/realm&quot;},{&quot;title&quot;:&quot;Reformer&quot;,&quot;id&quot;:&quot;model_doc/reformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/reformer&quot;},{&quot;title&quot;:&quot;RemBERT&quot;,&quot;id&quot;:&quot;model_doc/rembert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rembert&quot;},{&quot;title&quot;:&quot;RetriBERT&quot;,&quot;id&quot;:&quot;model_doc/retribert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/retribert&quot;},{&quot;title&quot;:&quot;RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta&quot;},{&quot;title&quot;:&quot;RoBERTa-PreLayerNorm&quot;,&quot;id&quot;:&quot;model_doc/roberta-prelayernorm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roberta-prelayernorm&quot;},{&quot;title&quot;:&quot;RoCBert&quot;,&quot;id&quot;:&quot;model_doc/roc_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roc_bert&quot;},{&quot;title&quot;:&quot;RoFormer&quot;,&quot;id&quot;:&quot;model_doc/roformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/roformer&quot;},{&quot;title&quot;:&quot;RWKV&quot;,&quot;id&quot;:&quot;model_doc/rwkv&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/rwkv&quot;},{&quot;title&quot;:&quot;Splinter&quot;,&quot;id&quot;:&quot;model_doc/splinter&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/splinter&quot;},{&quot;title&quot;:&quot;SqueezeBERT&quot;,&quot;id&quot;:&quot;model_doc/squeezebert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/squeezebert&quot;},{&quot;title&quot;:&quot;SwitchTransformers&quot;,&quot;id&quot;:&quot;model_doc/switch_transformers&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/switch_transformers&quot;},{&quot;title&quot;:&quot;T5&quot;,&quot;id&quot;:&quot;model_doc/t5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5&quot;},{&quot;title&quot;:&quot;T5v1.1&quot;,&quot;id&quot;:&quot;model_doc/t5v1.1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/t5v1.1&quot;},{&quot;title&quot;:&quot;TAPEX&quot;,&quot;id&quot;:&quot;model_doc/tapex&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapex&quot;},{&quot;title&quot;:&quot;Transformer 
XL&quot;,&quot;id&quot;:&quot;model_doc/transfo-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/transfo-xl&quot;},{&quot;title&quot;:&quot;UL2&quot;,&quot;id&quot;:&quot;model_doc/ul2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/ul2&quot;},{&quot;title&quot;:&quot;UMT5&quot;,&quot;id&quot;:&quot;model_doc/umt5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/umt5&quot;},{&quot;title&quot;:&quot;X-MOD&quot;,&quot;id&quot;:&quot;model_doc/xmod&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xmod&quot;},{&quot;title&quot;:&quot;XGLM&quot;,&quot;id&quot;:&quot;model_doc/xglm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xglm&quot;},{&quot;title&quot;:&quot;XLM&quot;,&quot;id&quot;:&quot;model_doc/xlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm&quot;},{&quot;title&quot;:&quot;XLM-ProphetNet&quot;,&quot;id&quot;:&quot;model_doc/xlm-prophetnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-prophetnet&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta&quot;},{&quot;title&quot;:&quot;XLM-RoBERTa-XL&quot;,&quot;id&quot;:&quot;model_doc/xlm-roberta-xl&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-roberta-xl&quot;},{&quot;title&quot;:&quot;XLM-V&quot;,&quot;id&quot;:&quot;model_doc/xlm-v&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlm-v&quot;},{&quot;title&quot;:&quot;XLNet&quot;,&quot;id&quot;:&quot;model_doc/xlnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlnet&quot;},{&quot;title&quot;:&quot;YOSO&quot;,&quot;id&quot;:&quot;model_doc/yoso&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yoso&quot;}]},{&quot;title&quot;:&quot;Vision models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;BEiT&quot;,&quot;id&quot;:&quot;model_doc/beit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/beit&quot;},{&quot;title&quot;:&quot;BiT&quot;,&quot;id&quot;:&quot;model_doc/bit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bit&quot;},{&quot;title&quot;:&quot;Conditional DETR&quot;,&quot;id&quot;:&quot;model_doc/conditional_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/conditional_detr&quot;},{&quot;title&quot;:&quot;ConvNeXT&quot;,&quot;id&quot;:&quot;model_doc/convnext&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnext&quot;},{&quot;title&quot;:&quot;ConvNeXTV2&quot;,&quot;id&quot;:&quot;model_doc/convnextv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/convnextv2&quot;},{&quot;title&quot;:&quot;CvT&quot;,&quot;id&quot;:&quot;model_doc/cvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/cvt&quot;},{&quot;title&quot;:&quot;Deformable 
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/
preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M11 15H6l7-14v8h5l-7 14v-8z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Faster examples with accelerated inference </div></div> <div class="flex items-center"><div class="mr-3 flex h-9 w-9 flex-none items-center justify-center rounded-lg bg-gradient-to-br from-gray-500/10 to-gray-500/5"><svg class="text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M14.9804 3C14.9217 3.0002 14.8631 3.00555 14.8054 3.016C11.622 3.58252 8.76073 5.30669 6.77248 7.85653C4.78422 10.4064 3.80955 13.6016 4.03612 16.8271C4.26268 20.0525 5.67447 23.0801 7.99967 25.327C10.3249 27.5738 13.3991 28.8811 16.6304 28.997C16.7944 29.003 16.9584 28.997 17.1204 28.997C19.2193 28.9984 21.2877 28.4943 23.1507 27.5274C25.0137 26.5605 26.6164 25.1592 27.8234 23.442C27.9212 23.294 27.9783 23.1229 27.9889 22.9458C27.9995 22.7687 27.9633 22.592 27.884 22.4333C27.8046 22.2747 27.6848 22.1397 27.5367 22.0421C27.3887 21.9444 27.2175 21.8875 27.0404 21.877C25.0426 21.7017 23.112 21.0693 21.3976 20.0288C19.6832 18.9884 18.231 17.5676 17.1533 15.8764C16.0756 14.1852 15.4011 12.2688 15.1822 10.2754C14.9632 8.28193 15.2055 6.26484 15.8904 4.38C15.9486 4.22913 15.97 4.06652 15.9527 3.90572C15.9354 3.74492 15.8799 3.59059 15.7909 3.45557C15.7019 3.32055 15.5819 3.20877 15.4409 3.12952C15.2999 3.05028 15.142 3.00587 14.9804 3Z" fill="currentColor"></path></svg></div> <div class="text-smd leading-tight text-gray-500 dark:text-gray-300 xl:max-w-[200px] 2xl:text-base">Switch between documentation themes </div></div></div> <div class="flex items-center space-x-2.5"><a href="/join"><button class="rounded-lg bg-white bg-gradient-to-br from-gray-100/20 to-gray-200/60 py-1.5 px-5 font-semibold text-gray-700 shadow-sm ring-1 ring-gray-300/60 hover:to-gray-100/70 hover:ring-gray-300/30 active:shadow-inner">Sign Up</button></a> <p class="text-gray-500 dark:text-gray-300">to get started</p></div></div></div> <div class="prose-doc prose relative mx-auto max-w-4xl break-words"> <p></p> <h1 class="relative group"><a id="image-captioning" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#image-captioning"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1uwawv2">Image captioning</span></h1> <div class="flex space-x-1 absolute z-10 right-0 top-0"> <div class="relative colab-dropdown "><button class=" " type="button"><img alt="Open In Colab" class="!m-0" 
src="https://colab.research.google.com/assets/colab-badge.svg"></button> </div> <div class="relative colab-dropdown "><button class=" " type="button"><img alt="Open In Studio Lab" class="!m-0" src="https://studiolab.sagemaker.aws/studiolab.svg"></button> </div></div> <p data-svelte-h="svelte-ws486a">Image captioning is the task of predicting a caption for a given image. Common real world applications of it include aiding visually impaired people that can help them navigate through different situations. Therefore, image captioning helps to improve content accessibility for people by describing images to them.</p> <p data-svelte-h="svelte-1aff4p7">This guide will show you how to:</p> <ul data-svelte-h="svelte-l0kgiy"><li>Fine-tune an image captioning model.</li> <li>Use the fine-tuned model for inference.</li></ul> <p data-svelte-h="svelte-1c9nexd">Before you begin, make sure you have all the necessary libraries installed:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class="">pip install transformers datasets evaluate -q pip install jiwer -q</pre></div> <p data-svelte-h="svelte-27hn0u">We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. 
When prompted, enter your token to log in:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-keyword">from</span> huggingface_hub <span class="hljs-keyword">import</span> notebook_login notebook_login()</pre></div> <h2 class="relative group"><a id="load-the-pokmon-blip-captions-dataset" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#load-the-pokmon-blip-captions-dataset"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1s20unx">Load the Pokémon BLIP captions dataset</span></h2> <p data-svelte-h="svelte-1ijrbcw">Use the 🤗 Dataset library to load a dataset that consists of {image-caption} pairs. 
To create your own image captioning dataset in PyTorch, you can follow <a href="https://github.com/NielsRogge/Transformers-Tutorials/blob/master/GIT/Fine_tune_GIT_on_an_image_captioning_dataset.ipynb" rel="nofollow">this notebook</a>.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-keyword">from</span> datasets <span class="hljs-keyword">import</span> load_dataset ds = load_dataset(<span class="hljs-string">"lambdalabs/pokemon-blip-captions"</span>) ds</pre></div> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class="">DatasetDict({ train: Dataset({ features: [<span class="hljs-string">'image'</span>, <span class="hljs-string">'text'</span>], num_rows: 833 }) })</pre></div> <p data-svelte-h="svelte-14ukxt">The dataset has two features, <code>image</code> and <code>text</code>.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-n6sigx">Many image captioning datasets contain multiple captions per image. 
In those cases, a common strategy is to randomly sample a caption amongst the available ones during training.</p></div> <p data-svelte-h="svelte-11iqabw">Split the dataset’s train split into a train and test set with the [~datasets.Dataset.train_test_split] method:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class="">ds = ds[<span class="hljs-string">"train"</span>].train_test_split(test_size=<span class="hljs-number">0.1</span>) train_ds = ds[<span class="hljs-string">"train"</span>] test_ds = ds[<span class="hljs-string">"test"</span>]</pre></div> <p data-svelte-h="svelte-1fsrtvj">Let’s visualize a couple of samples from the training set.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-keyword">from</span> textwrap <span class="hljs-keyword">import</span> wrap <span class="hljs-keyword">import</span> matplotlib.pyplot <span class="hljs-keyword">as</span> plt <span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np <span class="hljs-keyword">def</span> <span class="hljs-title function_">plot_images</span>(<span class="hljs-params">images, captions</span>): plt.figure(figsize=(<span class="hljs-number">20</span>, <span class="hljs-number">20</span>)) <span class="hljs-keyword">for</span> i <span 
class="hljs-keyword">in</span> <span class="hljs-built_in">range</span>(<span class="hljs-built_in">len</span>(images)): ax = plt.subplot(<span class="hljs-number">1</span>, <span class="hljs-built_in">len</span>(images), i + <span class="hljs-number">1</span>) caption = captions[i] caption = <span class="hljs-string">"\n"</span>.join(wrap(caption, <span class="hljs-number">12</span>)) plt.title(caption) plt.imshow(images[i]) plt.axis(<span class="hljs-string">"off"</span>) sample_images_to_visualize = [np.array(train_ds[i][<span class="hljs-string">"image"</span>]) <span class="hljs-keyword">for</span> i <span class="hljs-keyword">in</span> <span class="hljs-built_in">range</span>(<span class="hljs-number">5</span>)] sample_captions = [train_ds[i][<span class="hljs-string">"text"</span>] <span class="hljs-keyword">for</span> i <span class="hljs-keyword">in</span> <span class="hljs-built_in">range</span>(<span class="hljs-number">5</span>)] plot_images(sample_images_to_visualize, sample_captions)</pre></div> <div class="flex justify-center" data-svelte-h="svelte-1qemygy"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/sample_training_images_image_cap.png" alt="Sample training images"></div> <h2 class="relative group"><a id="preprocess-the-dataset" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#preprocess-the-dataset"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1utb06q">Preprocess the dataset</span></h2> <p data-svelte-h="svelte-1ogvs11">Since the dataset has two modalities (image and text), the pre-processing pipeline will preprocess images and the captions.</p> <p data-svelte-h="svelte-1shh6cf">To do so, load the processor class associated with the model you are about to fine-tune.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 
top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoProcessor checkpoint = <span class="hljs-string">"microsoft/git-base"</span> processor = AutoProcessor.from_pretrained(checkpoint)</pre></div> <p data-svelte-h="svelte-l3qjuh">The processor will internally pre-process the image (which includes resizing, and pixel scaling) and tokenize the caption.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-keyword">def</span> <span class="hljs-title function_">transforms</span>(<span class="hljs-params">example_batch</span>): images = [x <span class="hljs-keyword">for</span> x <span class="hljs-keyword">in</span> example_batch[<span class="hljs-string">"image"</span>]] captions = [x <span class="hljs-keyword">for</span> x <span class="hljs-keyword">in</span> example_batch[<span class="hljs-string">"text"</span>]] inputs = processor(images=images, text=captions, padding=<span class="hljs-string">"max_length"</span>) inputs.update({<span class="hljs-string">"labels"</span>: inputs[<span class="hljs-string">"input_ids"</span>]}) <span class="hljs-keyword">return</span> inputs train_ds.set_transform(transforms) test_ds.set_transform(transforms)</pre></div> <p data-svelte-h="svelte-1bhogex">With the dataset ready, you can now set up the model for fine-tuning.</p> <h2 class="relative group"><a id="load-a-base-model" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#load-a-base-model"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 
56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1ic7mki">Load a base model</span></h2> <p data-svelte-h="svelte-n2g9rz">Load the <a href="https://huggingface.co/microsoft/git-base" rel="nofollow">“microsoft/git-base”</a> into a <a href="https://huggingface.co/docs/transformers/model_doc/auto#transformers.AutoModelForCausalLM" rel="nofollow"><code>AutoModelForCausalLM</code></a> object.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained(checkpoint)</pre></div> <h2 class="relative group"><a id="evaluate" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#evaluate"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-sh8s6s">Evaluate</span></h2> <p data-svelte-h="svelte-1tzk2iw">Image captioning models are typically evaluated with the <a href="https://huggingface.co/spaces/evaluate-metric/rouge" rel="nofollow">Rouge Score</a> or <a href="https://huggingface.co/spaces/evaluate-metric/wer" rel="nofollow">Word Error Rate</a>. For this guide, you will use the Word Error Rate (WER).</p> <p data-svelte-h="svelte-e8lgr2">We use the 🤗 Evaluate library to do so. 
For potential limitations and other gotchas of the WER, refer to <a href="https://huggingface.co/spaces/evaluate-metric/wer" rel="nofollow">this guide</a>.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-keyword">from</span> evaluate <span class="hljs-keyword">import</span> load <span class="hljs-keyword">import</span> torch wer = load(<span class="hljs-string">"wer"</span>) <span class="hljs-keyword">def</span> <span class="hljs-title function_">compute_metrics</span>(<span class="hljs-params">eval_pred</span>): logits, labels = eval_pred predicted = logits.argmax(-<span class="hljs-number">1</span>) decoded_labels = processor.batch_decode(labels, skip_special_tokens=<span class="hljs-literal">True</span>) decoded_predictions = processor.batch_decode(predicted, skip_special_tokens=<span class="hljs-literal">True</span>) wer_score = wer.compute(predictions=decoded_predictions, references=decoded_labels) <span class="hljs-keyword">return</span> {<span class="hljs-string">"wer_score"</span>: wer_score}</pre></div> <h2 class="relative group"><a id="train" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#train"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1v5xwf2">Train!</span></h2> <p data-svelte-h="svelte-1dj3y2">Now, you are ready to start fine-tuning the model. 
You will use the 🤗 <a href="/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer">Trainer</a> for this.</p> <p data-svelte-h="svelte-452p4s">First, define the training arguments using <a href="/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.TrainingArguments">TrainingArguments</a>.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> TrainingArguments, Trainer model_name = checkpoint.split(<span class="hljs-string">"/"</span>)[<span class="hljs-number">1</span>] training_args = TrainingArguments( output_dir=<span class="hljs-string">f"<span class="hljs-subst">{model_name}</span>-pokemon"</span>, learning_rate=<span class="hljs-number">5e-5</span>, num_train_epochs=<span class="hljs-number">50</span>, fp16=<span class="hljs-literal">True</span>, per_device_train_batch_size=<span class="hljs-number">32</span>, per_device_eval_batch_size=<span class="hljs-number">32</span>, gradient_accumulation_steps=<span class="hljs-number">2</span>, save_total_limit=<span class="hljs-number">3</span>, evaluation_strategy=<span class="hljs-string">"steps"</span>, eval_steps=<span class="hljs-number">50</span>, save_strategy=<span class="hljs-string">"steps"</span>, save_steps=<span class="hljs-number">50</span>, logging_steps=<span class="hljs-number">50</span>, remove_unused_columns=<span class="hljs-literal">False</span>, push_to_hub=<span class="hljs-literal">True</span>, label_names=[<span class="hljs-string">"labels"</span>], load_best_model_at_end=<span class="hljs-literal">True</span>, )</pre></div> <p data-svelte-h="svelte-y5ywt5">Then pass them along with the datasets and the model to 🤗 Trainer.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" 
transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class="">trainer = Trainer( model=model, args=training_args, train_dataset=train_ds, eval_dataset=test_ds, compute_metrics=compute_metrics, )</pre></div> <p data-svelte-h="svelte-y9rw5m">To start training, simply call <a href="/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.train">train()</a> on the <a href="/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer">Trainer</a> object.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class="">trainer.train()</pre></div> <p data-svelte-h="svelte-acs4yg">You should see the training loss drop smoothly as training progresses.</p> <p data-svelte-h="svelte-cv8z08">Once training is completed, share your model to the Hub with the <a href="/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.push_to_hub">push_to_hub()</a> method so everyone can use your model:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black 
border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class="">trainer.push_to_hub()</pre></div> <h2 class="relative group"><a id="inference" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#inference"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-199uz7g">Inference</span></h2> <p data-svelte-h="svelte-16tgs9z">Take a sample image from <code>test_ds</code> to test the model.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-keyword">from</span> PIL <span class="hljs-keyword">import</span> Image <span class="hljs-keyword">import</span> requests url = <span class="hljs-string">"https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/pokemon.png"</span> image = Image.<span class="hljs-built_in">open</span>(requests.get(url, stream=<span class="hljs-literal">True</span>).raw) image</pre></div> <div class="flex justify-center" data-svelte-h="svelte-yvzmn4"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/test_image_image_cap.png" alt="Test image"></div> Prepare image for the model. 
<div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class="">device = <span class="hljs-string">"cuda"</span> <span class="hljs-keyword">if</span> torch.cuda.is_available() <span class="hljs-keyword">else</span> <span class="hljs-string">"cpu"</span> inputs = processor(images=image, return_tensors=<span class="hljs-string">"pt"</span>).to(device) pixel_values = inputs.pixel_values</pre></div> <p data-svelte-h="svelte-1rbuk2y">Call <code>generate</code> and decode the predictions.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class="">generated_ids = model.generate(pixel_values=pixel_values, max_length=<span class="hljs-number">50</span>) generated_caption = processor.batch_decode(generated_ids, skip_special_tokens=<span class="hljs-literal">True</span>)[<span class="hljs-number">0</span>] <span class="hljs-built_in">print</span>(generated_caption)</pre></div> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" 
preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class="">a drawing of a pink and blue pokemon</pre></div> <p data-svelte-h="svelte-vibkpa">Looks like the fine-tuned model generated a pretty good caption!</p> <p></p> <div id="svelte-announcer" aria-live="assertive" aria-atomic="true" style="position: absolute; left: 0px; top: 0px; clip: rect(0px, 0px, 0px, 0px); clip-path: inset(50%); overflow: hidden; white-space: nowrap; width: 1px; height: 1px;"></div></div> <div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/tasks/monocular_depth_estimation" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>Depth estimation</a> <a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/tasks/document_question_answering" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">Document Question Answering<span class="ml-2 translate-y-px">→</span></a></div></div></div> <div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" data-props="{&quot;chapter&quot;:{&quot;title&quot;:&quot;Image captioning&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;image-captioning&quot;,&quot;url&quot;:&quot;#image-captioning&quot;,&quot;sections&quot;:[{&quot;title&quot;:&quot;Load the Pokémon BLIP captions dataset&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;load-the-pokmon-blip-captions-dataset&quot;,&quot;url&quot;:&quot;#load-the-pokmon-blip-captions-dataset&quot;},{&quot;title&quot;:&quot;Preprocess the dataset&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;preprocess-the-dataset&quot;,&quot;url&quot;:&quot;#preprocess-the-dataset&quot;},{&quot;title&quot;:&quot;Load a base model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;load-a-base-model&quot;,&quot;url&quot;:&quot;#load-a-base-model&quot;},{&quot;title&quot;:&quot;Evaluate&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;evaluate&quot;,&quot;url&quot;:&quot;#evaluate&quot;},{&quot;title&quot;:&quot;Train!&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;train&quot;,&quot;url&quot;:&quot;#train&quot;},{&quot;title&quot;:&quot;Inference&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;inference&quot;,&quot;url&quot;:&quot;#inference&quot;}]}}" data-target="SubSideMenu"><nav class="hidden h-screen w-[270px] flex-none flex-col space-y-3 overflow-y-auto break-words border-l pt-24 pl-6 pr-10 pb-16 text-sm lg:flex 2xl:w-[305px]"><a href="#image-captioning" class=" text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" 
id="nav-image-captioning"><wbr>Image captioning</a> <a href="#load-the-pokmon-blip-captions-dataset" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-load-the-pokmon-blip-captions-dataset"><wbr>Load the <wbr>Pokémon BLI<wbr>P captions dataset</a> <a href="#preprocess-the-dataset" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-preprocess-the-dataset"><wbr>Preprocess the dataset</a> <a href="#load-a-base-model" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-load-a-base-model"><wbr>Load a base model</a> <a href="#evaluate" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-evaluate"><wbr>Evaluate</a> <a href="#train" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-train"><wbr>Train!</a> <a href="#inference" class="pl-4 text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-inference"><wbr>Inference</a> </nav></div></div></div> <div id="doc-footer"></div></main> </div> <script> import("/front/build/kube-5e23f38/index.js"); window.moonSha = "kube-5e23f38/"; window.hubConfig = JSON.parse(`{"features":{"signupDisabled":false},"sshGitUrl":"git@hf.co","moonHttpUrl":"https://huggingface.co","captchaApiKey":"bd5f2066-93dc-4bdd-a64b-a24646ca3859","stripePublicKey":"pk_live_x2tdjFXBCvXo2FFmMybezpeM00J6gPCAAc","environment":"production","userAgent":"HuggingFace (production)"}`); </script> <!-- Stripe --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://js.stripe.com/v3/"; script.async = true; document.head.appendChild(script); } </script> <!-- Google analytics v4 --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { const script = document.createElement("script"); script.src = "https://www.googletagmanager.com/gtag/js?id=G-8Q63TH4CSL"; script.async = true; document.head.appendChild(script); window.dataLayer = window.dataLayer || []; function gtag() { if (window.dataLayer !== undefined) { window.dataLayer.push(arguments); } } gtag("js", new Date()); gtag("config", "G-8Q63TH4CSL", { page_path: "/docs/transformers/v4.34.0/en/tasks/image_captioning" }); /// ^ See https://developers.google.com/analytics/devguides/collection/gtagjs/pages gtag("consent", "default", { ad_storage: "denied", analytics_storage: "denied" }); /// ^ See https://developers.google.com/tag-platform/gtagjs/reference#consent /// TODO: ask the user for their consent and update this with gtag('consent', 'update') } </script> <!-- Google Analytics v3 (deprecated) --> <script> if (["hf.co", "huggingface.co"].includes(window.location.hostname)) { (function (i, s, o, g, r, a, m) { i["GoogleAnalyticsObject"] = r; (i[r] = i[r] || function () { (i[r].q = i[r].q || []).push(arguments); }), (i[r].l = 1 * new Date()); (a = s.createElement(o)), (m = s.getElementsByTagName(o)[0]); a.async = 1; a.src = g; m.parentNode.insertBefore(a, m); })(window, document, "script", "https://www.google-analytics.com/analytics.js", "ganalytics"); ganalytics("create", "UA-83738774-2", "auto"); ganalytics("send", "pageview", "/docs/transformers/v4.34.0/en/tasks/image_captioning"); } </script> <iframe name="__privateStripeMetricsController1010" frameborder="0" allowtransparency="true" 
scrolling="no" role="presentation" allow="payment *" src="https://js.stripe.com/v3/m-outer-27c67c0d52761104439bb051c7856ab1.html#url=https%3A%2F%2Fhuggingface.co%2Fdocs%2Ftransformers%2Fv4.34.0%2Fen%2Ftasks%2Fimage_captioning&amp;title=Image%20captioning&amp;referrer=&amp;muid=NA&amp;sid=NA&amp;version=6&amp;preview=false" aria-hidden="true" tabindex="-1" style="border: none !important; margin: 0px !important; padding: 0px !important; width: 1px !important; min-width: 100% !important; overflow: hidden !important; display: block !important; visibility: hidden !important; position: fixed !important; height: 1px !important; pointer-events: none !important; user-select: none !important;"></iframe></body></html>
2023-10-05T13:33:56.841Z
Visual Question Answering
https://huggingface.co/docs/transformers/v4.34.0/en/tasks/visual_question_answering
# Visual Question Answering

Visual Question Answering (VQA) is the task of answering open-ended questions based on an image. The input to models supporting this task is typically a combination of an image and a question, and the output is an answer expressed in natural language.

Some noteworthy use case examples for VQA include:

- Accessibility applications for visually impaired individuals.
- Education: posing questions about visual materials presented in lectures or textbooks. VQA can also be utilized in interactive museum exhibits or historical sites.
- Customer service and e-commerce: VQA can enhance user experience by letting users ask questions about products.
- Image retrieval: VQA models can be used to retrieve images with specific characteristics. For example, the user can ask “Is there a dog?” to find all images with dogs from a set of images.

In this guide you’ll learn how to:

- Fine-tune a classification VQA model, specifically [ViLT](../model_doc/vilt), on the [`Graphcore/vqa` dataset](https://huggingface.co/datasets/Graphcore/vqa).
- Use your fine-tuned ViLT for inference.
- Run zero-shot VQA inference with a generative model, like BLIP-2.

## Fine-tuning ViLT

The ViLT model incorporates text embeddings into a Vision Transformer (ViT), allowing it to have a minimal design for Vision-and-Language Pre-training (VLP). This model can be used for several downstream tasks. For the VQA task, a classifier head is placed on top (a linear layer on top of the final hidden state of the `[CLS]` token) and randomly initialized. Visual Question Answering is thus treated as a **classification problem**.

More recent models, such as BLIP, BLIP-2, and InstructBLIP, treat VQA as a generative task. Later in this guide we illustrate how to use them for zero-shot VQA inference.

Before you begin, make sure you have all the necessary libraries installed.

```
pip install -q transformers datasets
```

We encourage you to share your model with the community. Log in to your Hugging Face account to upload it to the 🤗 Hub. When prompted, enter your token to log in:

```
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```

Let’s define the model checkpoint as a global variable.

```
>>> model_checkpoint = "dandelin/vilt-b32-mlm"
```

## Load the data

For illustration purposes, in this guide we use a very small sample of the annotated visual question answering `Graphcore/vqa` dataset. You can find the full dataset on the [🤗 Hub](https://huggingface.co/datasets/Graphcore/vqa).

As an alternative to the [`Graphcore/vqa` dataset](https://huggingface.co/datasets/Graphcore/vqa), you can download the same data manually from the official [VQA dataset page](https://visualqa.org/download.html). If you prefer to follow the tutorial with your custom data, check out the [Create an image dataset](https://huggingface.co/docs/datasets/image_dataset#loading-script) guide in the 🤗 Datasets documentation.
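To give a rough idea of the structure such a custom dataset needs, here is a minimal sketch that builds a tiny VQA-style dataset with 🤗 Datasets; the image paths, questions, and answers below are purely hypothetical placeholders:

```
>>> from datasets import Dataset

>>> # hypothetical annotations: a question, the path to a local image file,
>>> # and one or more annotated answers with their weights
>>> examples = {
...     "question": ["What color is the cat?", "How many people are visible?"],
...     "image_id": ["images/cat.jpg", "images/street.jpg"],
...     "label": [
...         {"ids": ["black", "dark"], "weights": [1.0, 0.3]},
...         {"ids": ["two"], "weights": [1.0]},
...     ],
... }

>>> custom_dataset = Dataset.from_dict(examples)
>>> custom_dataset
Dataset({
    features: ['question', 'image_id', 'label'],
    num_rows: 2
})
```

A dataset in this shape can then follow the same preprocessing steps as the `Graphcore/vqa` sample used below.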
Let’s load the first 200 examples from the validation split and explore the dataset’s features:

```
>>> from datasets import load_dataset

>>> dataset = load_dataset("Graphcore/vqa", split="validation[:200]")
>>> dataset
Dataset({
    features: ['question', 'question_type', 'question_id', 'image_id', 'answer_type', 'label'],
    num_rows: 200
})
```

Let’s take a look at an example to understand the dataset’s features:

```
>>> dataset[0]
{'question': 'Where is he looking?',
 'question_type': 'none of the above',
 'question_id': 262148000,
 'image_id': '/root/.cache/huggingface/datasets/downloads/extracted/ca733e0e000fb2d7a09fbcc94dbfe7b5a30750681d0e965f8e0a23b1c2f98c75/val2014/COCO_val2014_000000262148.jpg',
 'answer_type': 'other',
 'label': {'ids': ['at table', 'down', 'skateboard', 'table'],
  'weights': [0.30000001192092896, 1.0, 0.30000001192092896, 0.30000001192092896]}}
```

The features relevant to the task include:

- `question`: the question to be answered from the image
- `image_id`: the path to the image the question refers to
- `label`: the annotations

We can remove the rest of the features as they won’t be necessary:

```
>>> dataset = dataset.remove_columns(['question_type', 'question_id', 'answer_type'])
```

As you can see, the `label` feature contains several answers to the same question (called `ids` here) collected by different human annotators. This is because the answer to a question can be subjective. In this case, the question is “where is he looking?”. Some people annotated this with “down”, others with “at table”, another one with “skateboard”, etc.

Take a look at the image and consider which answer you would give:

```
>>> from PIL import Image

>>> image = Image.open(dataset[0]['image_id'])
>>> image
```

![VQA Image Example](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/vqa-example.png)

Due to the questions’ and answers’ ambiguity, datasets like this are treated as a multi-label classification problem (as multiple answers are possibly valid). Moreover, rather than just creating a one-hot encoded vector, one creates a soft encoding, based on the number of times a certain answer appeared in the annotations. For instance, in the example above, because the answer “down” is selected way more often than other answers, it has a score (called `weight` in the dataset) of 1.0, and the rest of the answers have scores < 1.0.

To later instantiate the model with an appropriate classification head, let’s create two dictionaries: one that maps the label names to integers, and one for the reverse mapping:

```
>>> import itertools

>>> labels = [item['ids'] for item in dataset['label']]
>>> flattened_labels = list(itertools.chain(*labels))
>>> unique_labels = list(set(flattened_labels))

>>> label2id = {label: idx for idx, label in enumerate(unique_labels)}
>>> id2label = {idx: label for label, idx in label2id.items()}
```

Now that we have the mappings, we can replace the string answers with their ids, and flatten the dataset to make further preprocessing more convenient.

```
>>> def replace_ids(inputs):
...     inputs["label"]["ids"] = [label2id[x] for x in inputs["label"]["ids"]]
...     return inputs


>>> dataset = dataset.map(replace_ids)
>>> flat_dataset = dataset.flatten()
>>> flat_dataset.features
{'question': Value(dtype='string', id=None),
 'image_id': Value(dtype='string', id=None),
 'label.ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None),
 'label.weights': Sequence(feature=Value(dtype='float64', id=None), length=-1, id=None)}
```

## Preprocessing data

The next step is to load a ViLT processor to prepare the image and text data for the model. [ViltProcessor](/docs/transformers/v4.34.0/en/model_doc/vilt#transformers.ViltProcessor) wraps a BERT tokenizer and ViLT image processor into a convenient single processor:

```
>>> from transformers import ViltProcessor

>>> processor = ViltProcessor.from_pretrained(model_checkpoint)
```

To preprocess the data, we need to encode the images and questions using the [ViltProcessor](/docs/transformers/v4.34.0/en/model_doc/vilt#transformers.ViltProcessor). The processor will use the [BertTokenizerFast](/docs/transformers/v4.34.0/en/model_doc/bert#transformers.BertTokenizerFast) to tokenize the text and create `input_ids`, `attention_mask` and `token_type_ids` for the text data. As for images, the processor will leverage [ViltImageProcessor](/docs/transformers/v4.34.0/en/model_doc/vilt#transformers.ViltImageProcessor) to resize and normalize the image, and create `pixel_values` and `pixel_mask`.

All these preprocessing steps are done under the hood; we only need to call the `processor`. However, we still need to prepare the target labels. Each target is a vector with one element per possible answer (label). For the correct answers, the corresponding elements hold their respective scores (weights), while the remaining elements are set to zero.

The following function applies the `processor` to the images and questions and formats the labels as described above:

```
>>> import torch

>>> def preprocess_data(examples):
...     image_paths = examples['image_id']
...     images = [Image.open(image_path) for image_path in image_paths]
...     texts = examples['question']

...     encoding = processor(images, texts, padding="max_length", truncation=True, return_tensors="pt")

...     for k, v in encoding.items():
...         encoding[k] = v.squeeze()

...     targets = []

...     for labels, scores in zip(examples['label.ids'], examples['label.weights']):
...         target = torch.zeros(len(id2label))

...         for label, score in zip(labels, scores):
...             target[label] = score

...         targets.append(target)

...     encoding["labels"] = targets

...     return encoding
```

To apply the preprocessing function over the entire dataset, use the 🤗 Datasets `map` function. You can speed up `map` by setting `batched=True` to process multiple elements of the dataset at once. At this point, feel free to remove the columns you don’t need.

```
>>> processed_dataset = flat_dataset.map(preprocess_data, batched=True, remove_columns=['question', 'image_id', 'label.ids', 'label.weights'])
>>> processed_dataset
Dataset({
    features: ['input_ids', 'token_type_ids', 'attention_mask', 'pixel_values', 'pixel_mask', 'labels'],
    num_rows: 200
})
```

As a final step, create a batch of examples using [DefaultDataCollator](/docs/transformers/v4.34.0/en/main_classes/data_collator#transformers.DefaultDataCollator):

```
>>> from transformers import DefaultDataCollator

>>> data_collator = DefaultDataCollator()
```

## Train the model

You’re ready to start training your model now!
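Before you do, it can help to sanity-check what the data collator produces for a couple of processed examples. The snippet below is just a quick sketch; the exact tensor shapes depend on the processor defaults, so no output is shown:

```
>>> batch = data_collator([processed_dataset[0], processed_dataset[1]])
>>> {k: v.shape for k, v in batch.items()}
```

Each value is a stacked tensor with the batch dimension first, which is exactly what the model expects during training.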
Load ViLT with [ViltForQuestionAnswering](/docs/transformers/v4.34.0/en/model_doc/vilt#transformers.ViltForQuestionAnswering). Specify the number of labels along with the label mappings:

```
>>> from transformers import ViltForQuestionAnswering

>>> model = ViltForQuestionAnswering.from_pretrained(model_checkpoint, num_labels=len(id2label), id2label=id2label, label2id=label2id)
```

At this point, only three steps remain:

1. Define your training hyperparameters in [TrainingArguments](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.TrainingArguments):

```
>>> from transformers import TrainingArguments

>>> repo_id = "MariaK/vilt_finetuned_200"

>>> training_args = TrainingArguments(
...     output_dir=repo_id,
...     per_device_train_batch_size=4,
...     num_train_epochs=20,
...     save_steps=200,
...     logging_steps=50,
...     learning_rate=5e-5,
...     save_total_limit=2,
...     remove_unused_columns=False,
...     push_to_hub=True,
... )
```

2. Pass the training arguments to [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) along with the model, dataset, processor, and data collator.

```
>>> from transformers import Trainer

>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     data_collator=data_collator,
...     train_dataset=processed_dataset,
...     tokenizer=processor,
... )
```

3. Call [train()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.train) to finetune your model. Once training is completed, share your final model on the 🤗 Hub with the [push\_to\_hub()](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.push_to_hub) method:

```
>>> trainer.train()
>>> trainer.push_to_hub()
```

## Inference

Now that you have fine-tuned a ViLT model and uploaded it to the 🤗 Hub, you can use it for inference. The simplest way to try out your fine-tuned model for inference is to use it in a [Pipeline](/docs/transformers/v4.34.0/en/main_classes/pipelines#transformers.Pipeline).

```
>>> from transformers import pipeline

>>> pipe = pipeline("visual-question-answering", model="MariaK/vilt_finetuned_200")
```

The model in this guide has only been trained on 200 examples, so don’t expect a lot from it. Let’s take the first example from the dataset to illustrate inference and see if the model has at least learned something from the data:

```
>>> example = dataset[0]
>>> image = Image.open(example['image_id'])
>>> question = example['question']
>>> print(question)
>>> pipe(image, question, top_k=1)
"Where is he looking?"
[{'score': 0.5498199462890625, 'answer': 'down'}]
```

Even though it is not very confident, the model has indeed learned something. With more examples and longer training, you’ll get far better results!

You can also manually replicate the results of the pipeline if you’d like:

1. Take an image and a question, and prepare them for the model using the processor from your model.
2. Forward the result of the preprocessing through the model.
3. From the logits, get the most likely answer’s id, and find the actual answer in `id2label`.

```
>>> processor = ViltProcessor.from_pretrained("MariaK/vilt_finetuned_200")

>>> image = Image.open(example['image_id'])
>>> question = example['question']

>>> # prepare inputs
>>> inputs = processor(image, question, return_tensors="pt")

>>> model = ViltForQuestionAnswering.from_pretrained("MariaK/vilt_finetuned_200")

>>> # forward pass
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> logits = outputs.logits
>>> idx = logits.argmax(-1).item()
>>> print("Predicted answer:", model.config.id2label[idx])
Predicted answer: down
```

## Zero-shot VQA

The previous model treated VQA as a classification task. Some recent models, such as BLIP, BLIP-2, and InstructBLIP, approach VQA as a generative task. Let’s take [BLIP-2](../model_doc/blip-2) as an example. It introduced a new visual-language pre-training paradigm in which any combination of pre-trained vision encoder and LLM can be used (learn more in the [BLIP-2 blog post](https://huggingface.co/blog/blip-2)). This enables achieving state-of-the-art results on multiple visual-language tasks including visual question answering.

Let’s illustrate how you can use this model for VQA. First, let’s load the model. Here we’ll explicitly send the model to a GPU, if available, which we didn’t need to do earlier when training, as [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) handles this automatically:

```
>>> from transformers import AutoProcessor, Blip2ForConditionalGeneration
>>> import torch

>>> processor = AutoProcessor.from_pretrained("Salesforce/blip2-opt-2.7b")
>>> model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16)
>>> device = "cuda" if torch.cuda.is_available() else "cpu"
>>> model.to(device)
```

The model takes image and text as input, so let’s use the exact same image/question pair from the first example in the VQA dataset:

```
>>> example = dataset[0]
>>> image = Image.open(example['image_id'])
>>> question = example['question']
```

To use BLIP-2 for the visual question answering task, the textual prompt has to follow a specific format: `Question: {} Answer:`.

```
>>> prompt = f"Question: {question} Answer:"
```

Now we need to preprocess the image/prompt with the model’s processor, pass the processed input through the model, and decode the output:

```
>>> inputs = processor(image, text=prompt, return_tensors="pt").to(device, torch.float16)

>>> generated_ids = model.generate(**inputs, max_new_tokens=10)
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
>>> print(generated_text)
"He is looking at the crowd"
```

As you can see, the model recognized the crowd and the direction of the face (looking down); however, it seems to miss the fact that the crowd is behind the skater. Still, in cases where acquiring human-annotated datasets is not feasible, this approach can quickly produce useful results.
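Because the model is generative, you can also ask further questions about the same image without any additional training. The snippet below is a minimal sketch that reuses the processor and model loaded above; the follow-up question is made up for illustration, and the answer will depend on the model:

```
>>> prompt = "Question: What is he doing? Answer:"

>>> inputs = processor(image, text=prompt, return_tensors="pt").to(device, torch.float16)
>>> generated_ids = model.generate(**inputs, max_new_tokens=10)
>>> print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip())
```

Feel free to experiment with other questions, keeping the same `Question: {} Answer:` prompt format.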
DETR&quot;,&quot;id&quot;:&quot;model_doc/deformable_detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deformable_detr&quot;},{&quot;title&quot;:&quot;DeiT&quot;,&quot;id&quot;:&quot;model_doc/deit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deit&quot;},{&quot;title&quot;:&quot;DETA&quot;,&quot;id&quot;:&quot;model_doc/deta&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deta&quot;},{&quot;title&quot;:&quot;DETR&quot;,&quot;id&quot;:&quot;model_doc/detr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/detr&quot;},{&quot;title&quot;:&quot;DiNAT&quot;,&quot;id&quot;:&quot;model_doc/dinat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinat&quot;},{&quot;title&quot;:&quot;DINO V2&quot;,&quot;id&quot;:&quot;model_doc/dinov2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dinov2&quot;},{&quot;title&quot;:&quot;DiT&quot;,&quot;id&quot;:&quot;model_doc/dit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dit&quot;},{&quot;title&quot;:&quot;DPT&quot;,&quot;id&quot;:&quot;model_doc/dpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/dpt&quot;},{&quot;title&quot;:&quot;EfficientFormer&quot;,&quot;id&quot;:&quot;model_doc/efficientformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientformer&quot;},{&quot;title&quot;:&quot;EfficientNet&quot;,&quot;id&quot;:&quot;model_doc/efficientnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/efficientnet&quot;},{&quot;title&quot;:&quot;FocalNet&quot;,&quot;id&quot;:&quot;model_doc/focalnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/focalnet&quot;},{&quot;title&quot;:&quot;GLPN&quot;,&quot;id&quot;:&quot;model_doc/glpn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/glpn&quot;},{&quot;title&quot;:&quot;ImageGPT&quot;,&quot;id&quot;:&quot;model_doc/imagegpt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/imagegpt&quot;},{&quot;title&quot;:&quot;LeViT&quot;,&quot;id&quot;:&quot;model_doc/levit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/levit&quot;},{&quot;title&quot;:&quot;Mask2Former&quot;,&quot;id&quot;:&quot;model_doc/mask2former&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mask2former&quot;},{&quot;title&quot;:&quot;MaskFormer&quot;,&quot;id&quot;:&quot;model_doc/maskformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/maskformer&quot;},{&quot;title&quot;:&quot;MobileNetV1&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v1&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v1&quot;},{&quot;title&quot;:&quot;MobileNetV2&quot;,&quot;id&quot;:&quot;model_doc/mobilenet_v2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilenet_v2&quot;},{&quot;title&quot;:&quot;MobileViT&quot;,&quot;id&quot;:&quot;model_doc/mobilevit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevit&quot;},{&quot;title&quot;:&quot;MobileViTV2&quot;,&quot;id&quot;:&quot;model_doc/mobilevitv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mobilevitv2&quot;},{&quot;title&quot;:&quot;NAT&quot;,&quot;id&quot;:&quot;model_doc/nat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nat&quot;},{&quot;title&quot;:&quot;PoolFormer&quot;,&quot;id&quot;:&quot;model_doc/poolformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/poolform
er&quot;},{&quot;title&quot;:&quot;Pyramid Vision Transformer (PVT)&quot;,&quot;id&quot;:&quot;model_doc/pvt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pvt&quot;},{&quot;title&quot;:&quot;RegNet&quot;,&quot;id&quot;:&quot;model_doc/regnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/regnet&quot;},{&quot;title&quot;:&quot;ResNet&quot;,&quot;id&quot;:&quot;model_doc/resnet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/resnet&quot;},{&quot;title&quot;:&quot;SegFormer&quot;,&quot;id&quot;:&quot;model_doc/segformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/segformer&quot;},{&quot;title&quot;:&quot;SwiftFormer&quot;,&quot;id&quot;:&quot;model_doc/swiftformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swiftformer&quot;},{&quot;title&quot;:&quot;Swin Transformer&quot;,&quot;id&quot;:&quot;model_doc/swin&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin&quot;},{&quot;title&quot;:&quot;Swin Transformer V2&quot;,&quot;id&quot;:&quot;model_doc/swinv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swinv2&quot;},{&quot;title&quot;:&quot;Swin2SR&quot;,&quot;id&quot;:&quot;model_doc/swin2sr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/swin2sr&quot;},{&quot;title&quot;:&quot;Table Transformer&quot;,&quot;id&quot;:&quot;model_doc/table-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/table-transformer&quot;},{&quot;title&quot;:&quot;TimeSformer&quot;,&quot;id&quot;:&quot;model_doc/timesformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/timesformer&quot;},{&quot;title&quot;:&quot;UperNet&quot;,&quot;id&quot;:&quot;model_doc/upernet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/upernet&quot;},{&quot;title&quot;:&quot;VAN&quot;,&quot;id&quot;:&quot;model_doc/van&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/van&quot;},{&quot;title&quot;:&quot;VideoMAE&quot;,&quot;id&quot;:&quot;model_doc/videomae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/videomae&quot;},{&quot;title&quot;:&quot;Vision Transformer (ViT)&quot;,&quot;id&quot;:&quot;model_doc/vit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit&quot;},{&quot;title&quot;:&quot;ViT Hybrid&quot;,&quot;id&quot;:&quot;model_doc/vit_hybrid&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_hybrid&quot;},{&quot;title&quot;:&quot;ViTDet&quot;,&quot;id&quot;:&quot;model_doc/vitdet&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitdet&quot;},{&quot;title&quot;:&quot;ViTMAE&quot;,&quot;id&quot;:&quot;model_doc/vit_mae&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_mae&quot;},{&quot;title&quot;:&quot;ViTMatte&quot;,&quot;id&quot;:&quot;model_doc/vitmatte&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vitmatte&quot;},{&quot;title&quot;:&quot;ViTMSN&quot;,&quot;id&quot;:&quot;model_doc/vit_msn&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vit_msn&quot;},{&quot;title&quot;:&quot;ViViT&quot;,&quot;id&quot;:&quot;model_doc/vivit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vivit&quot;},{&quot;title&quot;:&quot;YOLOS&quot;,&quot;id&quot;:&quot;model_doc/yolos&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/yolos&quot;}]},{&quot;title&quot;:&quot;Audio 
models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Audio Spectrogram Transformer&quot;,&quot;id&quot;:&quot;model_doc/audio-spectrogram-transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/audio-spectrogram-transformer&quot;},{&quot;title&quot;:&quot;Bark&quot;,&quot;id&quot;:&quot;model_doc/bark&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bark&quot;},{&quot;title&quot;:&quot;CLAP&quot;,&quot;id&quot;:&quot;model_doc/clap&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clap&quot;},{&quot;title&quot;:&quot;EnCodec&quot;,&quot;id&quot;:&quot;model_doc/encodec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/encodec&quot;},{&quot;title&quot;:&quot;Hubert&quot;,&quot;id&quot;:&quot;model_doc/hubert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/hubert&quot;},{&quot;title&quot;:&quot;MCTCT&quot;,&quot;id&quot;:&quot;model_doc/mctct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mctct&quot;},{&quot;title&quot;:&quot;MMS&quot;,&quot;id&quot;:&quot;model_doc/mms&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mms&quot;},{&quot;title&quot;:&quot;MusicGen&quot;,&quot;id&quot;:&quot;model_doc/musicgen&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/musicgen&quot;},{&quot;title&quot;:&quot;Pop2Piano&quot;,&quot;id&quot;:&quot;model_doc/pop2piano&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pop2piano&quot;},{&quot;title&quot;:&quot;SEW&quot;,&quot;id&quot;:&quot;model_doc/sew&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew&quot;},{&quot;title&quot;:&quot;SEW-D&quot;,&quot;id&quot;:&quot;model_doc/sew-d&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sew-d&quot;},{&quot;title&quot;:&quot;Speech2Text&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text&quot;},{&quot;title&quot;:&quot;Speech2Text2&quot;,&quot;id&quot;:&quot;model_doc/speech_to_text_2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech_to_text_2&quot;},{&quot;title&quot;:&quot;SpeechT5&quot;,&quot;id&quot;:&quot;model_doc/speecht5&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speecht5&quot;},{&quot;title&quot;:&quot;UniSpeech&quot;,&quot;id&quot;:&quot;model_doc/unispeech&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech&quot;},{&quot;title&quot;:&quot;UniSpeech-SAT&quot;,&quot;id&quot;:&quot;model_doc/unispeech-sat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/unispeech-sat&quot;},{&quot;title&quot;:&quot;VITS&quot;,&quot;id&quot;:&quot;model_doc/vits&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vits&quot;},{&quot;title&quot;:&quot;Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2&quot;},{&quot;title&quot;:&quot;Wav2Vec2-Conformer&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2-conformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2-conformer&quot;},{&quot;title&quot;:&quot;Wav2Vec2Phoneme&quot;,&quot;id&quot;:&quot;model_doc/wav2vec2_phoneme&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/wav2vec2_phoneme&quot;},{&quot;title&quot;:&quot;WavLM&quot;,&quot;id&quot;:&quot;model_doc/wavlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/
model_doc/wavlm&quot;},{&quot;title&quot;:&quot;Whisper&quot;,&quot;id&quot;:&quot;model_doc/whisper&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/whisper&quot;},{&quot;title&quot;:&quot;XLS-R&quot;,&quot;id&quot;:&quot;model_doc/xls_r&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xls_r&quot;},{&quot;title&quot;:&quot;XLSR-Wav2Vec2&quot;,&quot;id&quot;:&quot;model_doc/xlsr_wav2vec2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xlsr_wav2vec2&quot;}]},{&quot;title&quot;:&quot;Multimodal models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;ALIGN&quot;,&quot;id&quot;:&quot;model_doc/align&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/align&quot;},{&quot;title&quot;:&quot;AltCLIP&quot;,&quot;id&quot;:&quot;model_doc/altclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/altclip&quot;},{&quot;title&quot;:&quot;BLIP&quot;,&quot;id&quot;:&quot;model_doc/blip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip&quot;},{&quot;title&quot;:&quot;BLIP-2&quot;,&quot;id&quot;:&quot;model_doc/blip-2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/blip-2&quot;},{&quot;title&quot;:&quot;BridgeTower&quot;,&quot;id&quot;:&quot;model_doc/bridgetower&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bridgetower&quot;},{&quot;title&quot;:&quot;BROS&quot;,&quot;id&quot;:&quot;model_doc/bros&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/bros&quot;},{&quot;title&quot;:&quot;Chinese-CLIP&quot;,&quot;id&quot;:&quot;model_doc/chinese_clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/chinese_clip&quot;},{&quot;title&quot;:&quot;CLIP&quot;,&quot;id&quot;:&quot;model_doc/clip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clip&quot;},{&quot;title&quot;:&quot;CLIPSeg&quot;,&quot;id&quot;:&quot;model_doc/clipseg&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/clipseg&quot;},{&quot;title&quot;:&quot;Data2Vec&quot;,&quot;id&quot;:&quot;model_doc/data2vec&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/data2vec&quot;},{&quot;title&quot;:&quot;DePlot&quot;,&quot;id&quot;:&quot;model_doc/deplot&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/deplot&quot;},{&quot;title&quot;:&quot;Donut&quot;,&quot;id&quot;:&quot;model_doc/donut&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/donut&quot;},{&quot;title&quot;:&quot;FLAVA&quot;,&quot;id&quot;:&quot;model_doc/flava&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/flava&quot;},{&quot;title&quot;:&quot;GIT&quot;,&quot;id&quot;:&quot;model_doc/git&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/git&quot;},{&quot;title&quot;:&quot;GroupViT&quot;,&quot;id&quot;:&quot;model_doc/groupvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/groupvit&quot;},{&quot;title&quot;:&quot;IDEFICS&quot;,&quot;id&quot;:&quot;model_doc/idefics&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/idefics&quot;},{&quot;title&quot;:&quot;InstructBLIP&quot;,&quot;id&quot;:&quot;model_doc/instructblip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/instructblip&quot;},{&quot;title&quot;:&quot;LayoutLM&quot;,&quot;id&quot;:&quot;model_doc/layoutlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlm&quot;},{&quot;title&quot;:&quot;LayoutLMV2&quot;,&quot;i
d&quot;:&quot;model_doc/layoutlmv2&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv2&quot;},{&quot;title&quot;:&quot;LayoutLMV3&quot;,&quot;id&quot;:&quot;model_doc/layoutlmv3&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutlmv3&quot;},{&quot;title&quot;:&quot;LayoutXLM&quot;,&quot;id&quot;:&quot;model_doc/layoutxlm&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/layoutxlm&quot;},{&quot;title&quot;:&quot;LiLT&quot;,&quot;id&quot;:&quot;model_doc/lilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lilt&quot;},{&quot;title&quot;:&quot;LXMERT&quot;,&quot;id&quot;:&quot;model_doc/lxmert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/lxmert&quot;},{&quot;title&quot;:&quot;MatCha&quot;,&quot;id&quot;:&quot;model_doc/matcha&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/matcha&quot;},{&quot;title&quot;:&quot;MGP-STR&quot;,&quot;id&quot;:&quot;model_doc/mgp-str&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/mgp-str&quot;},{&quot;title&quot;:&quot;Nougat&quot;,&quot;id&quot;:&quot;model_doc/nougat&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/nougat&quot;},{&quot;title&quot;:&quot;OneFormer&quot;,&quot;id&quot;:&quot;model_doc/oneformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/oneformer&quot;},{&quot;title&quot;:&quot;OWL-ViT&quot;,&quot;id&quot;:&quot;model_doc/owlvit&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/owlvit&quot;},{&quot;title&quot;:&quot;Perceiver&quot;,&quot;id&quot;:&quot;model_doc/perceiver&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/perceiver&quot;},{&quot;title&quot;:&quot;Pix2Struct&quot;,&quot;id&quot;:&quot;model_doc/pix2struct&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/pix2struct&quot;},{&quot;title&quot;:&quot;Segment Anything&quot;,&quot;id&quot;:&quot;model_doc/sam&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/sam&quot;},{&quot;title&quot;:&quot;Speech Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/speech-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/speech-encoder-decoder&quot;},{&quot;title&quot;:&quot;TAPAS&quot;,&quot;id&quot;:&quot;model_doc/tapas&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tapas&quot;},{&quot;title&quot;:&quot;TrOCR&quot;,&quot;id&quot;:&quot;model_doc/trocr&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trocr&quot;},{&quot;title&quot;:&quot;TVLT&quot;,&quot;id&quot;:&quot;model_doc/tvlt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/tvlt&quot;},{&quot;title&quot;:&quot;ViLT&quot;,&quot;id&quot;:&quot;model_doc/vilt&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vilt&quot;},{&quot;title&quot;:&quot;Vision Encoder Decoder Models&quot;,&quot;id&quot;:&quot;model_doc/vision-encoder-decoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-encoder-decoder&quot;},{&quot;title&quot;:&quot;Vision Text Dual 
Encoder&quot;,&quot;id&quot;:&quot;model_doc/vision-text-dual-encoder&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/vision-text-dual-encoder&quot;},{&quot;title&quot;:&quot;VisualBERT&quot;,&quot;id&quot;:&quot;model_doc/visual_bert&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/visual_bert&quot;},{&quot;title&quot;:&quot;X-CLIP&quot;,&quot;id&quot;:&quot;model_doc/xclip&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/xclip&quot;}]},{&quot;title&quot;:&quot;Reinforcement learning models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Decision Transformer&quot;,&quot;id&quot;:&quot;model_doc/decision_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/decision_transformer&quot;},{&quot;title&quot;:&quot;Trajectory Transformer&quot;,&quot;id&quot;:&quot;model_doc/trajectory_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/trajectory_transformer&quot;}]},{&quot;title&quot;:&quot;Time series models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Autoformer&quot;,&quot;id&quot;:&quot;model_doc/autoformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/autoformer&quot;},{&quot;title&quot;:&quot;Informer&quot;,&quot;id&quot;:&quot;model_doc/informer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/informer&quot;},{&quot;title&quot;:&quot;Time Series Transformer&quot;,&quot;id&quot;:&quot;model_doc/time_series_transformer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/time_series_transformer&quot;}]},{&quot;title&quot;:&quot;Graph models&quot;,&quot;isExpanded&quot;:false,&quot;sections&quot;:[{&quot;title&quot;:&quot;Graphormer&quot;,&quot;id&quot;:&quot;model_doc/graphormer&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/model_doc/graphormer&quot;}]}]},{&quot;title&quot;:&quot;Internal Helpers&quot;,&quot;isExpanded&quot;:true,&quot;sections&quot;:[{&quot;title&quot;:&quot;Custom Layers and Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/modeling_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/modeling_utils&quot;},{&quot;title&quot;:&quot;Utilities for pipelines&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/pipelines_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/pipelines_utils&quot;},{&quot;title&quot;:&quot;Utilities for Tokenizers&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/tokenization_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/tokenization_utils&quot;},{&quot;title&quot;:&quot;Utilities for Trainer&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/trainer_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/trainer_utils&quot;},{&quot;title&quot;:&quot;Utilities for Generation&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/generation_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/generation_utils&quot;},{&quot;title&quot;:&quot;Utilities for Image Processors&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/image_processing_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/image_processing_utils&quot;},{&quot;title&quot;:&quot;Utilities for Audio 
processing&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/audio_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/audio_utils&quot;},{&quot;title&quot;:&quot;General Utilities&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/file_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/file_utils&quot;},{&quot;title&quot;:&quot;Utilities for Time Series&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;internal/time_series_utils&quot;,&quot;url&quot;:&quot;/docs/transformers/v4.34.0/en/internal/time_series_utils&quot;}]}]}],&quot;chapterId&quot;:&quot;tasks/visual_question_answering&quot;,&quot;docType&quot;:&quot;docs&quot;,&quot;isLoggedIn&quot;:false,&quot;lang&quot;:&quot;en&quot;,&quot;langs&quot;:[&quot;de&quot;,&quot;en&quot;,&quot;es&quot;,&quot;fr&quot;,&quot;it&quot;,&quot;ko&quot;,&quot;pt&quot;,&quot;zh&quot;],&quot;library&quot;:&quot;transformers&quot;,&quot;theme&quot;:&quot;light&quot;,&quot;version&quot;:&quot;v4.34.0&quot;,&quot;versions&quot;:[{&quot;version&quot;:&quot;main&quot;},{&quot;version&quot;:&quot;v4.34.0&quot;},{&quot;version&quot;:&quot;v4.33.3&quot;},{&quot;version&quot;:&quot;v4.33.2&quot;},{&quot;version&quot;:&quot;v4.33.0&quot;},{&quot;version&quot;:&quot;v4.32.1&quot;},{&quot;version&quot;:&quot;v4.32.0&quot;},{&quot;version&quot;:&quot;v4.31.0&quot;},{&quot;version&quot;:&quot;v4.30.0&quot;},{&quot;version&quot;:&quot;v4.29.1&quot;},{&quot;version&quot;:&quot;v4.29.0&quot;},{&quot;version&quot;:&quot;v4.28.1&quot;},{&quot;version&quot;:&quot;v4.28.0&quot;},{&quot;version&quot;:&quot;v4.27.2&quot;},{&quot;version&quot;:&quot;v4.27.1&quot;},{&quot;version&quot;:&quot;v4.27.0&quot;},{&quot;version&quot;:&quot;v4.26.1&quot;},{&quot;version&quot;:&quot;v4.26.0&quot;},{&quot;version&quot;:&quot;v4.25.1&quot;},{&quot;version&quot;:&quot;v4.24.0&quot;},{&quot;version&quot;:&quot;v4.23.1&quot;},{&quot;version&quot;:&quot;v4.23.0&quot;},{&quot;version&quot;:&quot;v4.22.2&quot;},{&quot;version&quot;:&quot;v4.22.1&quot;},{&quot;version&quot;:&quot;v4.22.0&quot;},{&quot;version&quot;:&quot;v4.21.3&quot;},{&quot;version&quot;:&quot;v4.21.2&quot;},{&quot;version&quot;:&quot;v4.21.1&quot;},{&quot;version&quot;:&quot;v4.21.0&quot;},{&quot;version&quot;:&quot;v4.20.1&quot;},{&quot;version&quot;:&quot;v4.20.0&quot;},{&quot;version&quot;:&quot;v4.19.4&quot;},{&quot;version&quot;:&quot;v4.19.3&quot;},{&quot;version&quot;:&quot;v4.19.2&quot;},{&quot;version&quot;:&quot;v4.19.0&quot;},{&quot;version&quot;:&quot;v4.18.0&quot;},{&quot;version&quot;:&quot;v4.17.0&quot;},{&quot;version&quot;:&quot;v4.16.2&quot;},{&quot;version&quot;:&quot;v4.16.1&quot;},{&quot;version&quot;:&quot;v4.16.0&quot;},{&quot;version&quot;:&quot;v4.15.0&quot;},{&quot;version&quot;:&quot;v4.14.1&quot;},{&quot;version&quot;:&quot;v4.13.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.5&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.4&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.12.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.10.1&quot;},{&quot;sphinx&quot;:true,&quot;version&
quot;:&quot;v4.10.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.3&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v4.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v3.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.11.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.10.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.9.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.8.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.7.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.6.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.5.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.4.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.3.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.2&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.1.1&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v2.0.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.2.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.1.0&quot;},{&quot;sphinx&quot;:true,&quot;version&quot;:&quot;v1.0.0&quot;},{&quot;version&q
uot;:&quot;doc-builder-html&quot;}],&quot;title&quot;:&quot;Visual Question Answering&quot;}" data-target="SideMenu"> <div class="z-2 w-full flex-none lg:block lg:h-screen lg:w-[270px] 2xl:w-[300px] false"><div class="shadow-alternate flex h-16 w-full items-center rounded-b-xl border-b bg-white text-lg leading-tight lg:hidden"><div class="flex flex-1 cursor-pointer flex-col justify-center self-stretch pl-6"><p class="text-sm text-gray-400 first-letter:capitalize">Transformers documentation</p> <div class="flex items-center"><p class="font-semibold">Visual Question Answering</p> <svg class="text-xl false" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></div></div> <button class="hover:shadow-alternate group ml-auto mr-6 inline-flex flex-none cursor-pointer rounded-xl border p-2"><svg class="text-gray-500 group-hover:text-gray-700" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg></button></div> <div class="hidden h-32 flex-col justify-between border-r border-b bg-white bg-gradient-to-r p-4 lg:flex from-orange-50 to-white dark:from-gray-900 dark:to-gray-950"><div class="relative "><button class=" " type="button"><h1 class="flex items-center text-lg font-bold leading-tight first-letter:capitalize"><div class="mr-1.5 h-1.5 w-1.5 rounded-full bg-orange-500 flex-none"></div> Transformers <span><svg class="opacity-70 " xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></span></h1> </button> </div> <button class="shadow-alternate flex w-full items-center rounded-full border bg-white px-2 py-1 text-left text-sm text-gray-400 ring-indigo-200 hover:bg-indigo-50 hover:ring-2 dark:border-gray-700 dark:ring-yellow-600 dark:hover:bg-gray-900 dark:hover:text-yellow-500"><svg class="flex-none mr-1.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M30 28.59L22.45 21A11 11 0 1 0 21 22.45L28.59 30zM5 14a9 9 0 1 1 9 9a9 9 0 0 1-9-9z" fill="currentColor"></path></svg> <div>Search documentation</div> <span class="ml-auto rounded border border-gray-200 bg-gray-100 px-0.5 text-xs dark:border-gray-800 dark:bg-gray-800"><kbd class="font-sans">⌘K</kbd></span></button> <div class="flex items-center"><select class="form-input mr-1 !mt-0 !w-20 rounded !border border-gray-200 p-1 text-xs uppercase dark:!text-gray-400"><option value="0">main</option><option value="1">v4.34.0</option><option value="2">v4.33.3</option><option value="3">v4.32.1</option><option value="4">v4.31.0</option><option value="5">v4.30.0</option><option value="6">v4.29.1</option><option value="7">v4.28.1</option><option value="8">v4.27.2</option><option value="9">v4.26.1</option><option 
value="10">v4.25.1</option><option value="11">v4.24.0</option><option value="12">v4.23.1</option><option value="13">v4.22.2</option><option value="14">v4.21.3</option><option value="15">v4.20.1</option><option value="16">v4.19.4</option><option value="17">v4.18.0</option><option value="18">v4.17.0</option><option value="19">v4.16.2</option><option value="20">v4.15.0</option><option value="21">v4.14.1</option><option value="22">v4.13.0</option><option value="23">v4.12.5</option><option value="24">v4.11.3</option><option value="25">v4.10.1</option><option value="26">v4.9.2</option><option value="27">v4.8.2</option><option value="28">v4.7.0</option><option value="29">v4.6.0</option><option value="30">v4.5.1</option><option value="31">v4.4.2</option><option value="32">v4.3.3</option><option value="33">v4.2.2</option><option value="34">v4.1.1</option><option value="35">v4.0.1</option><option value="36">v3.5.1</option><option value="37">v3.4.0</option><option value="38">v3.3.1</option><option value="39">v3.2.0</option><option value="40">v3.1.0</option><option value="41">v3.0.2</option><option value="42">v2.11.0</option><option value="43">v2.10.0</option><option value="44">v2.9.1</option><option value="45">v2.8.0</option><option value="46">v2.7.0</option><option value="47">v2.6.0</option><option value="48">v2.5.1</option><option value="49">v2.4.1</option><option value="50">v2.3.0</option><option value="51">v2.2.2</option><option value="52">v2.1.1</option><option value="53">v2.0.0</option><option value="54">v1.2.0</option><option value="55">v1.1.0</option><option value="56">v1.0.0</option><option value="57">doc-builder-html</option></select> <select class="form-input mr-1 rounded border-gray-200 p-1 text-xs dark:!text-gray-400 !w-12 !mt-0 !border"><option value="de">DE</option><option value="en">EN</option><option value="es">ES</option><option value="fr">FR</option><option value="it">IT</option><option value="ko">KO</option><option value="pt">PT</option><option value="zh">ZH</option></select> <div class="relative inline-block"><button class="rounded-full border border-gray-100 py-1 pl-2 pr-0.5 flex items-center text-sm text-gray-500 bg-white hover:bg-yellow-50 hover:border-yellow-200 dark:hover:bg-gray-800 dark:hover:border-gray-950 " type="button"><svg class="mr-1.5 text-yellow-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24" fill="currentColor"><path d="M6.05 4.14l-.39-.39a.993.993 0 0 0-1.4 0l-.01.01a.984.984 0 0 0 0 1.4l.39.39c.39.39 1.01.39 1.4 0l.01-.01a.984.984 0 0 0 0-1.4zM3.01 10.5H1.99c-.55 0-.99.44-.99.99v.01c0 .55.44.99.99.99H3c.56.01 1-.43 1-.98v-.01c0-.56-.44-1-.99-1zm9-9.95H12c-.56 0-1 .44-1 .99v.96c0 .55.44.99.99.99H12c.56.01 1-.43 1-.98v-.97c0-.55-.44-.99-.99-.99zm7.74 3.21c-.39-.39-1.02-.39-1.41-.01l-.39.39a.984.984 0 0 0 0 1.4l.01.01c.39.39 1.02.39 1.4 0l.39-.39a.984.984 0 0 0 0-1.4zm-1.81 15.1l.39.39a.996.996 0 1 0 1.41-1.41l-.39-.39a.993.993 0 0 0-1.4 0c-.4.4-.4 1.02-.01 1.41zM20 11.49v.01c0 .55.44.99.99.99H22c.55 0 .99-.44.99-.99v-.01c0-.55-.44-.99-.99-.99h-1.01c-.55 0-.99.44-.99.99zM12 5.5c-3.31 0-6 2.69-6 6s2.69 6 6 6s6-2.69 6-6s-2.69-6-6-6zm-.01 16.95H12c.55 0 .99-.44.99-.99v-.96c0-.55-.44-.99-.99-.99h-.01c-.55 0-.99.44-.99.99v.96c0 .55.44.99.99.99zm-7.74-3.21c.39.39 1.02.39 1.41 0l.39-.39a.993.993 0 0 0 0-1.4l-.01-.01a.996.996 0 0 0-1.41 0l-.39.39c-.38.4-.38 1.02.01 1.41z"></path></svg> </button> </div> <a 
href="https://github.com/huggingface/transformers" class="group ml-auto text-xs text-gray-500 hover:text-gray-700 hover:underline dark:hover:text-gray-300"><svg class="inline-block text-gray-500 group-hover:text-gray-700 dark:group-hover:text-gray-300 mr-1.5 -mt-1 w-4 h-4" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1.03em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 250"><path d="M128.001 0C57.317 0 0 57.307 0 128.001c0 56.554 36.676 104.535 87.535 121.46c6.397 1.185 8.746-2.777 8.746-6.158c0-3.052-.12-13.135-.174-23.83c-35.61 7.742-43.124-15.103-43.124-15.103c-5.823-14.795-14.213-18.73-14.213-18.73c-11.613-7.944.876-7.78.876-7.78c12.853.902 19.621 13.19 19.621 13.19c11.417 19.568 29.945 13.911 37.249 10.64c1.149-8.272 4.466-13.92 8.127-17.116c-28.431-3.236-58.318-14.212-58.318-63.258c0-13.975 5-25.394 13.188-34.358c-1.329-3.224-5.71-16.242 1.24-33.874c0 0 10.749-3.44 35.21 13.121c10.21-2.836 21.16-4.258 32.038-4.307c10.878.049 21.837 1.47 32.066 4.307c24.431-16.56 35.165-13.12 35.165-13.12c6.967 17.63 2.584 30.65 1.255 33.873c8.207 8.964 13.173 20.383 13.173 34.358c0 49.163-29.944 59.988-58.447 63.157c4.591 3.972 8.682 11.762 8.682 23.704c0 17.126-.148 30.91-.148 35.126c0 3.407 2.304 7.398 8.792 6.14C219.37 232.5 256 184.537 256 128.002C256 57.307 198.691 0 128.001 0zm-80.06 182.34c-.282.636-1.283.827-2.194.39c-.929-.417-1.45-1.284-1.15-1.922c.276-.655 1.279-.838 2.205-.399c.93.418 1.46 1.293 1.139 1.931zm6.296 5.618c-.61.566-1.804.303-2.614-.591c-.837-.892-.994-2.086-.375-2.66c.63-.566 1.787-.301 2.626.591c.838.903 1 2.088.363 2.66zm4.32 7.188c-.785.545-2.067.034-2.86-1.104c-.784-1.138-.784-2.503.017-3.05c.795-.547 2.058-.055 2.861 1.075c.782 1.157.782 2.522-.019 3.08zm7.304 8.325c-.701.774-2.196.566-3.29-.49c-1.119-1.032-1.43-2.496-.726-3.27c.71-.776 2.213-.558 3.315.49c1.11 1.03 1.45 2.505.701 3.27zm9.442 2.81c-.31 1.003-1.75 1.459-3.199 1.033c-1.448-.439-2.395-1.613-2.103-2.626c.301-1.01 1.747-1.484 3.207-1.028c1.446.436 2.396 1.602 2.095 2.622zm10.744 1.193c.036 1.055-1.193 1.93-2.715 1.95c-1.53.034-2.769-.82-2.786-1.86c0-1.065 1.202-1.932 2.733-1.958c1.522-.03 2.768.818 2.768 1.868zm10.555-.405c.182 1.03-.875 2.088-2.387 2.37c-1.485.271-2.861-.365-3.05-1.386c-.184-1.056.893-2.114 2.376-2.387c1.514-.263 2.868.356 3.061 1.403z" fill="currentColor"></path></svg> 112,792</a></div></div> <nav class="top-32 hidden lg:flex absolute bottom-0 left-0 w-full flex-col overflow-y-auto border-r px-4 pt-3 pb-16 text-[0.95rem] lg:w-[270px] 2xl:w-[300px]"> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Get started</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/index">🤗 Transformers </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/quicktour">Quick tour </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 
first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/installation">Installation </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Tutorials</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/pipeline_tutorial">Run inference with pipelines </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/autoclass_tutorial">Write portable code with AutoClass </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/preprocessing">Preprocess data </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/training">Fine-tune a pretrained model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/run_scripts">Train with a script </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/accelerate">Set up distributed training with 🤗 Accelerate </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/peft">Load and train adapters with 🤗 PEFT </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/model_sharing">Share your model </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/transformers_agents">Agents </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-2" href="/docs/transformers/v4.34.0/en/llm_tutorial">Generation with LLMs </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Task Guides</span> </span></span></div></div> <div class="flex flex-col"><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 
hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Natural Language Processing</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Audio</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Computer Vision</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Multimodal</span> </span></span></div></div> <div class="flex flex-col"><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/tasks/image_captioning">Image captioning </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/tasks/document_question_answering">Document Question Answering </a><a data-sveltekit-reload="" class="rounded-xl bg-gradient-to-br from-black to-gray-900 py-1 pr-2 pl-2 text-white first:mt-1 last:mb-4 dark:from-gray-800 dark:to-gray-900 ml-4" href="/docs/transformers/v4.34.0/en/tasks/visual_question_answering">Visual Question Answering </a><a data-sveltekit-reload="" class="transform py-1 pr-2 pl-2 text-gray-500 first:mt-1 last:mb-4 hover:translate-x-px hover:text-black dark:hover:text-gray-300 ml-4" href="/docs/transformers/v4.34.0/en/tasks/text-to-speech">Text to speech </a> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Generation</span> </span></span></div></div> <div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-2"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] false"><span><span class="inline-block space-x-1 leading-5"><span>Prompting</span> </span></span></div></div> </div><div class="group flex cursor-pointer items-center pl-2 text-[0.8rem] font-semibold uppercase leading-9 hover:text-gray-700 dark:hover:text-gray-300 ml-0"><div class="flex after:absolute after:right-4 after:text-gray-500 group-hover:after:content-['▶'] after:rotate-90 after:transform"><span><span class="inline-block space-x-1 leading-5"><span>Developer 
# Visual Question Answering
Colab" class="!m-0" src="https://colab.research.google.com/assets/colab-badge.svg"></button> </div> <div class="relative colab-dropdown "><button class=" " type="button"><img alt="Open In Studio Lab" class="!m-0" src="https://studiolab.sagemaker.aws/studiolab.svg"></button> </div></div> <p data-svelte-h="svelte-vljp1h">Visual Question Answering (VQA) is the task of answering open-ended questions based on an image. The input to models supporting this task is typically a combination of an image and a question, and the output is an answer expressed in natural language.</p> <p data-svelte-h="svelte-1r09kty">Some noteworthy use case examples for VQA include:</p> <ul data-svelte-h="svelte-geftd8"><li>Accessibility applications for visually impaired individuals.</li> <li>Education: posing questions about visual materials presented in lectures or textbooks. VQA can also be utilized in interactive museum exhibits or historical sites.</li> <li>Customer service and e-commerce: VQA can enhance user experience by letting users ask questions about products.</li> <li>Image retrieval: VQA models can be used to retrieve images with specific characteristics. For example, the user can ask “Is there a dog?” to find all images with dogs from a set of images.</li></ul> <p data-svelte-h="svelte-jr2b5g">In this guide you’ll learn how to:</p> <ul data-svelte-h="svelte-168wjvr"><li>Fine-tune a classification VQA model, specifically <a href="../model_doc/vilt">ViLT</a>, on the <a href="https://huggingface.co/datasets/Graphcore/vqa" rel="nofollow"><code>Graphcore/vqa</code> dataset</a>.</li> <li>Use your fine-tuned ViLT for inference.</li> <li>Run zero-shot VQA inference with a generative model, like BLIP-2.</li></ul> <h2 class="relative group"><a id="finetuning-vilt" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#finetuning-vilt"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-470qhg">Fine-tuning ViLT</span></h2> <p data-svelte-h="svelte-132kvya">ViLT model incorporates text embeddings into a Vision Transformer (ViT), allowing it to have a minimal design for Vision-and-Language Pre-training (VLP). This model can be used for several downstream tasks. For the VQA task, a classifier head is placed on top (a linear layer on top of the final hidden state of the <code>[CLS]</code> token) and randomly initialized. Visual Question Answering is thus treated as a <strong>classification problem</strong>.</p> <p data-svelte-h="svelte-x085ix">More recent models, such as BLIP, BLIP-2, and InstructBLIP, treat VQA as a generative task. 
More recent models, such as BLIP, BLIP-2, and InstructBLIP, treat VQA as a generative task. Later in this guide we illustrate how to use them for zero-shot VQA inference.

Before you begin, make sure you have all the necessary libraries installed.

```bash
pip install -q transformers datasets
```

We encourage you to share your model with the community. Log in to your Hugging Face account to upload it to the 🤗 Hub. When prompted, enter your token to log in:

```py
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```

Let's define the model checkpoint as a global variable.
height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>model_checkpoint = <span class="hljs-string">"dandelin/vilt-b32-mlm"</span></pre></div> <h2 class="relative group"><a id="load-the-data" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#load-the-data"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span data-svelte-h="svelte-1sbjexa">Load the data</span></h2> <p data-svelte-h="svelte-n8sxxt">For illustration purposes, in this guide we use a very small sample of the annotated visual question answering <code>Graphcore/vqa</code> dataset. You can find the full dataset on <a href="https://huggingface.co/datasets/Graphcore/vqa" rel="nofollow">🤗 Hub</a>.</p> <p data-svelte-h="svelte-b7b4c0">As an alternative to the <a href="https://huggingface.co/datasets/Graphcore/vqa" rel="nofollow"><code>Graphcore/vqa</code> dataset</a>, you can download the same data manually from the official <a href="https://visualqa.org/download.html" rel="nofollow">VQA dataset page</a>. 
If you prefer to follow the tutorial with your own custom data, check out the [Create an image dataset](https://huggingface.co/docs/datasets/image_dataset#loading-script) guide in the 🤗 Datasets documentation.

Let's load the first 200 examples from the validation split and explore the dataset's features:

```py
>>> from datasets import load_dataset

>>> dataset = load_dataset("Graphcore/vqa", split="validation[:200]")
>>> dataset
Dataset({
    features: ['question', 'question_type', 'question_id', 'image_id', 'answer_type', 'label'],
    num_rows: 200
})
```

Let's take a look at an example to understand the dataset's features:
```py
>>> dataset[0]
{'question': 'Where is he looking?',
 'question_type': 'none of the above',
 'question_id': 262148000,
 'image_id': '/root/.cache/huggingface/datasets/downloads/extracted/ca733e0e000fb2d7a09fbcc94dbfe7b5a30750681d0e965f8e0a23b1c2f98c75/val2014/COCO_val2014_000000262148.jpg',
 'answer_type': 'other',
 'label': {'ids': ['at table', 'down', 'skateboard', 'table'],
  'weights': [0.30000001192092896, 1.0, 0.30000001192092896, 0.30000001192092896]}}
```

The features relevant to the task include:

- `question`: the question to be answered from the image
- `image_id`: the path to the image the question refers to
- `label`: the annotations

We can remove the rest of the features as they won't be necessary:

```py
>>> dataset = dataset.remove_columns(['question_type', 'question_id', 'answer_type'])
```

As you can see, the `label` feature contains several answers to the same question (called `ids` here) collected by different human annotators. This is because the answer to a question can be subjective. In this case, the question is "where is he looking?".
Some people annotated this with "down", others with "at table", another one with "skateboard", etc.

Take a look at the image and consider which answer you would give:

```py
>>> from PIL import Image

>>> image = Image.open(dataset[0]['image_id'])
>>> image
```

<div class="flex justify-center">
     <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/vqa-example.png" alt="VQA Image Example"/>
</div>

Due to the questions' and answers' ambiguity, datasets like this are treated as a multi-label classification problem (as multiple answers are possibly valid). Moreover, rather than just creating a one-hot encoded vector, one creates a soft encoding, based on the number of times a certain answer appeared in the annotations.

For instance, in the example above, because the answer "down" is selected far more often than the other answers, it has a score (called `weight` in the dataset) of 1.0, while the rest of the answers have scores < 1.0.
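To make the soft encoding concrete, here is a minimal sketch of how an annotation like the one above turns into a target vector. The answer vocabulary below is made up purely for illustration; the preprocessing function later in this guide builds the real targets in the same way:

```py
>>> import torch

>>> # hypothetical answer vocabulary, for illustration only
>>> answer2idx = {"down": 0, "at table": 1, "skateboard": 2, "table": 3, "up": 4}

>>> annotation = {"ids": ["at table", "down", "skateboard", "table"],
...               "weights": [0.3, 1.0, 0.3, 0.3]}

>>> # start from an all-zero vector over the vocabulary and fill in the annotation weights
>>> target = torch.zeros(len(answer2idx))
>>> for answer, weight in zip(annotation["ids"], annotation["weights"]):
...     target[answer2idx[answer]] = weight

>>> target
tensor([1.0000, 0.3000, 0.3000, 0.3000, 0.0000])
```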
To later instantiate the model with an appropriate classification head, let's create two dictionaries: one that maps the label name to an integer, and another that maps the integer back to the label name:

```py
>>> import itertools

>>> labels = [item['ids'] for item in dataset['label']]
>>> flattened_labels = list(itertools.chain(*labels))
>>> unique_labels = list(set(flattened_labels))

>>> label2id = {label: idx for idx, label in enumerate(unique_labels)}
>>> id2label = {idx: label for label, idx in label2id.items()}
```

Now that we have the mappings, we can replace the string answers with their ids, and flatten the dataset for more convenient preprocessing.
d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">def</span> <span class="hljs-title function_">replace_ids</span>(<span class="hljs-params">inputs</span>): <span class="hljs-meta">... </span> inputs[<span class="hljs-string">"label"</span>][<span class="hljs-string">"ids"</span>] = [label2id[x] <span class="hljs-keyword">for</span> x <span class="hljs-keyword">in</span> inputs[<span class="hljs-string">"label"</span>][<span class="hljs-string">"ids"</span>]] <span class="hljs-meta">... </span> <span class="hljs-keyword">return</span> inputs <span class="hljs-meta">&gt;&gt;&gt; </span>dataset = dataset.<span class="hljs-built_in">map</span>(replace_ids) <span class="hljs-meta">&gt;&gt;&gt; </span>flat_dataset = dataset.flatten() <span class="hljs-meta">&gt;&gt;&gt; </span>flat_dataset.features {<span class="hljs-string">'question'</span>: Value(dtype=<span class="hljs-string">'string'</span>, <span class="hljs-built_in">id</span>=<span class="hljs-literal">None</span>), <span class="hljs-string">'image_id'</span>: Value(dtype=<span class="hljs-string">'string'</span>, <span class="hljs-built_in">id</span>=<span class="hljs-literal">None</span>), <span class="hljs-string">'label.ids'</span>: <span class="hljs-type">Sequence</span>(feature=Value(dtype=<span class="hljs-string">'int64'</span>, <span class="hljs-built_in">id</span>=<span class="hljs-literal">None</span>), length=-<span class="hljs-number">1</span>, <span class="hljs-built_in">id</span>=<span class="hljs-literal">None</span>), <span class="hljs-string">'label.weights'</span>: <span class="hljs-type">Sequence</span>(feature=Value(dtype=<span class="hljs-string">'float64'</span>, <span class="hljs-built_in">id</span>=<span class="hljs-literal">None</span>), length=-<span class="hljs-number">1</span>, <span class="hljs-built_in">id</span>=<span class="hljs-literal">None</span>)}</pre></div> <h2 class="relative group"><a id="preprocessing-data" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#preprocessing-data"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 
The next step is to load a ViLT processor to prepare the image and text data for the model. [ViltProcessor](/docs/transformers/v4.34.0/en/model_doc/vilt#transformers.ViltProcessor) wraps a BERT tokenizer and a ViLT image processor into a convenient single processor:

```py
>>> from transformers import ViltProcessor

>>> processor = ViltProcessor.from_pretrained(model_checkpoint)
```

To preprocess the data, we need to encode the images and questions using the [ViltProcessor](/docs/transformers/v4.34.0/en/model_doc/vilt#transformers.ViltProcessor). The processor will use the [BertTokenizerFast](/docs/transformers/v4.34.0/en/model_doc/bert#transformers.BertTokenizerFast) to tokenize the text and create `input_ids`, `attention_mask` and `token_type_ids` for the text data. As for images, the processor will leverage [ViltImageProcessor](/docs/transformers/v4.34.0/en/model_doc/vilt#transformers.ViltImageProcessor) to resize and normalize the image, and create `pixel_values` and `pixel_mask`.
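If you'd like to see what the processor returns before writing the full preprocessing function, you can try it on a single example. This is just a quick sanity check; the exact tensor sizes depend on the checkpoint's configuration:

```py
>>> example = flat_dataset[0]
>>> encoding = processor(Image.open(example["image_id"]), example["question"], padding="max_length", truncation=True, return_tensors="pt")

>>> # text features come from the tokenizer, image features from the image processor
>>> list(encoding.keys())
['input_ids', 'token_type_ids', 'attention_mask', 'pixel_values', 'pixel_mask']
```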
All these preprocessing steps are done under the hood; we only need to call the `processor`. However, we still need to prepare the target labels. In this representation, each element corresponds to a possible answer (label). For correct answers, the element holds its respective score (weight), while the remaining elements are set to zero.

The following function applies the `processor` to the images and questions and formats the labels as described above:

```py
>>> import torch

>>> def preprocess_data(examples):
...     image_paths = examples['image_id']
...     images = [Image.open(image_path) for image_path in image_paths]
...     texts = examples['question']

...     encoding = processor(images, texts, padding="max_length", truncation=True, return_tensors="pt")

...     for k, v in encoding.items():
...         encoding[k] = v.squeeze()

...     targets = []

...     for labels, scores in zip(examples['label.ids'], examples['label.weights']):
...         target = torch.zeros(len(id2label))

...         for label, score in zip(labels, scores):
...             target[label] = score

...         targets.append(target)

...     encoding["labels"] = targets

...     return encoding
```
To apply the preprocessing function over the entire dataset, use 🤗 Datasets' `map` function. You can speed up `map` by setting `batched=True` to process multiple elements of the dataset at once. At this point, feel free to remove the columns you don't need.

```py
>>> processed_dataset = flat_dataset.map(preprocess_data, batched=True, remove_columns=['question', 'image_id', 'label.ids', 'label.weights'])
>>> processed_dataset
Dataset({
    features: ['input_ids', 'token_type_ids', 'attention_mask', 'pixel_values', 'pixel_mask', 'labels'],
    num_rows: 200
})
```

As a final step, create a batch of examples using [DefaultDataCollator](/docs/transformers/v4.34.0/en/main_classes/data_collator#transformers.DefaultDataCollator):

```py
>>> from transformers import DefaultDataCollator

>>> data_collator = DefaultDataCollator()
```
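If you want to verify that collation works before training, you can assemble a small batch by hand. The batch size of 2 below is arbitrary and only used for inspection:

```py
>>> batch = data_collator([processed_dataset[i] for i in range(2)])

>>> # every field is stacked into a tensor with a leading batch dimension
>>> list(batch.keys())
['input_ids', 'token_type_ids', 'attention_mask', 'pixel_values', 'pixel_mask', 'labels']
```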
To apply the preprocessing function over the entire dataset, use 🤗 Datasets `map` function. You can speed up `map` by setting `batched=True` to process multiple elements of the dataset at once. At this point, feel free to remove the columns you don't need.

```py
>>> processed_dataset = flat_dataset.map(preprocess_data, batched=True, remove_columns=['question','question_type', 'question_id', 'image_id', 'answer_type', 'label.ids', 'label.weights'])
>>> processed_dataset
Dataset({
    features: ['input_ids', 'token_type_ids', 'attention_mask', 'pixel_values', 'pixel_mask', 'labels'],
    num_rows: 200
})
```

As a final step, create a batch of examples using [DefaultDataCollator](/docs/transformers/v4.34.0/en/main_classes/data_collator#transformers.DefaultDataCollator):

```py
>>> from transformers import DefaultDataCollator

>>> data_collator = DefaultDataCollator()
```
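If you'd like to inspect what a collated batch looks like before training, here is a minimal sketch. It assumes PyTorch's `DataLoader` and an arbitrary batch size of 2; `Trainer` builds an equivalent dataloader internally, so this step is purely optional:

```py
>>> from torch.utils.data import DataLoader

>>> # collate two processed examples into a single batch of stacked tensors
>>> loader = DataLoader(processed_dataset, batch_size=2, collate_fn=data_collator)
>>> batch = next(iter(loader))
>>> {k: v.shape for k, v in batch.items()}  # input_ids, pixel_values, labels, ...
```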
## Train the model

You're ready to start training your model now! Load ViLT with [ViltForQuestionAnswering](/docs/transformers/v4.34.0/en/model_doc/vilt#transformers.ViltForQuestionAnswering). Specify the number of labels along with the label mappings:

```py
>>> from transformers import ViltForQuestionAnswering

>>> model = ViltForQuestionAnswering.from_pretrained(model_checkpoint, num_labels=len(id2label), id2label=id2label, label2id=label2id)
```

At this point, only three steps remain:

1. Define your training hyperparameters in [TrainingArguments](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.TrainingArguments):

```py
>>> from transformers import TrainingArguments

>>> repo_id = "MariaK/vilt_finetuned_200"

>>> training_args = TrainingArguments(
...     output_dir=repo_id,
...     per_device_train_batch_size=4,
...     num_train_epochs=20,
...     save_steps=200,
...     logging_steps=50,
...     learning_rate=5e-5,
...     save_total_limit=2,
...     remove_unused_columns=False,
...     push_to_hub=True,
... )
```

2. Pass the training arguments to [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) along with the model, dataset, processor, and data collator.

```py
>>> from transformers import Trainer

>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     data_collator=data_collator,
...     train_dataset=processed_dataset,
...     tokenizer=processor,
</span>)</pre></div> <ol start="3" data-svelte-h="svelte-zd26ik"><li>Call <a href="/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.train">train()</a> to finetune your model.</li></ol> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>trainer.train() </pre></div> <p data-svelte-h="svelte-15ob2ju">Once training is completed, share your model to the Hub with the <a href="/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer.push_to_hub">push_to_hub()</a> method to share your final model on the 🤗 Hub:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>trainer.push_to_hub()</pre></div> <h2 class="relative group"><a id="inference" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#inference"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 
## Inference

Now that you have fine-tuned a ViLT model and uploaded it to the 🤗 Hub, you can use it for inference. The simplest way to try out your fine-tuned model for inference is to use it in a [Pipeline](/docs/transformers/v4.34.0/en/main_classes/pipelines#transformers.Pipeline).

```py
>>> from transformers import pipeline

>>> pipe = pipeline("visual-question-answering", model="MariaK/vilt_finetuned_200")
```
The model in this guide has only been trained on 200 examples, so don't expect a lot from it. Let's see if it at least learned something from the data, and take the first example from the dataset to illustrate inference:

```py
>>> example = dataset[0]
>>> image = Image.open(example['image_id'])
>>> question = example['question']
>>> print(question)
>>> pipe(image, question, top_k=1)
"Where is he looking?"
[{'score': 0.5498199462890625, 'answer': 'down'}]
```
Even though it is not very confident, the model has indeed learned something. With more examples and longer training, you'll get far better results!

You can also manually replicate the results of the pipeline if you'd like:

1. Take an image and a question, and prepare them for the model using the processor from your model.
2. Forward the result of preprocessing through the model.
3. From the logits, get the most likely answer's id, and find the actual answer in `id2label`.

```py
>>> processor = ViltProcessor.from_pretrained("MariaK/vilt_finetuned_200")

>>> image = Image.open(example['image_id'])
>>> question = example['question']

>>> # prepare inputs
>>> inputs = processor(image, question, return_tensors="pt")

>>> model = ViltForQuestionAnswering.from_pretrained("MariaK/vilt_finetuned_200")

>>> # forward pass
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> logits = outputs.logits
>>> idx = logits.argmax(-1).item()
>>> print("Predicted answer:", model.config.id2label[idx])
Predicted answer: down
```
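If you also want a confidence score like the one returned by the pipeline, one way to approximate it is to apply a sigmoid to the logits of this multi-label classification head. Treat this as a sketch rather than the pipeline's exact implementation:

```py
>>> # sigmoid turns each logit into an independent per-answer probability
>>> probs = torch.sigmoid(logits)[0]
>>> top_score, top_idx = probs.topk(1)
>>> print(model.config.id2label[top_idx.item()], top_score.item())
```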
## Zero-shot VQA

The previous model treated VQA as a classification task. Some recent models, such as BLIP, BLIP-2, and InstructBLIP, approach VQA as a generative task. Let's take [BLIP-2](../model_doc/blip-2) as an example. It introduced a new visual-language pre-training paradigm in which any combination of pre-trained vision encoder and LLM can be used (learn more in the [BLIP-2 blog post](https://huggingface.co/blog/blip-2)). This enables achieving state-of-the-art results on multiple visual-language tasks including visual question answering.

Let's illustrate how you can use this model for VQA. First, let's load the model. Here we'll explicitly send the model to a GPU, if available, which we didn't need to do earlier when training, as [Trainer](/docs/transformers/v4.34.0/en/main_classes/trainer#transformers.Trainer) handles this automatically:

```py
>>> from transformers import AutoProcessor, Blip2ForConditionalGeneration
>>> import torch

>>> processor = AutoProcessor.from_pretrained("Salesforce/blip2-opt-2.7b")
>>> model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16)
>>> device = "cuda" if torch.cuda.is_available() else "cpu"
>>> model.to(device)
```

The model takes image and text as input, so let's use the exact same image/question pair from the first example in the VQA dataset:
border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>example = dataset[<span class="hljs-number">0</span>] <span class="hljs-meta">&gt;&gt;&gt; </span>image = Image.<span class="hljs-built_in">open</span>(example[<span class="hljs-string">'image_id'</span>]) <span class="hljs-meta">&gt;&gt;&gt; </span>question = example[<span class="hljs-string">'question'</span>]</pre></div> <p data-svelte-h="svelte-n2zykh">To use BLIP-2 for visual question answering task, the textual prompt has to follow a specific format: <code>Question: {} Answer:</code>.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>prompt = <span class="hljs-string">f"Question: <span class="hljs-subst">{question}</span> Answer:"</span> </pre></div> <p data-svelte-h="svelte-10l7d2y">Now we need to preprocess the image/prompt with the model’s processor, pass the processed input through the model, and decode the output:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><span class="hljs-meta">&gt;&gt;&gt; </span>inputs = processor(image, text=prompt, return_tensors=<span class="hljs-string">"pt"</span>).to(device, torch.float16) <span 
class="hljs-meta">&gt;&gt;&gt; </span>generated_ids = model.generate(**inputs, max_new_tokens=<span class="hljs-number">10</span>) <span class="hljs-meta">&gt;&gt;&gt; </span>generated_text = processor.batch_decode(generated_ids, skip_special_tokens=<span class="hljs-literal">True</span>)[<span class="hljs-number">0</span>].strip() <span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-built_in">print</span>(generated_text) <span class="hljs-string">"He is looking at the crowd"</span> </pre></div> <p data-svelte-h="svelte-1wjg6co">As you can see, the model recognized the crowd, and the direction of the face (looking down), however, it seems to miss the fact the crowd is behind the skater. Still, in cases where acquiring human-annotated datasets is not feasible, this approach can quickly produce useful results.</p> <p></p> <div id="svelte-announcer" aria-live="assertive" aria-atomic="true" style="position: absolute; left: 0px; top: 0px; clip: rect(0px, 0px, 0px, 0px); clip-path: inset(50%); overflow: hidden; white-space: nowrap; width: 1px; height: 1px;"></div></div> <div class="mx-auto mt-16 flex max-w-4xl items-center pb-8 font-sans font-medium leading-6 xl:mt-32"><a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/tasks/document_question_answering" class="mr-8 flex transform items-center text-gray-600 transition-all hover:-translate-x-px hover:text-gray-900 dark:hover:text-gray-300"><span class="mr-2 translate-y-px">←</span>Document Question Answering</a> <a data-sveltekit-reload="" href="/docs/transformers/v4.34.0/en/tasks/text-to-speech" class="ml-auto flex transform items-center text-right text-gray-600 transition-all hover:translate-x-px hover:text-gray-900 dark:hover:text-gray-300">Text to speech<span class="ml-2 translate-y-px">→</span></a></div></div></div> <div class="sticky top-0 self-start"><div class="SVELTE_HYDRATER contents" data-props="{&quot;chapter&quot;:{&quot;title&quot;:&quot;Visual Question Answering&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;visual-question-answering&quot;,&quot;url&quot;:&quot;#visual-question-answering&quot;,&quot;sections&quot;:[{&quot;title&quot;:&quot;Fine-tuning ViLT&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;finetuning-vilt&quot;,&quot;url&quot;:&quot;#finetuning-vilt&quot;},{&quot;title&quot;:&quot;Load the data&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;load-the-data&quot;,&quot;url&quot;:&quot;#load-the-data&quot;},{&quot;title&quot;:&quot;Preprocessing data&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;preprocessing-data&quot;,&quot;url&quot;:&quot;#preprocessing-data&quot;},{&quot;title&quot;:&quot;Train the model&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;train-the-model&quot;,&quot;url&quot;:&quot;#train-the-model&quot;},{&quot;title&quot;:&quot;Inference&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;inference&quot;,&quot;url&quot;:&quot;#inference&quot;},{&quot;title&quot;:&quot;Zero-shot VQA&quot;,&quot;isExpanded&quot;:true,&quot;id&quot;:&quot;zeroshot-vqa&quot;,&quot;url&quot;:&quot;#zeroshot-vqa&quot;}]}}" data-target="SubSideMenu"><nav class="hidden h-screen w-[270px] flex-none flex-col space-y-3 overflow-y-auto break-words border-l pt-24 pl-6 pr-10 pb-16 text-sm lg:flex 2xl:w-[305px]"><a href="#visual-question-answering" class=" text-gray-400 transform hover:translate-x-px hover:text-gray-700 dark:hover:text-gray-300" id="nav-visual-question-answering"><wbr>Visual <wbr>Question <wbr>Answering</a> <a href="#finetuning-vilt" class="pl-4 
src="https://js.stripe.com/v3/m-outer-27c67c0d52761104439bb051c7856ab1.html#url=https%3A%2F%2Fhuggingface.co%2Fdocs%2Ftransformers%2Fv4.34.0%2Fen%2Ftasks%2Fvisual_question_answering&amp;title=Visual%20Question%20Answering&amp;referrer=&amp;muid=NA&amp;sid=NA&amp;version=6&amp;preview=false" aria-hidden="true" tabindex="-1" style="border: none !important; margin: 0px !important; padding: 0px !important; width: 1px !important; min-width: 100% !important; overflow: hidden !important; display: block !important; visibility: hidden !important; position: fixed !important; height: 1px !important; pointer-events: none !important; user-select: none !important;"></iframe></body></html>
2023-10-05T13:33:57.228Z